[ { "msg_contents": "Hi all,\n\nWhile reviewing the area, I have bumped into the following bit in\nfe-secure-openssl.c and be-secure-openssl.c:\n- /* OpenSSL 0.96 does not support X509_V_FLAG_CRL_CHECK */\n-#ifdef X509_V_FLAG_CRL_CHECK\n[... stuff ...]\n\nI think that this did not get removed because of the incorrect version\nnumber in the comment, which should have been 0.9.6 from the start.\n\nAnyway, let's clean up this code as per the attached. This set of\nflags indeed exists since 0.9.7. Any thoughts or objections?\n--\nMichael", "msg_date": "Fri, 27 Sep 2019 12:23:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Cleanup code related to OpenSSL <= 0.9.6 in fe/be-secure-openssl.c" }, { "msg_contents": "On 2019-09-27 05:23, Michael Paquier wrote:\n> While reviewing the area, I have bumped into the following bit in\n> fe-secure-openssl.c and be-secure-openssl.c:\n> - /* OpenSSL 0.96 does not support X509_V_FLAG_CRL_CHECK */\n> -#ifdef X509_V_FLAG_CRL_CHECK\n> [... stuff ...]\n> \n> I think that this did not get removed because of the incorrect version\n> number in the comment, which should have been 0.9.6 from the start.\n> \n> Anyway, let's clean up this code as per the attached. This set of\n> flags indeed exists since 0.9.7. 
Any thoughts or objections?\n\nYes, it seems OK to clean this up in master.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 27 Sep 2019 15:46:09 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Cleanup code related to OpenSSL <= 0.9.6 in\n fe/be-secure-openssl.c" }, { "msg_contents": "On Fri, Sep 27, 2019 at 03:46:09PM +0200, Peter Eisentraut wrote:\n> Yes, it seems OK to clean this up in master.\n\nThanks, applied on HEAD.\n--\nMichael", "msg_date": "Sat, 28 Sep 2019 15:28:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Cleanup code related to OpenSSL <= 0.9.6 in\n fe/be-secure-openssl.c" } ]
[ { "msg_contents": "Hi,\n\nUnfortunately I found a performance regression for JITed query\ncompilation introduced in 12, compared to 11. Fixed in one of the\nattached patches (v1-0009-Fix-determination-when-tuple-deforming-can-be-JIT.patch\n- which needs a better commit message).\n\nThe first question is when to push that fix. I'm inclined to just do so\nnow - as we still do JITed tuple deforming in most cases, as well as\ndoing so in 11 in the places this patch fixes, the risk of that seems\nlow. But I can also see an arguments for waiting after 12.0.\n\n\nFor me the bigger question is about how to make sure we can write tests\ndetermining which parts of the querytree are JIT compiled and which are\nnot. There's the above bug, and I'm also hunting a regression introduced\nsomewhere during 11's lifetime, which suggests to me that we need better\ncoverage. I also want to add new JIT logic, making this even more\nimportant.\n\n\nThe reason that 11 didn't have tests verifying that certain parts of the\nplan tree are JIT compiled is that EXPLAIN doesn't currently show the\nrelevant information, and it's not that trivial to do so.\n\nWhat I'd like to do is to add additional, presumably optional, output to\nEXPLAIN showing additional information about expressions.\n\nThere's two major parts to doing so:\n\n1) Find a way to represent the additional information attached to\n expressions, and provide show_expression et al with the ExprState to\n be able to do so. The additional information I think is necessary is\n a) is expression jit compiled\n b-d) is scan/outer/inner tuple deforming necessary, and if so, JIT\n compiled.\n\n We can't unconditionally JIT compile for tuple deforming, because\n there's a number of cases where the source slot doesn't have\n precisely the same tuple desc, and/or doesn't have the same type.\n\n2) Expand EXPLAIN output to show expressions that currently aren't\n shown. 
Performance-wise the ones most critical that aren't currently\nvisible, and that I know about, are:\n - Agg's combined transition function, we also currently don't display\n in any understandable way how many passes over the input we do (for\n grouping sets), nor how much memory is needed.\n - Agg's hash comparator (separate regression referenced above)\n - Hash/HashJoin's hashkeys/hjclauses\n\n\nFor 1) I think we need to change show_expression()/show_qual() etc to also\npass down the corresponding ExprState if available (not available in\nplenty of cases, most of which are not particularly important). That's\nfairly mechanical.\n\nThen we need to add information about JIT to individual expressions. In\nthe attached WIP patchset I've made that dependent on the new\n\"jit_details\" EXPLAIN option. When specified, new per-expression\ninformation is shown:\n- JIT-Expr: whether the expression was JIT compiled (might e.g. not be\n the case because no parent was provided)\n- JIT-Deform-{Scan,Outer,Inner}: whether necessary, and whether JIT accelerated.\n\nI don't like these names much, but ...\n\nFor the deform cases I chose to display\na) the function name if JIT compiled\nb) \"false\" if the expression is JIT compiled, deforming is\n necessary, but deforming is not JIT compiled (e.g. 
because the slot type\n wasn't fixed)\nc) \"null\" if not necessary, with that being omitted in text mode.\n\nSo e.g in json format this looks like:\n\n\"Filter\": {\n \"Expr\": \"(lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp without time zone)\",\n \"JIT-Expr\": \"evalexpr_0_2\",\n \"JIT-Deform-Scan\": \"deform_0_3\",\n \"JIT-Deform-Outer\": null,\n \"JIT-Deform-Inner\": null\n}\nand in text mode:\n\nFilter: (lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp without time zone); JIT-Expr: evalexpr_0_2, JIT-Deform-Scan: deform_0_3\n\nFor now I chose to make Filter a group when both, not in text mode and\njit_details on - otherwise it's unclear what the JIT fields would apply\nto. But that's pretty crappy, because it means that the 'shape' of the\noutput depends on the jit_details option. I think if we were starting\nfrom scratch it'd make sense to alway have the Expression as it's own\nsub-node, so interpreting code doesn't have to know all the places an\nexpression can be referenced from. But it's probably not too attractive\nto change that today?\n\nSomewhat independently the series also contains a patch that renames\nverbose mode's \"Output\" to project if the node projects. I find it\npretty hard to interpret whether a node projects otherwise, and it's\nconfusing when jit_details shows details only for some node's Output,\nbut not for others. 
But the compat break due to that change is not small\n- perhaps we could instead mark that in another way?\n\n\nFor 2) I've only started to improve the situation, but it's a pretty\nnumber of pretty crucial pieces.\n\nI first focussed adding information for Agg nodes, as a) those are\ntypically performance sensitive in cases where JIT is beneficial b) the\ncurrent instrumentation is really insufficient, especially in cases\nwhere multiple grouping sets are computed at the same time - I think\nit's effectilvey not interpretable.\n\nIn verbose mode explain now shows per-phase output about the transition\ncomputation. E.g. for a grouping set query that can't be computed in one\npass, it now displays something like\n\nMixedAggregate (cost=6083420.07..14022888.98 rows=10011685 width=64)\n Project: avg((l_linenumber)::bigint), count((l_partkey)::bigint), sum(l_quantity), l_linenumber, l_partkey, l_quantity\n Filter: (sum(lineitem.l_quantity) IS NOT NULL)\n Phase 2 using strategy \"Sort\":\n Sort Key: lineitem.l_partkey, lineitem.l_quantity\n Transition Function: 2 * int8_avg_accum(TRANS, (l_linenumber)::bigint), 2 * int8inc_any(TRANS, (l_partkey)::bigint), 2 * float8pl(TRANS, l_quantity)\n Sorted Group: lineitem.l_partkey, lineitem.l_quantity\n Sorted Group: lineitem.l_partkey\n Phase 1 using strategy \"Sorted Input & All & Hash\":\n Transition Function: 6 * int8_avg_accum(TRANS, (l_linenumber)::bigint), 6 * int8inc_any(TRANS, (l_partkey)::bigint), 6 * float8pl(TRANS, l_quantity)\n Sorted Input Group: lineitem.l_linenumber, lineitem.l_partkey, lineitem.l_quantity\n Sorted Input Group: lineitem.l_linenumber, lineitem.l_partkey\n Sorted Input Group: lineitem.l_linenumber\n All Group\n Hash Group: lineitem.l_quantity\n Hash Group: lineitem.l_quantity, lineitem.l_linenumber\n -> Sort (cost=6083420.07..6158418.50 rows=29999372 width=16)\n ...\n\nThe N * indicates how many of the same transition functions are computed\nduring that phase.\n\nI'm not sure that 'TRANS' is the 
best placeholder for the transition\nvalue here. Maybe $TRANS would be clearer?\n\nFor a parallel aggregate the upper level looks like:\n\nFinalize HashAggregate (cost=610681.93..610682.02 rows=9 width=16)\n Project: l_tax, sum(l_quantity)\n Phase 0 using strategy \"Hash\":\n Transition Function: float8pl(TRANS, (PARTIAL sum(l_quantity)))\n Hash Group: lineitem.l_tax\n -> Gather (cost=610677.11..610681.70 rows=45 width=16)\n Output: l_tax, (PARTIAL sum(l_quantity))\n Workers Planned: 5\n -> Partial HashAggregate (cost=609677.11..609677.20 rows=9 width=16)\n Project: l_tax, PARTIAL sum(l_quantity)\n\nI've not done that yet, but I think it's way past time that we also add\nmemory usage information to Aggregate nodes (both for the hashtable(s),\nand for internal sorts if those are performed for grouping sets). Which\nwould also be very hard in the \"current\" format, as there's no\nrepresentation of passes.\n\nWith jit_details enabled, we then can show information about the\naggregation function, and grouping functions:\n Phase 0 using strategy \"Hash\":\n Transition Function: float8pl(TRANS, (PARTIAL sum(l_quantity))); JIT-Expr: evalexpr_0_11, JIT-Deform-Outer: false\n Hash Group: lineitem.l_tax; JIT-Expr: evalexpr_0_8, JIT-Deform-Outer: deform_0_10, JIT-Deform-Inner: deform_0_9\n\n\nCurrently the \"new\" format is used when either grouping sets are in use\n(as the previous explain output was not particularly useful, and\ninformation about the passes is important), or if VERBOSE or JIT_DETAILS\nare specified.\n\n\nFor HashJoin/Hash I've added 'Outer Hash Key' and 'Hash Key' for each\nkey, but only in verbose mode. That's somewhat important because for\nHashJoins those currently are often the performance critical bit,\nbecause they'll commonly be the expressions that deform the slots from\nbelow. That display is somewhat redundant with HashJoins \"Hash Cond\",\nbut they're evaluated separately. Under verbose that seems OK to me.\n\nWith jit_details enabled, this e.g. 
looks like this:\n\n Hash Join (cost=271409.60..2326739.51 rows=30000584 width=250)\n Project: lineitem.l_orderkey, lineitem.l_partkey, lineitem.l_suppkey, lineitem.l_linenumber, lineitem.l_quantity, lineitem.l_extendedprice, lineitem.l_discount, lineitem.l_tax,\n Inner Unique: true\n Hash Cond: ((lineitem.l_partkey = partsupp.ps_partkey) AND (lineitem.l_suppkey = partsupp.ps_suppkey)); JIT-Expr: evalexpr_0_7, JIT-Deform-Outer: deform_0_9, JIT-Deform-Inner:\n Outer Hash Key: lineitem.l_partkey; JIT-Expr: evalexpr_0_10, JIT-Deform-Outer: deform_0_11\n Outer Hash Key: lineitem.l_suppkey; JIT-Expr: evalexpr_0_12, JIT-Deform-Outer: deform_0_13\n -> Seq Scan on public.lineitem (cost=0.00..819684.84 rows=30000584 width=106)\n Output: lineitem.l_orderkey, lineitem.l_partkey, lineitem.l_suppkey, lineitem.l_linenumber, lineitem.l_quantity, lineitem.l_extendedprice, lineitem.l_discount, lineitem.l\n -> Hash (cost=129384.24..129384.24 rows=3999824 width=144)\n Output: partsupp.ps_partkey, partsupp.ps_suppkey, partsupp.ps_availqty, partsupp.ps_supplycost, partsupp.ps_comment\n Hash Key: partsupp.ps_partkey; JIT-Expr: evalexpr_0_0, JIT-Deform-Outer: deform_0_1\n Hash Key: partsupp.ps_suppkey; JIT-Expr: evalexpr_0_2, JIT-Deform-Outer: deform_0_3\n -> Seq Scan on public.partsupp (cost=0.00..129384.24 rows=3999824 width=144)\n Output: partsupp.ps_partkey, partsupp.ps_suppkey, partsupp.ps_availqty, partsupp.ps_supplycost, partsupp.ps_comment\n JIT:\n Functions: 14 (6 for expression evaluation, 8 for tuple deforming)\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n\nthis also highlights the sad fact that we currently use a separate\nExprState to compute each of the hash keys, and then \"manually\" invoke\nthe hash function itself. 
That's bad both for interpreted execution, as\nwe repeatedly pay executor startup overhead and don't even hit the\nfastpath, as well as for JITed execution, because we have more code to\noptimize (some of it pretty redundant, in particular the deforming). In\nboth cases we suffer from the problem that we deform the tuple\nincrementally.\n\n\nA later patch in the series then uses the new explain output to add some\ntests for JIT, and then fixes two bugs, showing that the test output\nchanges.\n\nAdditionally I've also included a small improvement to the expression\nevaluation logic, which also changes output in the JIT test, as it\nshould.\n\nComments?\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 27 Sep 2019 00:20:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "Hi,\n\nOn 2019-09-27 00:20:53 -0700, Andres Freund wrote:\n> Unfortunately I found a performance regression for JITed query\n> compilation introduced in 12, compared to 11. Fixed in one of the\n> attached patches (v1-0009-Fix-determination-when-tuple-deforming-can-be-JIT.patch\n> - which needs a better commit message).\n> \n> The first question is when to push that fix. I'm inclined to just do so\n> now - as we still do JITed tuple deforming in most cases, as well as\n> doing so in 11 in the places this patch fixes, the risk of that seems\n> low. But I can also see an arguments for waiting after 12.0.\n\nSince nobody opined, I now have pushed that, and the other fix mentioned\nlater in that email.\n\nI'd appreciate comments on the rest of the email, it's clear that we\nneed to improve the test infrastructure here. 
And also the explain\noutput for grouping sets...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 29 Sep 2019 16:30:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": ">But that's pretty crappy, because it means that the 'shape' of the output depends on the jit_details option.\n\nYeah, that would be hard to work with. What about adding it as a sibling group?\n\n\"Filter\": \"(lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp\nwithout time zone)\",\n\"Filter JIT\": {\n \"Expr\": \"evalexpr_0_2\",\n \"Deform Scan\": \"deform_0_3\",\n \"Deform Outer\": null,\n \"Deform Inner\": null\n}\n\nAlso not that pretty, but at least it's easier to work with (I also\nchanged the dashes to spaces since that's what the rest of EXPLAIN is\ndoing as a matter of style).\n\n>But the compat break due to that change is not small- perhaps we could instead mark that in another way?\n\nWe could add a \"Projects\" boolean key instead? Of course that's more\nawkward in text mode. Maybe compat break is less of an issue in text\nmode and we can treat this differently?\n\n>I'm not sure that 'TRANS' is the best placeholder for the transition value here. Maybe $TRANS would be clearer?\n\n+1, I think the `$` makes it clearer that this is not a literal expression.\n\n>For HashJoin/Hash I've added 'Outer Hash Key' and 'Hash Key' for each key, but only in verbose mode.\n\nThat reads pretty well to me. What does the structured output look like?\n\n\n", "msg_date": "Mon, 28 Oct 2019 11:27:02 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "On Fri, Sep 27, 2019 at 3:21 AM Andres Freund <andres@anarazel.de> wrote:\n> - JIT-Expr: whether the expression was JIT compiled (might e.g. 
not be\n> the case because no parent was provided)\n> - JIT-Deform-{Scan,Outer,Inner}: wether necessary, and whether JIT accelerated.\n>\n> I don't like these names much, but ...\n>\n> For the deform cases I chose to display\n> a) the function name if JIT compiled\n> b) \"false\" if the expression is JIT compiled, deforming is\n> necessary, but deforming is not JIT compiled (e.g. because the slot type\n> wasn't fixed)\n> c) \"null\" if not necessary, with that being omitted in text mode.\n\nI mean, why not just omit in all modes if it's not necessary? I don't\nsee that making the information we produce randomly inconsistent\nbetween modes is buying us anything.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 15:05:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "Hi,\n\nOn 2019-10-28 15:05:01 -0400, Robert Haas wrote:\n> On Fri, Sep 27, 2019 at 3:21 AM Andres Freund <andres@anarazel.de> wrote:\n> > - JIT-Expr: whether the expression was JIT compiled (might e.g. not be\n> > the case because no parent was provided)\n> > - JIT-Deform-{Scan,Outer,Inner}: wether necessary, and whether JIT accelerated.\n> >\n> > I don't like these names much, but ...\n> >\n> > For the deform cases I chose to display\n> > a) the function name if JIT compiled\n> > b) \"false\" if the expression is JIT compiled, deforming is\n> > necessary, but deforming is not JIT compiled (e.g. because the slot type\n> > wasn't fixed)\n> > c) \"null\" if not necessary, with that being omitted in text mode.\n> \n> I mean, why not just omit in all modes if it's not necessary? I don't\n> see that making the information we produce randomly inconsistent\n> between modes is buying us anything.\n\nBecause that's the normal way to represent something non-existing for\nformats like json? 
There's a lot of information we show always for !text\nformat, even if not really applicable to the context (e.g. Triggers for\nselect statements). I think there's an argument to be made to deviate in\nthis case, but I don't think it's obvious.\n\nAbstract formatting reasons aside, it's actually useful to see where we\nknow we're dealing with tuples that don't need to be deformed and thus\noverhead due to that cannot be relevant. Not sure if there's sufficient\nconsumers for that, but ... We e.g. should verify that the \"none\"\ndoesn't suddenly vanish, because we broke the information that let us\ninfer that we don't need tuple deforming - and that's easier to\nunderstand if there's an explicit field, rather than reasoning from\nabsence. IMO.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 28 Oct 2019 16:21:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "Hi,\n\nOn 2019-10-28 11:27:02 -0700, Maciek Sakrejda wrote:\n> >But that's pretty crappy, because it means that the 'shape' of the output depends on the jit_details option.\n> \n> Yeah, that would be hard to work with. What about adding it as a sibling group?\n> \n> \"Filter\": \"(lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp\n> without time zone)\",\n> \"Filter JIT\": {\n> \"Expr\": \"evalexpr_0_2\",\n> \"Deform Scan\": \"deform_0_3\",\n> \"Deform Outer\": null,\n> \"Deform Inner\": null\n> }\n> \n> Also not that pretty, but at least it's easier to work with\n\nWhat I dislike about that is that it basically again is introducing\nsomething that requires either pattern matching on key names (i.e. 
a key\nof '(.*) JIT' is one that has information about JIT, and the associated\nexpression is in key $1), or knowing all the potential keys an\nexpression could be in.\n\n\n> (I also\n> changed the dashes to spaces since that's what the rest of EXPLAIN is\n> doing as a matter of style).\n\nThat makes sense.\n\n\n> >But the compat break due to that change is not small- perhaps we could instead mark that in another way?\n> \n> We could add a \"Projects\" boolean key instead? Of course that's more\n> awkward in text mode. Maybe compat break is less of an issue in text\n> mode and we can treat this differently?\n\nYea, I think projects as a key for each node makes sense. For text mode\nI guess we could just display the key on the same line when es->verbose\nis set? Still not sure if not just changing the output is the better\napproach.\n\nAnother alternative would be to just remove the 'Output' line when a\nnode doesn't project - it can't really carry meaning in those cases\nanyway?\n\n\n> >For HashJoin/Hash I've added 'Outer Hash Key' and 'Hash Key' for each key, but only in verbose mode.\n> \n> That reads pretty well to me. What does the structured output look\n> like?\n\nJust a new \"Outer Hash Key\" for the HashJoin node, and \"Hash Key\" for\nthe Hash node. 
Perhaps the latter should be 'Inner Hash Key' - while\nthat's currently a bit confusing because of Hash's subtree being the\nouter tree, it'd reduce changes when merging Hash into HashJoin [1], and\nit's clearer when looking at the HashJoin node itself.\n\nHere's an example query:\n\nEXPLAIN (VERBOSE, FORMAT JSON, COSTS OFF) SELECT pc.oid::regclass, pc.relkind, pc.relfilenode, pc_t.oid::regclass as toast_rel, pc_t.relfilenode as toast_relfilenode FROM pg_class pc LEFT OUTER JOIN pg_class pc_t ON (pc.reltoastrelid = pc_t.oid);\n[\n {\n \"Plan\": {\n \"Node Type\": \"Hash Join\",\n \"Parallel Aware\": false,\n \"Join Type\": \"Left\",\n \"Project\": [\"(pc.oid)::regclass\", \"pc.relkind\", \"pc.relfilenode\", \"(pc_t.oid)::regclass\", \"pc_t.relfilenode\"],\n \"Inner Unique\": true,\n \"Hash Cond\": \"(pc.reltoastrelid = pc_t.oid)\",\n \"Outer Hash Key\": \"pc.reltoastrelid\",\n \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Relation Name\": \"pg_class\",\n \"Schema\": \"pg_catalog\",\n \"Alias\": \"pc\",\n \"Output\": [\"pc.oid\", \"pc.relname\", \"pc.relnamespace\", \"pc.reltype\", \"pc.reloftype\", \"pc.relowner\", \"pc.relam\", \"pc.relfilenode\", \"pc.reltablespace\", \"pc.relpages\", \"pc.reltuples\", \"pc.relallvisible\", \"pc.reltoastrelid\", \"pc.relhasindex\", \"pc.relisshared\", \"pc.relpersistence\", \"pc.relkind\", \"pc.relnatts\", \"pc.relchecks\", \"pc.relhasrules\", \"pc.relhastriggers\", \"pc.relhassubclass\", \"pc.relrowsecurity\", \"pc.relforcerowsecurity\", \"pc.relispopulated\", \"pc.relreplident\", \"pc.relispartition\", \"pc.relrewrite\", \"pc.relfrozenxid\", \"pc.relminmxid\", \"pc.relacl\", \"pc.reloptions\", \"pc.relpartbound\"]\n },\n {\n \"Node Type\": \"Hash\",\n \"Parent Relationship\": \"Inner\",\n \"Parallel Aware\": false,\n \"Output\": [\"pc_t.oid\", \"pc_t.relfilenode\"],\n \"Hash Key\": \"pc_t.oid\",\n \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\n \"Parent 
Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Relation Name\": \"pg_class\",\n \"Schema\": \"pg_catalog\",\n \"Alias\": \"pc_t\",\n \"Project\": [\"pc_t.oid\", \"pc_t.relfilenode\"]\n }\n ]\n }\n ]\n }\n }\n]\n\nand in plain text:\n\nHash Left Join\n Project: (pc.oid)::regclass, pc.relkind, pc.relfilenode, (pc_t.oid)::regclass, pc_t.relfilenode\n Inner Unique: true\n Hash Cond: (pc.reltoastrelid = pc_t.oid)\n Outer Hash Key: pc.reltoastrelid\n -> Seq Scan on pg_catalog.pg_class pc\n Output: pc.oid, pc.relname, pc.relnamespace, pc.reltype, pc.reloftype, pc.relowner, pc.relam, pc.relfilenode, pc.reltablespace, pc.relpages, pc.reltuples, pc.relallvisible, pc.reltoastrelid, pc.relhasindex, pc.relisshared, pc.relpersistence, pc.relkind, pc.relnatts, pc.relchecks, pc.relhasrules, pc.relhastriggers, pc.relhassubclass, pc.relrowsecurity, pc.relforcerowsecurity, pc.relispopulated, pc.relreplident, pc.relispartition, pc.relrewrite, pc.relfrozenxid, pc.relminmxid, pc.relacl, pc.reloptions, pc.relpartbound\n -> Hash\n Output: pc_t.oid, pc_t.relfilenode\n Hash Key: pc_t.oid\n -> Seq Scan on pg_catalog.pg_class pc_t\n Project: pc_t.oid, pc_t.relfilenode\n\nwhich also serves as an example about my previous point about\npotentially just hiding the 'Output: ' bit when no projection is done:\nIt's very verbose, without adding much, while hiding that there's\nactually nothing being done at the SeqScan level.\n\nI've attached a rebased version of the patcheset. 
No changes except for\na minor conflict, and removing some already applied bugfixes.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20191028231526.wcnwag7lllkra4qt%40alap3.anarazel.de", "msg_date": "Mon, 28 Oct 2019 17:02:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "On Mon, Oct 28, 2019 at 5:02 PM Andres Freund <andres@anarazel.de> wrote:\n> What I dislike about that is that it basically again is introducing\n\n\"again\"? Am I missing some history here? I'd love to read up on this\nif there are mistakes to learn from.\n\n> something that requires either pattern matching on key names (i.e. a key\n> of '(.*) JIT' is one that has information about JIT, and the associated\n> expresssion is in key $1), or knowing all the potential keys an\n> expression could be in.\n\nThat still seems less awkward than having to handle a Filter field\nthat's either scalar or a group. Most current EXPLAIN options just add\nadditional fields to the structured plan instead of modifying it, no?\nIf that output is better enough, though, maybe we should just always\nmake Filter a group and go with the breaking change? 
If tooling\nauthors need to treat this case specially anyway, might as well evolve\nthe format.\n\n> Another alternative would be to just remove the 'Output' line when a\n> node doesn't project - it can't really carry meaning in those cases\n> anyway?\n\n¯\\_(ツ)_/¯\n\nFor what it's worth, I certainly wouldn't miss it.\n\n\n", "msg_date": "Tue, 12 Nov 2019 13:42:10 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "Hi,\n\nOn 2019-11-12 13:42:10 -0800, Maciek Sakrejda wrote:\n> On Mon, Oct 28, 2019 at 5:02 PM Andres Freund <andres@anarazel.de> wrote:\n> > What I dislike about that is that it basically again is introducing\n> \n> \"again\"? Am I missing some history here? I'd love to read up on this\n> if there are mistakes to learn from.\n\nI think I was mostly referring to mistakes we've made for the json etc\nkey names. By e.g. having expressions as \"Function Call\", \"Table\nFunction Call\", \"Filter\", \"TID Cond\", ... a tool that wants to interpret\nthe output needs awareness of all of these different names, rather than\nknowing that everything with a sub-group \"Expression\" has to be an\nexpression.\n\nI.e. 
instead of\n\n \"Plan\": {\n \"Node Type\": \"Seq Scan\",\n \"Parallel Aware\": false,\n \"Relation Name\": \"pg_class\",\n \"Schema\": \"pg_catalog\",\n \"Alias\": \"pg_class\",\n \"Startup Cost\": 0.00,\n \"Total Cost\": 17.82,\n \"Plan Rows\": 385,\n \"Plan Width\": 68,\n \"Output\": [\"relname\", \"tableoid\"],\n \"Filter\": \"(pg_class.relname <> 'foo'::name)\"\n }\n\nwe ought to have gone for\n\n \"Plan\": {\n \"Node Type\": \"Seq Scan\",\n \"Parallel Aware\": false,\n \"Relation Name\": \"pg_class\",\n \"Schema\": \"pg_catalog\",\n \"Alias\": \"pg_class\",\n \"Startup Cost\": 0.00,\n \"Total Cost\": 17.82,\n \"Plan Rows\": 385,\n \"Plan Width\": 68,\n \"Output\": [\"relname\", \"tableoid\"],\n \"Filter\": {\"Expression\" : { \"text\": (pg_class.relname <> 'foo'::name)\"}}\n }\n\nor something like that. Which'd then make it obvious how to add\ninformation about JIT to each expression.\n\n\nWhereas the proposal of the separate key name perpetuates the\nmessiness...\n\n\n> > something that requires either pattern matching on key names (i.e. a key\n> > of '(.*) JIT' is one that has information about JIT, and the associated\n> > expresssion is in key $1), or knowing all the potential keys an\n> > expression could be in.\n> \n> That still seems less awkward than having to handle a Filter field\n> that's either scalar or a group.\n\nYea, it's a sucky option :(\n\n\n> Most current EXPLAIN options just add\n> additional fields to the structured plan instead of modifying it, no?\n> If that output is better enough, though, maybe we should just always\n> make Filter a group and go with the breaking change? If tooling\n> authors need to treat this case specially anyway, might as well evolve\n> the format.\n\nYea, maybe that's the right thing to do. 
Would be nice to have some more\ninput...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Nov 2019 14:21:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "On Mon, Oct 28, 2019 at 7:21 PM Andres Freund <andres@anarazel.de> wrote:\n> Because that's the normal way to represent something non-existing for\n> formats like json? There's a lot of information we show always for !text\n> format, even if not really applicable to the context (e.g. Triggers for\n> select statements). I think there's an argument to made to deviate in\n> this case, but I don't think it's obvious.\n\nI've consistently been of the view that anyone who thinks that the\nFORMAT option should affect what information gets displayed doesn't\nunderstand the meaning of the word \"format.\" And I still feel that\nway.\n\nI also think that conditionally renaming \"Output\" to \"Project\" is a\nsuper-bad idea. The idea of a format like this is that the \"keys\" stay\nconstant and the values change. If you need to tell people more, you\nadd more keys.\n\nI also think that making the Filter field a group conditionally is a\nbad idea, for similar reasons. But making it always be a group doesn't\nnecessarily seem like a bad idea. I think, though, that you could\nhandle this in other ways, like by suffixing existing keys. e.g. 
if\nyou've got Index-Qual and Filter, just do Index-Qual-JIT and\nFilter-JIT and call it good.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 13 Nov 2019 14:29:07 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "Hi,\n\nOn 2019-11-13 14:29:07 -0500, Robert Haas wrote:\n> On Mon, Oct 28, 2019 at 7:21 PM Andres Freund <andres@anarazel.de> wrote:\n> > Because that's the normal way to represent something non-existing for\n> > formats like json? There's a lot of information we show always for !text\n> > format, even if not really applicable to the context (e.g. Triggers for\n> > select statements). I think there's an argument to made to deviate in\n> > this case, but I don't think it's obvious.\n> \n> I've consistently been of the view that anyone who thinks that the\n> FORMAT option should affect what information gets displayed doesn't\n> understand the meaning of the word \"format.\" And I still feel that\n> way.\n\nWell, it's not been that way since the format option was added, so ...\n\n\n\n> I also think that conditionally renaming \"Output\" to \"Project\" is a\n> super-bad idea. The idea of a format like this is that the \"keys\" stay\n> constant and the values change. If you need to tell people more, you\n> add more keys.\n\nYea, I don't like the compat break either. But I'm not so convinced\nthat just continuing to collect cruft because of compatibility is worth\nit - I just don't see an all that high reliance interest for explain\noutput.\n\nI think adding a new key is somewhat ok for !text, but for text that\ndoesn't seem like an easy solution?\n\nI kind of like my idea somewhere downthread, in a reply to Maciek, of\nsimply not listing \"Output\" for nodes that don't project. 
While that's\nstill a format break, it seems that tools already need to deal with\n\"Output\" not being present?\n\n\n> I also think that making the Filter field a group conditionally is a\n> bad idea, for similar reasons.\n\nOh, yea, it's utterly terrible (I called it crappy in my email :)).\n\n\n> But making it always be a group doesn't necessarily seem like a bad\n> idea. I think, though, that you could handle this in other ways, like\n> by suffixing existing keys. e.g. if you've got Index-Qual and Filter,\n> just do Index-Qual-JIT and Filter-JIT and call it good.\n\nMaciek suggested the same. But to me it seems going down that way will\nmake the format harder and harder to understand? So I think I'd rather\nbreak compat here, and go for a group.\n\nPersonally I think the group naming choice for explain makes the\n!text outputs much less useful than they could be - we basically force\nevery tool to understand all possible keys, to make sense of formatted\noutput. Instead of something like 'Filter: {\"Qual\":{\"text\" : \"...\",\n\"JIT\": ...}' where a tool only needed to understand that everything that\nhas a \"Qual\" inside is a filtering expression, everything that has a\n\"Project\" is a projecting type of expression, ... 
a tool needs to know\nabout \"Inner Cond\", \"Order By\", \"Filter\", \"Recheck Cond\", \"TID Cond\",\n\"Join Filter\", \"Merge Cond\", \"Hash Cond\", \"One-Time Filter\", ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 13 Nov 2019 12:03:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "On Wed, Nov 13, 2019 at 3:03 PM Andres Freund <andres@anarazel.de> wrote:\n> Well, it's not been that way since the format option was added, so ...\n\nIt was pretty close in the original version, but people keep trying to\nbe clever.\n\n> > I also think that conditionally renaming \"Output\" to \"Project\" is a\n> > super-bad idea. The idea of a format like this is that the \"keys\" stay\n> > constant and the values change. If you need to tell people more, you\n> > add more keys.\n>\n> Yea, I don't like the compat break either. But I'm not so convinced\n> that just continuing to collect cruft because of compatibility is worth\n> it - I just don't see an all that high reliance interest for explain\n> output.\n>\n> I think adding a new key is somewhat ok for !text, but for text that\n> doesn't seem like an easy solution?\n>\n> I kind of like my idea somewhere downthread, in a reply to Maciek, of\n> simply not listing \"Output\" for nodes that don't project. While that's\n> still a format break, it seems that tools already need to deal with\n> \"Output\" not being present?\n\nYes, I think leaving out Output for a node that doesn't Project would\nbe fine, as long as we're consistent about it.\n\n> > But making it always be a group doesn't necessarily seem like a bad\n> > idea. I think, though, that you could handle this in other ways, like\n> > by suffixing existing keys. e.g. if you've got Index-Qual and Filter,\n> > just do Index-Qual-JIT and Filter-JIT and call it good.\n>\n> Maciek suggested the same. 
But to me it seems going down that way will\n> make the format harder and harder to understand? So I think I'd rather\n> break compat here, and go for a group.\n\nPersonally, I don't care very much about backward-compatibility, or\nabout how hard it is for tools to parse. I want it to be possible, but\nif it takes a little extra effort, so be it. My main concern is having\nthe text output look good to human beings, because that is the primary\nformat and they are the primary consumers.\n\n> Personally I think the group naming choice for explain makes the the\n> !text outputs much less useful than they could be - we basically force\n> every tool to understand all possible keys, to make sense of formatted\n> output. Instead of something like 'Filter: {\"Qual\":{\"text\" : \"...\",\n> \"JIT\": ...}' where a tool only needed to understand that everything that\n> has a \"Qual\" inside is a filtering expression, everything that has a\n> \"Project\" is a projecting type of expression, ... a tool needs to know\n> about \"Inner Cond\", \"Order By\", \"Filter\", \"Recheck Cond\", \"TID Cond\",\n> \"Join Filter\", \"Merge Cond\", \"Hash Cond\", \"One-Time Filter\", ...\n\nIt's not that long of a list, and I don't know of a tool that tries to\ndo something in particular with all of those types of things anyway.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 15 Nov 2019 08:49:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "On Fri, Nov 15, 2019 at 5:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Personally, I don't care very much about backward-compatibility, or\n> about how hard it is for tools to parse. I want it to be possible, but\n> if it takes a little extra effort, so be it.\n\nI think these are two separate issues. 
I agree on\nbackward-compatibility (especially if we can embed a server version in\nstructured EXPLAIN output to make it easier for tools to track format\ndifferences), but not caring how hard it is for tools to parse? What's\nthe point of structured formats, then?\n\n> My main concern is having\n> the text output look good to human beings, because that is the primary\n> format and they are the primary consumers.\n\nStructured output is also for human beings, albeit indirectly. That\ntext is the primary format may be more of a reflection of the\ndifficulty of building and integrating EXPLAIN tools than its inherent\nsuperiority (that said, I'll concede it's a concise and elegant format\nfor what it does). What if psql supported an EXPLAINER like it does\nEDITOR?\n\nFor what it's worth, after thinking about this a bit, I'd like to see\nstructured EXPLAIN evolve into a more consistent format, even if it\nmeans breaking changes (and I do think a version specifier at the root\nof the plan would make this easier).\n\n\n", "msg_date": "Fri, 15 Nov 2019 17:04:52 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "Maciek Sakrejda <m.sakrejda@gmail.com> writes:\n> On Fri, Nov 15, 2019 at 5:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Personally, I don't care very much about backward-compatibility, or\n>> about how hard it is for tools to parse. I want it to be possible, but\n>> if it takes a little extra effort, so be it.\n\n> I think these are two separate issues. I agree on\n> backward-compatibility (especially if we can embed a server version in\n> structured EXPLAIN output to make it easier for tools to track format\n> differences), but not caring how hard it is for tools to parse? 
What's\n> the point of structured formats, then?\n\nI'd not been paying any attention to this thread, but Andres just\nreferenced it in another discussion, so I went back and read it.\nHere's my two cents:\n\n* I agree with Robert that conditionally changing \"Output\" to \"Project\" is\nan absolutely horrid idea. That will break every tool that looks at this\nstuff, and it just flies in the face of the design principle that the\noutput schema should be stable, and it'll be a long term pain-in-the-rear\nfor regression test back-patching, and it will confuse users much more than\nit will help them. The other idea of suppressing \"Output\" in cases where\nno projection is happening might be all right, but only in text format\nwhere we don't worry about schema stability. Another idea perhaps is\nto emit \"Output: all columns\" (in text formats, less sure what to do in\nstructured formats).\n\n* In the structured formats, I think it should be okay to convert\nexpression-ish fields from being raw strings to being {Expression}\nsub-nodes with the raw string as one field. Aside from making it easy\nto inject JIT info, that would also open the door to someday showing\nexpressions in some more-parse-able format than a string, since other\nrepresentations could also be added as new fields. (I have a vague\nrecollection of wanting a list of all the Vars used in an expression,\nfor example.)\n\n* Unfortunately that does nothing for the problem of how to show\nper-expression JIT info in text format. Maybe we just shouldn't.\nI do not think that the readability-vs-usefulness tradeoff is going\nto be all that good there, anyway. Certainly for testing purposes\nit's going to be more useful to examine portions of a structured output.\n\n* I'm not on board with the idea of adding a version number to the\nstructured output formats. In the first place, it's too late, since\nwe didn't leave room for one to begin with. 
In the second, an overall\nversion number just isn't very helpful for this sort of problem. If a\ntool sees a version number higher than the latest thing it knows, what's\nit supposed to do, just fail? In practice it could still extract an awful\nlot of info, so that really isn't a desirable answer. It's better if the\ndata structure is such that a tool can understand that some sub-part of\nthe data is something it can't interpret, and just ignore that part.\n(This is more or less the same design principle that PNG image format\nwas built on, FWIW.) Adding on fields to an existing node type easily\nmeets that requirement, as does inventing new sub-node types, and that's\nall that we've done so far. But I think that replacing a scalar field\nvalue with a sub-node probably works too (at least for well-written\ntools), so the expression change suggested above should be OK.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Jan 2020 12:15:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "Hi,\n\nOn 2020-01-27 12:15:53 -0500, Tom Lane wrote:\n> Maciek Sakrejda <m.sakrejda@gmail.com> writes:\n> > On Fri, Nov 15, 2019 at 5:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >> Personally, I don't care very much about backward-compatibility, or\n> >> about how hard it is for tools to parse. I want it to be possible, but\n> >> if it takes a little extra effort, so be it.\n>\n> > I think these are two separate issues. I agree on\n> > backward-compatibility (especially if we can embed a server version in\n> > structured EXPLAIN output to make it easier for tools to track format\n> > differences), but not caring how hard it is for tools to parse? 
What's\n> > the point of structured formats, then?\n>\n> I'd not been paying any attention to this thread, but Andres just\n> referenced it in another discussion, so I went back and read it.\n> Here's my two cents:\n>\n> * I agree with Robert that conditionally changing \"Output\" to \"Project\" is\n> an absolutely horrid idea.\n\nYea, I think I'm convinced on that front. I never liked the idea, and\nthe opposition has been pretty unanimous...\n\n\n> That will break every tool that looks at this stuff, and it just flies\n> in the face of the design principle that the output schema should be\n> stable, and it'll be a long term pain-in-the-rear for regression test\n> back-patching, and it will confuse users much more than it will help\n> them. The other idea of suppressing \"Output\" in cases where no\n> projection is happening might be all right, but only in text format\n> where we don't worry about schema stability. Another idea perhaps is\n> to emit \"Output: all columns\" (in text formats, less sure what to do\n> in structured formats).\n\nI think I like the \"all columns\" idea. Not what I'd do on a green field,\nbut...\n\nIf we were just dealing with the XML format, we could just add a\n\n<Projecting>True/False</Projecting>\nto the current\n<Output>\n <Item>a</Item>\n <Item>b</Item>\n ...\n</Output>\n\nand it'd make plenty sense. but for json's\n \"Output\": [\"a\", \"b\"]\nand yaml's\n Output:\n - \"a\"\n - \"b\"\nthat's not an option as far as I can tell. Not sure what to do about\nthat.\n\n\n\n> * In the structured formats, I think it should be okay to convert\n> expression-ish fields from being raw strings to being {Expression}\n> sub-nodes with the raw string as one field. Aside from making it easy\n> to inject JIT info, that would also open the door to someday showing\n> expressions in some more-parse-able format than a string, since other\n> representations could also be added as new fields. 
(I have a vague\n> recollection of wanting a list of all the Vars used in an expression,\n> for example.)\n\nCool. Being extendable seems like a good direction. That's what I\nprimarily dislike about the various work-arounds for how to associate\ninformation about JIT by a \"related\" name.\n\nThat'd e.g. open the door to have both a normalized and an original\nexpression in the explain output. Which would be quite valuable for\nsome monitoring tools.\n\n\n> * Unfortunately that does nothing for the problem of how to show\n> per-expression JIT info in text format. Maybe we just shouldn't.\n> I do not think that the readability-vs-usefulness tradeoff is going\n> to be all that good there, anyway. Certainly for testing purposes\n> it's going to be more useful to examine portions of a structured output.\n\nI think I can live with that, I don't think it's going to be a very\ncommonly used option. It's basically useful for regression tests, JIT\nimprovements, and people that want to see whether they can change their\nquery / schema to make better use of JIT - the latter category won't be\nmany, I think.\n\nSince this is going to be a default off option anyway, I don't think\nwe'd need to be as concerned with compatibility. But even leaving\ncompatibility aside, it's not that clear how to best attach information\nin the current text format, without being confusing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 27 Jan 2020 09:41:03 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "On Fri, Nov 15, 2019 at 8:05 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> On Fri, Nov 15, 2019 at 5:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Personally, I don't care very much about backward-compatibility, or\n> > about how hard it is for tools to parse. 
I want it to be possible, but\n> > if it takes a little extra effort, so be it.\n>\n> I think these are two separate issues. I agree on\n> backward-compatibility (especially if we can embed a server version in\n> structured EXPLAIN output to make it easier for tools to track format\n> differences), but not caring how hard it is for tools to parse? What's\n> the point of structured formats, then?\n\nTo make the data easy to parse. :-)\n\nI mean, it's clear that, on the one hand, having a format like JSON\nthat, as has recently been pointed out elsewhere, is parsable by a\nwide variety of tools, is advantageous. However, I don't think it\nreally matters whether the somebody's got to look at a tag called\nFlump and match it up with the data in another tag called JIT-Flump,\nor whether there's a Flump group that has RegularStuff and JIT tags\ninside of it. There's just not much difference in the effort involved.\nBeing able to parse the JSON or XML using generic code is enough of a\nwin that the details shouldn't matter that much.\n\nI think if you were going to complain about the limitations of our\ncurrent EXPLAIN output format, it'd make a lot more sense to focus on\nthe way we output expressions. If you want to mechanically parse one\nof those expressions and figure out what it's doing - what functions\nor operators are involved, and to what they are being applied - you\nare probably out of luck altogether, and you are certainly not going\nto have an easy time of it. 
I'm not saying we have to solve that\nproblem, but I believe it's a much bigger nuisance than the sort of\nthing we are talking about here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 27 Jan 2020 14:01:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "On Mon, Jan 27, 2020 at 12:41 PM Andres Freund <andres@anarazel.de> wrote:\n> > I do not think that the readability-vs-usefulness tradeoff is going\n> > to be all that good there, anyway. Certainly for testing purposes\n> > it's going to be more useful to examine portions of a structured output.\n>\n> I think I can live with that, I don't think it's going to be a very\n> commonly used option. It's basically useful for regression tests, JIT\n> improvements, and people that want to see whether they can change their\n> query / schema to make better use of JIT - the latter category won't be\n> many, I think.\n\nI intensely dislike having information that we can't show in the text\nformat, or really, that we can't show in every format.\n\nI might be outvoted, but I stand by that position.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 27 Jan 2020 14:02:26 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n>>> I do not think that the readability-vs-usefulness tradeoff is going\n>>> to be all that good there, anyway. 
Certainly for testing purposes\n>>> it's going to be more useful to examine portions of a structured output.\n\n> I intensely dislike having information that we can't show in the text\n> format, or really, that we can't show in every format.\n\nWell, if it's relegated to a \"jit = detail\" option or some such,\nthe readability objection could be overcome. But I'm still not clear\non how you'd physically wedge it into the output, at least not in a way\nthat matches up with the proposal that non-text modes handle this stuff\nby producing sub-nodes for the existing types of expression fields.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Jan 2020 16:18:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "On Mon, Jan 27, 2020 at 11:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Nov 15, 2019 at 8:05 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> > On Fri, Nov 15, 2019 at 5:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > Personally, I don't care very much about backward-compatibility, or\n> > > about how hard it is for tools to parse. I want it to be possible, but\n> > > if it takes a little extra effort, so be it.\n> >\n> > I think these are two separate issues. I agree on\n> > backward-compatibility (especially if we can embed a server version in\n> > structured EXPLAIN output to make it easier for tools to track format\n> > differences), but not caring how hard it is for tools to parse? What's\n> > the point of structured formats, then?\n>\n> To make the data easy to parse. :-)\n>\n> I mean, it's clear that, on the one hand, having a format like JSON\n> that, as has recently been pointed out elsewhere, is parsable by a\n> wide variety of tools, is advantageous. 
However, I don't think it\n> really matters whether the somebody's got to look at a tag called\n> Flump and match it up with the data in another tag called JIT-Flump,\n> or whether there's a Flump group that has RegularStuff and JIT tags\n> inside of it. There's just not much difference in the effort involved.\n> Being able to parse the JSON or XML using generic code is enough of a\n> win that the details shouldn't matter that much.\n\nHaving a structured EXPLAIN schema that's semantically consistent is\nstill valuable. At the end of the day, it's humans who are writing the\ntools that consume that structured output. Given the sparse structured\nEXPLAIN schema documentation, as someone who currently works on\nEXPLAIN tooling, I'd prefer a trend toward consistency at the expense\nof backward compatibility. (Of course, we should avoid gratuitous\nchanges.)\n\nBut I take back the version number suggestion after reading Tom's\nresponse; that was naïve.\n\n> I think if you were going to complain about the limitations of our\n> current EXPLAIN output format, it'd make a lot more sense to focus on\n> the way we output expressions.\n\nThat would be nice to have, but for what it's worth, my main complaint\nwould be about documentation (especially around structured formats).\nThe \"Using EXPLAIN\" section covers the basics, but understanding what\nnode types exist, and what fields show up for what nodes and what they\nmean--that seems to be a big missing piece (I don't feel entitled to\nthis documentation; as a structured format consumer, I'm just pointing\nout a deficiency). Contrast that with the great wire protocol\ndocumentation. 
In some ways it's easier to work on native drivers than\non EXPLAIN tooling because the docs are thorough and well organized.\n\n\n", "msg_date": "Mon, 27 Jan 2020 13:31:06 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" }, { "msg_contents": "On Mon, Jan 27, 2020 at 4:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> >>> I do not think that the readability-vs-usefulness tradeoff is going\n> >>> to be all that good there, anyway. Certainly for testing purposes\n> >>> it's going to be more useful to examine portions of a structured output.\n>\n> > I intensely dislike having information that we can't show in the text\n> > format, or really, that we can't show in every format.\n>\n> Well, if it's relegated to a \"jit = detail\" option or some such,\n> the readability objection could be overcome. But I'm still not clear\n> on how you'd physically wedge it into the output, at least not in a way\n> that matches up with the proposal that non-text modes handle this stuff\n> by producing sub-nodes for the existing types of expression fields.\n\nWell, remember that the text format was the original format. The whole\nidea of \"groups\" was an anachronism that I imposed on the text format\nto make it possible to add other formats. It wasn't entirely natural,\nbecause the text format basically indicated nesting by indentation,\nand that wasn't going to work for XML or JSON. The text format also\nfelt free to repeat elements and assume the reader would figure it\nout; repeating elements is OK in XML in general, but in JSON it's only\nOK if the surrounding context is an array rather than an object.\nAnyway, the point is that I (necessarily) started with whatever we had\nand found a way to fit it into a structure. 
It seems like it ought to\nbe possible to go the other direction also, and figure out how to make\nthe structured data look OK as text.\n\nHere's Andres's original example:\n\n\"Filter\": {\n \"Expr\": \"(lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp\nwithout time zone)\",\n \"JIT-Expr\": \"evalexpr_0_2\",\n \"JIT-Deform-Scan\": \"deform_0_3\",\n \"JIT-Deform-Outer\": null,\n \"JIT-Deform-Inner\": null\n}\n\nRight now we show:\n\nFilter: (lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp\nwithout time zone)\n\nAndres proposed:\n\nFilter: (lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp\nwithout time zone); JIT-Expr: evalexpr_0_2, JIT-Deform-Scan:\ndeform_0_3\n\nThat's not ideal because it's all on one line, but that could be changed:\n\nFilter: (lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp\nwithout time zone)\n JIT-Expr: evalexpr_0_2\n JIT-Deform-Scan: deform_0_3\n\nI would propose either including null all the time or omitting it all\nthe time, so that we would either change the JSON output to...\n\n\"Filter\": {\n \"Expr\": \"(lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp\nwithout time zone)\",\n \"JIT-Expr\": \"evalexpr_0_2\",\n \"JIT-Deform-Scan\": \"deform_0_3\"\n}\n\nOr the text output to:\n\nFilter: (lineitem.l_shipdate <= '1998-09-18 00:00:00'::timestamp\nwithout time zone)\n JIT-Expr: evalexpr_0_2\n JIT-Deform-Scan: deform_0_3\n JIT-Deform-Outer: null\n JIT-Deform-Inner: null\n\nYou could argue that this is inconsistent because the JSON format\nshows a bunch of keys that are essentially parallel, and this text\nformat makes the Expr key essentially the primary value and the others\nsecondary. 
But since the text format is for human beings, and since\nhuman beings are likely to find the Expr key to be the primary piece\nof information, maybe that's totally fine.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 28 Jan 2020 13:07:29 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JIT performance bug/regression & JIT EXPLAIN" } ]
[ { "msg_contents": "On Thu, Sep 26, 2019 at 2:57 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:\n> Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru> writes:\n> > On Thu, Sep 26, 2019 at 2:12 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:\n> >> The proximate problem seems to be that compareItems() is insufficiently\n> >> careful to ensure that both values are non-null before passing them\n> >> off to datatype-specific code. The code accidentally fails to crash\n> >> on 64-bit machines, but it's still giving garbage answers, I think.\n>\n> > I've found compareItems() code to not apply appropriate cast to/from\n> > Datum. Fixed in 7881bb14f4. This makes test pass on my local 32-bit\n> > machine. I'll keep look on buildfarm.\n>\n> Hm. dromedary seems not to crash either with that fix, but I'm not\n> sure why not, because when I was running the previous tree by hand,\n> the stack trace showed pretty clearly that we were getting to\n> timestamp_cmp with one null and one non-null argument. So I don't\n> believe your argument that that's impossible, and even if it is,\n> I do not think it's sane for compareItems to depend on that ---\n> especially when one of its code paths *does* check for nulls.\n>\n> I do not have a very good opinion about the quality of this code\n> upon my first glance at it. Just looking at compareDatetime:\n\n\nThe patch with compareDatetime() refactoring was posted in the original\nthread in pgsql-hackers [1].\n\n\n> * The code is schizophrenic about whether it's allowed to pass a\n> null have_error pointer or not. 
It is not very sensible to have\n> some code doing\n> if (have_error && *have_error)\n> return 0;\n> when other code branches will dump core for null have_error.\n> Given that if this test actually was doing anything, what it\n> would be doing is failing to detect error conditions, I think\n> the only sensible design is to insist that have_error mustn't be\n> null, in which case these checks for null pointer should be removed,\n> because (a) they waste code and cycles and (b) they mislead the\n> reader as to what the API of compareDatetime actually is.\n>\n> * At least some of the code paths will malfunction if the caller\n> didn't initialize *have_error to false. If that is an intended API\n> requirement, it is not acceptable for the function header comment\n> not to say so. (For bonus points, it'd be nice if the header\n> comment agreed with the code as to the name of the variable.)\n> If this isn't an intended requirement, you need to fix the code,\n> probably by initializing \"*have_error = false;\" up at the top.\n>\n> * This is silly:\n>\n> if (*have_error)\n> return 0;\n>\n> *have_error = false;\n>\n> * Also, given that you have that \"if (*have_error)\" where you do,\n> the have_error tests inside the switch are useless redundancy.\n> You might as well just remove them completely and let the final\n> test handle falling out if a conversion failed. Alternatively\n> you could drop the final test, because as the code stands right\n> now, it's visibly impossible to get there with *have_error true.\n\nYes, oddities in have_error handling seem to appear during numerous\nreworks of the patch. have_error is really non-NULL now, and its\nhandling was simplified in the patch.\n\n\n> * It's a bit schizophrenic also that some of the switches\n> lack default:s (and rely on the if (!cmpfunc) below), while\n> the outer switch does have its own, completely redundant\n> default:. 
I'd get rid of that default: and instead add\n> a comment explaining that the !cmpfunc test substitutes for\n> default branches.\n\nDefault cases with elog()s were added to all the switches. Previously,\nthe default case in the outer switch was used to report invalid type1,\nand cmpfunc was used to report invalid type2.\n\n\n> * OIDs are unsigned, so if you must print them, use %u not %d.\n\nFixed.\n\n> * The errhints don't follow project message style.\n\nFixed, but I'm not sure about \"*_tz()\". Maybe it's worth passing the current\njsonb_xxx function name to compareDatetime() through JsonPathExecContext?\n\n\n> * The blank lines before \"break\"s aren't really per project\n> style either, IMO. They certainly aren't doing anything to\n> improve readability, and they help limit how much code you\n> can see at once.\n\nFixed. If I recall correctly, these lines were added by pgindent.\n\n\n> * More generally, it's completely unclear why some error conditions\n> are thrown as errors and others just result in returning *have_error.\n> In particular, it seems weird that some unsupported datatype combinations\n> cause hard errors while others do not. Maybe that's fine, but if so,\n> the function header comment is falling down on the job by not explaining\n> the reasoning.\n\nAll cast errors are caught by the jsonpath predicate. Comparison of the\nuncomparable datetime types (time[tz] to dated types) also returns Unknown.\nA hard error is thrown only if the datatype conversion requires the current\ntimezone, which is not available in the immutable family of jsonb_xxx()\nfunctions. This behavior is specific to our jsonpath implementation. 
But I'm\nreally not sure if we should throw an error or return Unknown in this case.\n\n[1]: https://www.postgresql.org/message-id/d9244568-08bb-5dcf-db25-540412e2e61f%40postgrespro.ru\n\n-- \nNikita Glukhov\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 27 Sep 2019 18:55:31 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: pgsql: Implement jsonpath .datetime() method" }, { "msg_contents": "On Fri, Sep 27, 2019 at 6:58 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> On Thu, Sep 26, 2019 at 2:57 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:\n> > * More generally, it's completely unclear why some error conditions\n> > are thrown as errors and others just result in returning *have_error.\n> > In particular, it seems weird that some unsupported datatype combinations\n> > cause hard errors while others do not. Maybe that's fine, but if so,\n> > the function header comment is falling down on the job by not explaining\n> > the reasoning.\n>\n> All cast errors are caught by jsonpath predicate. Comparison of the\n> uncomparable datetime types (time[tz] to dated types) also returns Unknown.\n> And only if datatype conversion requires current timezone, which is not\n> available in immutable family of jsonb_xxx() functions, hard error is thrown.\n> This behavior is specific only for our jsonpath implementation. But I'm\n> really not sure if we should throw an error or return Unknown in this case.\n\nI'd like to share my further thoughts about errors. I think we should\nsuppress errors defined by standard and which user can expect. So,\nuser can expect that wrong date format causes an error, division by\nzero causes an error and so on. 
And those errors are defined by\nstandard.\n\nHowever, we error is caused by limitation of our implementation, then\nsuppression doesn't look right to me.\n\nFor instance.\n\n# select jsonb_path_query('\"1000000-01-01\"', '$.datetime() >\n\"2020-01-01 12:00:00\".datetime()'::jsonpath);\n jsonb_path_query\n------------------\n null\n(1 row)\n\n# select '1000000-01-01'::date > '2020-01-01 12:00:00'::timestamp;\nERROR: date out of range for timestamp\n\nSo, jsonpath behaves like 1000000 is not greater than 2020. This\nlooks like plain false. And user can't expect that unless she is\nfamiliar with our particular issues. Now I got opinion that such\nerrors shouldn't be suppressed. We can't suppress *every* error. If\ntrying to do this, we can come to an idea to suppress OOM error and\nreturn garbage then, which is obviously ridiculous. Opinions?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sun, 29 Sep 2019 17:29:56 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implement jsonpath .datetime() method" }, { "msg_contents": "On Sun, Sep 29, 2019 at 10:30 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> So, jsonpath behaves like 1000000 is not greater than 2020. This\n> looks like plain false. And user can't expect that unless she is\n> familiar with our particular issues. Now I got opinion that such\n> errors shouldn't be suppressed. We can't suppress *every* error. If\n> trying to do this, we can come to an idea to suppress OOM error and\n> return garbage then, which is obviously ridiculous. Opinions?\n\nI don't know enough about jsonpath to have a view on specifically\nwhich errors ought to be suppressed, but I agree that it's probably\nnot all of them. In fact, I'd go so far as to say that thinking about\nit in terms of error suppression is probably not the right approach in\nthe first place. 
Rather, you want to ask what behavior you're trying\nto create.\n\nFor example, if I'm trying to write a function that takes a string as\ninput and returns JSON, where the result is formatted as a number if\npossible or a string otherwise, I might want access at the C level to\nthe guts of numeric_in, with all parsing errors returned rather than\nthrown. But it would be silly to suppress an out-of-memory condition,\nbecause that doesn't help the caller. The caller wants to know whether\nthe thing can be parsed as a number or not, and that has nothing to do\nwith whether we're out of memory, so an out-of-memory error should\nstill be thrown.\n\nIn this case here, it seems to me that you should similarly start by\ndefining the behavior you're trying to create. Unless that's clearly\ndefined, deciding which errors to suppress may be difficult.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 30 Sep 2019 15:55:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implement jsonpath .datetime() method" }, { "msg_contents": "On Mon, Sep 30, 2019 at 10:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Sun, Sep 29, 2019 at 10:30 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > So, jsonpath behaves like 1000000 is not greater than 2020. This\n> > looks like plain false. And user can't expect that unless she is\n> > familiar with our particular issues. Now I got opinion that such\n> > errors shouldn't be suppressed. We can't suppress *every* error. If\n> > trying to do this, we can come to an idea to suppress OOM error and\n> > return garbage then, which is obviously ridiculous. Opinions?\n>\n> I don't know enough about jsonpath to have a view on specifically\n> which errors ought to be suppressed, but I agree that it's probably\n> not all of them. 
In fact, I'd go so far as to say that thinking about\n> it in terms of error suppression is probably not the right approach in\n> the first place. Rather, you want to ask what behavior you're trying\n> to create.\n>\n> For example, if I'm trying to write a function that takes a string as\n> input and returns JSON, where the result is formatted as a number if\n> possible or a string otherwise, I might want access at the C level to\n> the guts of numeric_in, with all parsing errors returned rather than\n> thrown. But it would be silly to suppress an out-of-memory condition,\n> because that doesn't help the caller. The caller wants to know whether\n> the thing can be parsed as a number or not, and that has nothing to do\n> with whether we're out of memory, so an out-of-memory error should\n> still be thrown.\n>\n> In this case here, it seems to me that you should similarly start by\n> defining the behavior you're trying to create. Unless that's clearly\n> defined, deciding which errors to suppress may be difficult.\n\nMaking C functions return errors rather than throw is what we're\nimplementing in our patchsets. In big picture the behavior we're\ntrying to create is SQL Standard 2016. It defines error handling as\nfollowing.\n\n> The SQL operators JSON_VALUE, JSON_QUERY, JSON_TABLE, and JSON_EXISTS provide\n> the following mechanisms to handle these errors:\n> 1) The SQL/JSON path language traps any errors that occur during the evaluation\n> of a <JSON filter expression>. 
Depending on the precise <JSON path predicate>\n> contained in the <JSON filter expression>, the result may be Unknown, True, or\n> False, depending on the outcome of non-error tests evaluated in the <JSON path\n> predicate>.\n> 2) The SQL/JSON path language has two modes, strict and lax, which govern\n> structural errors, as follows:\n> a) In lax mode:\n> i) If an operation requires an SQL/JSON array but the operand is not an SQL\n> JSON array, then the operand is first “wrapped” in an SQL/JSON array prior\n> to performing the operation.\n> ii) If an operation requires something other than an SQL/JSON array, but\n> the operand is an SQL/JSON array, then the operand is “unwrapped” by\n> converting its elements into an SQL/JSON sequence prior to performing the\n> operation.\n> iii) After applying the preceding resolutions to structural errors, if\n> there is still a structural error, the result is an empty SQL/JSON\n> sequence.\n> b) In strict mode, if the structural error occurs within a <JSON filter\n> expression>, then the error handling of <JSON filter expression> applies\n> Otherwise, a structural error is an unhandled error.\n> 3) Non-structural errors outside of a <JSON path predicate> are always\n> unhandled errors, resulting in an exception condition returned from the path\n> engine to the SQL/JSON query operator.\n> 4) The SQL/JSON query operators provide an ON ERROR clause to specify the\n> behavior in case of an input conversion error, an unhandled structural error,\n> an unhandled non-structural error, or an output conversion error.\n\nSo, basically standard requires us to suppress any error happening in\nfilter expression. But as I wrote before suppression of errors in\ndatetime comparison may lead to surprising results. That happens in\nrare corner cases, but still. This makes uneasy choice between\nconsistent behavior and standard behavior.\n\nHowever, Nikita Glukhov gave to good idea about that. 
Instead on\nthinking about whether we should suppress or not cast errors in\ndatetime comparison, we may just eliminate those error. So, if we\nknow that casting date to timestamp overflows upper bound of finite\ntimestamp, then we also know that this date is greater than any finite\ntimestamp. So, we still able to do correct comparison. I'm going to\nimplement this and post a patch.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 1 Oct 2019 20:41:43 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implement jsonpath .datetime() method" }, { "msg_contents": "On Tue, Oct 1, 2019 at 1:41 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> So, basically standard requires us to suppress any error happening in\n> filter expression.\n\nSounds like the standard is dumb, then. :-)\n\n> But as I wrote before suppression of errors in\n> datetime comparison may lead to surprising results.\n\nYeah.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 3 Oct 2019 09:48:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implement jsonpath .datetime() method" }, { "msg_contents": "On Thu, Oct 3, 2019 at 4:48 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Oct 1, 2019 at 1:41 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > So, basically standard requires us to suppress any error happening in\n> > filter expression.\n>\n> Sounds like the standard is dumb, then. :-)\n>\n> > But as I wrote before suppression of errors in\n> > datetime comparison may lead to surprising results. 
That happens in\n> > rare corner cases, but still. This makes uneasy choice between\n> > consistent behavior and standard behavior.\n>\n> Yeah.\n\nProposed patch eliminates this dilemma in particular case. It\nprovides correct cross-type comparison of datetime values even if one\nof values overflows during cast. In order to do this, I made cast\nfunctions to report whether lower or upper boundary is overflowed. We\nknow that overflowed value is lower (or upper) than any valid value\nexcept infinity.\n\nThis patch also changes the way timestamp to timestamptz cast works.\nPreviously it did timestamp2tm() then tm2timestamp(). Instead, after\ntimestamp2tm() it calculates timezone offset and applies it to\noriginal timestamp value. I hope this is correct. If so, besides\nmaking overflow handling easier, this refactoring saves some CPU\ncycles.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 13 Oct 2019 04:52:21 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implement jsonpath .datetime() method" }, { "msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> This patch also changes the way timestamp to timestamptz cast works.\n> Previously it did timestamp2tm() then tm2timestamp(). Instead, after\n> timestamp2tm() it calculates timezone offset and applies it to\n> original timestamp value. 
I hope this is correct.\n\nI'd wonder whether this gives the same answers near DST transitions,\nwhere it's not real clear which offset applies.\n\nPlease *don't* wrap this sort of thing into an unrelated feature patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Oct 2019 22:24:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implement jsonpath .datetime() method" }, { "msg_contents": "On Sun, Oct 13, 2019 at 5:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > This patch also changes the way timestamp to timestamptz cast works.\n> > Previously it did timestamp2tm() then tm2timestamp(). Instead, after\n> > timestamp2tm() it calculates timezone offset and applies it to\n> > original timestamp value. I hope this is correct.\n>\n> I'd wonder whether this gives the same answers near DST transitions,\n> where it's not real clear which offset applies.\n\nI will try this and share the results.\n\n> Please *don't* wrap this sort of thing into an unrelated feature patch.\n\nSure, thank you for noticing.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 14 Oct 2019 05:36:39 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implement jsonpath .datetime() method" }, { "msg_contents": "On Mon, Oct 14, 2019 at 5:36 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Sun, Oct 13, 2019 at 5:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > > This patch also changes the way timestamp to timestamptz cast works.\n> > > Previously it did timestamp2tm() then tm2timestamp(). Instead, after\n> > > timestamp2tm() it calculates timezone offset and applies it to\n> > > original timestamp value. 
I hope this is correct.\n> >\n> > I'd wonder whether this gives the same answers near DST transitions,\n> > where it's not real clear which offset applies.\n>\n> I will try this and share the results.\n\nI've separated refactoring of timestamp to timestamptz cast into a\nseparate patch. Patchset is attached.\n\nI've investigates the behavior near DST transitions in Moscow\ntimezone. Last two DST transitions it had in 2010-03-28 and\n2010-10-31. It behaves the same with and without patch. The tests\nare below.\n\n# set timezone = 'Europe/Moscow';\n\n# select '2010-03-28 01:59:59'::timestamp::timestamptz;\n timestamptz\n------------------------\n 2010-03-28 01:59:59+03\n(1 row)\n\n# select '2010-03-28 02:00:00'::timestamp::timestamptz;\n timestamptz\n------------------------\n 2010-03-28 03:00:00+04\n(1 row)\n\n# select '2010-03-28 02:59:59'::timestamp::timestamptz;\n timestamptz\n------------------------\n 2010-03-28 03:59:59+04\n(1 row)\n\n# select '2010-03-28 03:00:00'::timestamp::timestamptz;\n timestamptz\n------------------------\n 2010-03-28 03:00:00+04\n(1 row)\n\n# select '2010-10-31 01:59:59'::timestamp::timestamptz;\n timestamptz\n------------------------\n 2010-10-31 01:59:59+04\n(1 row)\n\n# select '2010-10-31 02:00:00'::timestamp::timestamptz;\n timestamptz\n------------------------\n 2010-10-31 02:00:00+03\n(1 row)\n\nBTW, I've noticed how ridiculous cast behaves for values in the range\nof [2010-03-28 02:00:00, 2010-03-28 03:00:00). Now, I think that\ntimestamptz type, which explicitly stores timezone offset, has some\npoint. 
At least, it would be possible to save the same local time\nvalue during casts.\n\nI'm going to push these two patches if no objections.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sat, 19 Oct 2019 19:44:49 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgsql: Implement jsonpath .datetime() method" } ]
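The idea Alexander Korotkov settles on in the thread above — when a cast overflows the upper bound of the finite timestamp range, the input is thereby known to be greater than any finite timestamp, so a cross-type comparison can still succeed where a plain cast would raise "out of range" — can be sketched in miniature. The Python model below is only an illustration with invented names: Python's own `datetime` year range (1–9999) stands in for the timestamp bounds, and none of this is the C code from the posted patch.

```python
from datetime import datetime

# Placeholder finite range standing in for the timestamp type's bounds
# (PostgreSQL's real range differs; these are for illustration only).
TS_MIN_YEAR, TS_MAX_YEAR = 1, 9999

def cast_date(year, month, day):
    """Hypothetical cast that reports boundary overflow instead of failing.

    Returns (value, overflow): overflow is -1 if the input falls below the
    lower bound, +1 if it exceeds the upper bound, 0 if the cast succeeded.
    """
    if year < TS_MIN_YEAR:
        return None, -1
    if year > TS_MAX_YEAR:
        return None, +1
    return datetime(year, month, day), 0

def compare_casted(a, b):
    """Three-way compare of two cast results (-1/0/+1, like a C comparator).

    An upper-bound overflow compares greater than any finite value and a
    lower-bound overflow compares smaller, so the comparison still gives a
    correct answer even though the cast itself could not produce a value.
    """
    (va, oa), (vb, ob) = a, b
    if oa != ob:
        return -1 if oa < ob else 1
    if oa != 0:
        raise ValueError("both values overflowed the same bound")
    return (va > vb) - (va < vb)

# The thread's example: '1000000-01-01' compared against 2020-01-01.
# A plain cast of year 1000000 would fail, but overflow tracking still
# lets us conclude it is the greater value.
result = compare_casted(cast_date(1000000, 1, 1), cast_date(2020, 1, 1))
```

Under this model `result` is 1, i.e. the year-1000000 date correctly compares greater than 2020-01-01 — the opposite of the surprising `null`/"not greater" answer the suppressed-error behavior produced earlier in the thread.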
[ { "msg_contents": "Hackers,\n\nIn versions < PG12 recovery_target_action has a behavior that appears to\nbe a bug, or is at least undocumented. If hot_standby = off and\nrecovery_target_action is not specified then the cluster will promote\nwhen the target is found rather than shutting down as the documentation\nseems to indicate. If recovery_target_action is explicitly set to pause\nthen the cluster will shutdown as expected.\n\nIn PG12 the shutdown occurs even when recovery_target_action is not\nexplicitly set. This seems like good behavior and it matches the\ndocumentation as I read it.\n\nThe question for the old versions: is this something that should be\nfixed in the code or in the documentation?\n\nMy vote is to make this explicit in the documentation, since changing\nthe recovery behavior in old versions could lead to nasty surprises.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 27 Sep 2019 13:52:38 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Document recovery_target_action behavior?" }, { "msg_contents": "On Sat, Sep 28, 2019 at 2:52 AM David Steele <david@pgmasters.net> wrote:\n>\n> Hackers,\n>\n> In versions < PG12 recovery_target_action has a behavior that appears to\n> be a bug, or is at least undocumented. If hot_standby = off and\n> recovery_target_action is not specified then the cluster will promote\n> when the target is found rather than shutting down as the documentation\n> seems to indicate. If recovery_target_action is explicitly set to pause\n> then the cluster will shutdown as expected.\n\nGood catch!\n\n> In PG12 the shutdown occurs even when recovery_target_action is not\n> explicitly set. 
This seems like good behavior and it matches the\n> documentation as I read it.\n\nAgreed.\n\n> The question for the old versions: is this something that should be\n> fixed in the code or in the documentation?\n>\n> My vote is to make this explicit in the documentation, since changing\n> the recovery behavior in old versions could lead to nasty surprises.\n\n+1 to update the documentation.\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Sun, 29 Sep 2019 00:14:44 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Document recovery_target_action behavior?" }, { "msg_contents": "On 9/28/19 11:14 AM, Fujii Masao wrote:\n> On Sat, Sep 28, 2019 at 2:52 AM David Steele <david@pgmasters.net> wrote:\n> \n>> The question for the old versions: is this something that should be\n>> fixed in the code or in the documentation?\n>>\n>> My vote is to make this explicit in the documentation, since changing\n>> the recovery behavior in old versions could lead to nasty surprises.\n> \n> +1 to update the documentation.\n\nOK, I'll put that on my list for after GA. This has been the behavior\nsince 9.1 so it hardly seems like an emergency.\n\nThe behavior change in 12 may be a surprise for users, though, perhaps\nwe should add something to the Streaming Replication and Recovery\nchanges section in the release notes?\n\nLooping in Jonathan to see if he thinks that's a good idea.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Sat, 28 Sep 2019 12:00:21 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Document recovery_target_action behavior?" 
}, { "msg_contents": "On 9/28/19 12:00 PM, David Steele wrote:\n> On 9/28/19 11:14 AM, Fujii Masao wrote:\n>> On Sat, Sep 28, 2019 at 2:52 AM David Steele <david@pgmasters.net> wrote:\n>>\n>>> The question for the old versions: is this something that should be\n>>> fixed in the code or in the documentation?\n>>>\n>>> My vote is to make this explicit in the documentation, since changing\n>>> the recovery behavior in old versions could lead to nasty surprises.\n>>\n>> +1 to update the documentation.\n\nFYI, documentation to compare, PG11:\n\nhttps://www.postgresql.org/docs/11/recovery-target-settings.html#RECOVERY-TARGET-ACTION\n\nPG12:\n\nhttps://www.postgresql.org/docs/12/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET\n\nAfter reading through, yes, I agree that +1 we should modify the\ndocumentation.\n\nAnd +1 for not modifying the behavior in the supported PG < 12 versions,\nthat could certainly catch people by surprise.\n\n> \n> OK, I'll put that on my list for after GA. This has been the behavior\n> since 9.1 so it hardly seems like an emergency.\n> \n> The behavior change in 12 may be a surprise for users, though, perhaps\n> we should add something to the Streaming Replication and Recovery\n> changes section in the release notes?\n> \n> Looping in Jonathan to see if he thinks that's a good idea.\n\nI would suggest we add a bullet to the \"E.1.2 Migration to Version\n12\"[1] section as one could see this behavior change as being\n\"incompatible\" with older versions. Moving aside the \"recovery.conf\"\nfile change, if you did not specify your \"recovery_target_action\" but\nexpect your instance to be available (albeit paused), you may be in for\na surprise, especially if you have things automated.\n\nI don't know if I would put it in the \"E.1.3.2\" section though, but I\ncould be convinced either way.\n\nDo you have some suggested wording? 
I could attempt to cobble together a\nquick patch.\n\nThanks,\n\nJonathan\n\n[1] https://www.postgresql.org/docs/12/release-12.html", "msg_date": "Sat, 28 Sep 2019 13:03:32 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Document recovery_target_action behavior?" } ]
[ { "msg_contents": "Over in the incremental sort patch discussion we found [1] a case\nwhere a higher cost plan ends up being chosen because a low startup\ncost partial path is ignored in favor of a lower total cost partial\npath and a limit is a applied on top of that which would normal favor\nthe lower startup cost plan.\n\n45be99f8cd5d606086e0a458c9c72910ba8a613d originally added\n`add_partial_path` with the comment:\n\n> Neither do we need to consider startup costs:\n> parallelism is only used for plans that will be run to completion.\n> Therefore, this routine is much simpler than add_path: it needs to\n> consider only pathkeys and total cost.\n\nI'm not entirely sure if that is still true or not--I can't easily\ncome up with a scenario in which it's not, but I also can't come up\nwith an inherent reason why such a scenario cannot exist.\n\nWe could just continue to include this change as part of the\nincremental sort patch itself, but it seemed worth it to me to break\nit out for some more targeted discussion, and also include Robert as\nthe initial author of add_partial_path in the hopes that maybe we\ncould retrieve some almost 4-year-old memories on why this was\ninherently true then, and maybe that would shed some light on whether\nit's still inherently true.\n\nI've attached a patch (by Tomas Vondra, also cc'd) to consider startup\ncost in add_partial_path, but should we apply the patch we'll also\nlikely need to apply the same kind of change to\nadd_partial_path_precheck.\n\nJames Coleman\n\n[1]: https://www.postgresql.org/message-id/20190720132244.3vgg2uynfpxh3me5%40development", "msg_date": "Fri, 27 Sep 2019 14:24:10 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Consider low startup cost in add_partial_path" }, { "msg_contents": "On Fri, Sep 27, 2019 at 2:24 PM James Coleman <jtc331@gmail.com> wrote:\n> Over in the incremental sort patch discussion we found [1] a case\n> where a higher cost plan ends up 
being chosen because a low startup\n> cost partial path is ignored in favor of a lower total cost partial\n> path and a limit is a applied on top of that which would normal favor\n> the lower startup cost plan.\n>\n> 45be99f8cd5d606086e0a458c9c72910ba8a613d originally added\n> `add_partial_path` with the comment:\n>\n> > Neither do we need to consider startup costs:\n> > parallelism is only used for plans that will be run to completion.\n> > Therefore, this routine is much simpler than add_path: it needs to\n> > consider only pathkeys and total cost.\n>\n> I'm not entirely sure if that is still true or not--I can't easily\n> come up with a scenario in which it's not, but I also can't come up\n> with an inherent reason why such a scenario cannot exist.\n\nI think I just didn't think carefully about the Limit case.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 28 Sep 2019 00:16:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Consider low startup cost in add_partial_path" }, { "msg_contents": "On Sat, Sep 28, 2019 at 12:16:05AM -0400, Robert Haas wrote:\n>On Fri, Sep 27, 2019 at 2:24 PM James Coleman <jtc331@gmail.com> wrote:\n>> Over in the incremental sort patch discussion we found [1] a case\n>> where a higher cost plan ends up being chosen because a low startup\n>> cost partial path is ignored in favor of a lower total cost partial\n>> path and a limit is a applied on top of that which would normal favor\n>> the lower startup cost plan.\n>>\n>> 45be99f8cd5d606086e0a458c9c72910ba8a613d originally added\n>> `add_partial_path` with the comment:\n>>\n>> > Neither do we need to consider startup costs:\n>> > parallelism is only used for plans that will be run to completion.\n>> > Therefore, this routine is much simpler than add_path: it needs to\n>> > consider only pathkeys and total cost.\n>>\n>> I'm not entirely sure if that is still 
true or not--I can't easily\n>> come up with a scenario in which it's not, but I also can't come up\n>> with an inherent reason why such a scenario cannot exist.\n>\n>I think I just didn't think carefully about the Limit case.\n>\n\nThanks! In that case I suggest we treat it as a separate patch/fix,\nindependent of the incremental sort patch. I don't want to bury it in\nthat patch series, it's already pretty large.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 29 Sep 2019 00:37:41 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Consider low startup cost in add_partial_path" }, { "msg_contents": "On Saturday, September 28, 2019, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Sat, Sep 28, 2019 at 12:16:05AM -0400, Robert Haas wrote:\n>\n>> On Fri, Sep 27, 2019 at 2:24 PM James Coleman <jtc331@gmail.com> wrote:\n>>\n>>> Over in the incremental sort patch discussion we found [1] a case\n>>> where a higher cost plan ends up being chosen because a low startup\n>>> cost partial path is ignored in favor of a lower total cost partial\n>>> path and a limit is a applied on top of that which would normal favor\n>>> the lower startup cost plan.\n>>>\n>>> 45be99f8cd5d606086e0a458c9c72910ba8a613d originally added\n>>> `add_partial_path` with the comment:\n>>>\n>>> > Neither do we need to consider startup costs:\n>>> > parallelism is only used for plans that will be run to completion.\n>>> > Therefore, this routine is much simpler than add_path: it needs to\n>>> > consider only pathkeys and total cost.\n>>>\n>>> I'm not entirely sure if that is still true or not--I can't easily\n>>> come up with a scenario in which it's not, but I also can't come up\n>>> with an inherent reason why such a scenario cannot exist.\n>>>\n>>\n>> I think I just didn't think carefully about the Limit case.\n>>\n>>\n> 
Thanks! In that case I suggest we treat it as a separate patch/fix,\n> independent of the incremental sort patch. I don't want to bury it in\n> that patch series, it's already pretty large.\n>\n\nNow the trick is to figure out a way to demonstrate it in test :)\n\nBasically we need:\nPath A: Can short circuit with LIMIT but has high total cost\nPath B: Can’t short circuit with LIMIT but has lower total cost\n\n(Both must be parallel aware of course.)\n\nMaybe ordering in B can be a sort node and A can be an index scan (perhaps\nwith very high random page cost?) and force choosing a parallel plan?\n\nI’m trying to describe this to jog my thoughts (not in front of my laptop\nright now so can’t try it out).\n\nAny other ideas?\n\nJames", "msg_date": "Sat, 28 Sep 2019 19:21:33 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Consider low startup cost in add_partial_path" }, { "msg_contents": "On Sat, Sep 28, 2019 at 7:21 PM James Coleman <jtc331@gmail.com> wrote:\n> Now the trick is to figure out a way to demonstrate it in test :)\n>\n> Basically we need:\n> Path A: Can short circuit with LIMIT but has high total cost\n> Path B: Can’t short circuit with LIMIT but has lower total cost\n>\n> (Both must be parallel aware of course.)\n\nI'm adding one requirement, or clarifying it anyway: the above paths\nmust be partial paths, and can't just apply at the top level of the\nparallel part of the plan. I.e., the lower startup cost has to matter\nat a subtree of the parallel portion of the plan.\n\n> Maybe ordering in B can be a sort node and A can be an index scan (perhaps with very high random page cost?) and force choosing a parallel plan?\n>\n> I’m trying to describe this to jog my thoughts (not in front of my laptop right now so can’t try it out).\n>\n> Any other ideas?\n\nI've been playing with this a good bit, and I'm struggling to come up\nwith a test case. 
Because the issue only manifests in a subtree of the\nparallel portion of the plan, a scan on a single relation won't do.\nMerge join seems like a good area to look at because it requires\nordering, and that ordering can be either the result of an index scan\n(short-circuit-able) or an explicit sort (not short-circuit-able). But\nI've been unable to make that result in any different plans with\neither 2 or 3 relations joined together, ordered, and a limit applied.\n\nIn all cases I've been starting with:\n\nset enable_hashjoin = off;\nset enable_nestloop = off;\nset max_parallel_workers_per_gather = 4;\nset min_parallel_index_scan_size = 0;\nset min_parallel_table_scan_size = 0;\nset parallel_setup_cost = 0;\nset parallel_tuple_cost = 0;\n\nI've also tried various combinations of random_page_cost,\ncpu_index_tuple_cost, cpu_tuple_cost.\n\nInterestingly I've noticed plans joining two relations that look like:\n\n Limit\n -> Merge Join\n Merge Cond: (t1.pk = t2.pk)\n -> Gather Merge\n Workers Planned: 4\n -> Parallel Index Scan using t_pkey on t t1\n -> Gather Merge\n Workers Planned: 4\n -> Parallel Index Scan using t_pkey on t t2\n\nWhere I would have expected a Gather Merge above a parallelized merge\njoin. Is that reasonable to expect?\n\nIf there doesn't seem to be an obvious way to reproduce the issue\ncurrently, but we know we have a reproduction example along with\nincremental sort, what is the path forward for this? 
Is it reasonable\nto try to commit it anyway knowing that it's a \"correct\" change and\nbeen demonstrated elsewhere?\n\nJames\n\n\n", "msg_date": "Wed, 2 Oct 2019 10:22:17 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Consider low startup cost in add_partial_path" }, { "msg_contents": "On Wed, Oct 2, 2019 at 10:22 AM James Coleman <jtc331@gmail.com> wrote:\n> In all cases I've been starting with:\n>\n> set enable_hashjoin = off;\n> set enable_nestloop = off;\n> set max_parallel_workers_per_gather = 4;\n> set min_parallel_index_scan_size = 0;\n> set min_parallel_table_scan_size = 0;\n> set parallel_setup_cost = 0;\n> set parallel_tuple_cost = 0;\n>\n> I've also tried various combinations of random_page_cost,\n> cpu_index_tuple_cost, cpu_tuple_cost.\n>\n> Interestingly I've noticed plans joining two relations that look like:\n>\n> Limit\n> -> Merge Join\n> Merge Cond: (t1.pk = t2.pk)\n> -> Gather Merge\n> Workers Planned: 4\n> -> Parallel Index Scan using t_pkey on t t1\n> -> Gather Merge\n> Workers Planned: 4\n> -> Parallel Index Scan using t_pkey on t t2\n>\n> Where I would have expected a Gather Merge above a parallelized merge\n> join. Is that reasonable to expect?\n\nWell, you told the planner that parallel_setup_cost = 0, so starting\nworkers is free. And you told the planner that parallel_tuple_cost =\n0, so shipping tuples from the worker to the leader is also free. 
So\nit is unclear why it should prefer a single Gather Merge over two\nGather Merges: after all, the Gather Merge is free!\n\nIf you use give those things some positive cost, even if it's smaller\nthan the default, you'll probably get a saner-looking plan choice.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 4 Oct 2019 08:36:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Consider low startup cost in add_partial_path" }, { "msg_contents": "On Fri, Oct 4, 2019 at 8:36 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Oct 2, 2019 at 10:22 AM James Coleman <jtc331@gmail.com> wrote:\n> > In all cases I've been starting with:\n> >\n> > set enable_hashjoin = off;\n> > set enable_nestloop = off;\n> > set max_parallel_workers_per_gather = 4;\n> > set min_parallel_index_scan_size = 0;\n> > set min_parallel_table_scan_size = 0;\n> > set parallel_setup_cost = 0;\n> > set parallel_tuple_cost = 0;\n> >\n> > I've also tried various combinations of random_page_cost,\n> > cpu_index_tuple_cost, cpu_tuple_cost.\n> >\n> > Interestingly I've noticed plans joining two relations that look like:\n> >\n> > Limit\n> > -> Merge Join\n> > Merge Cond: (t1.pk = t2.pk)\n> > -> Gather Merge\n> > Workers Planned: 4\n> > -> Parallel Index Scan using t_pkey on t t1\n> > -> Gather Merge\n> > Workers Planned: 4\n> > -> Parallel Index Scan using t_pkey on t t2\n> >\n> > Where I would have expected a Gather Merge above a parallelized merge\n> > join. Is that reasonable to expect?\n>\n> Well, you told the planner that parallel_setup_cost = 0, so starting\n> workers is free. And you told the planner that parallel_tuple_cost =\n> 0, so shipping tuples from the worker to the leader is also free. 
So\n> it is unclear why it should prefer a single Gather Merge over two\n> Gather Merges: after all, the Gather Merge is free!\n>\n> If you use give those things some positive cost, even if it's smaller\n> than the default, you'll probably get a saner-looking plan choice.\n\nThat makes sense.\n\nRight now I currently see trying to get this a separate test feels a\nbit like a distraction.\n\nGiven there doesn't seem to be an obvious way to reproduce the issue\ncurrently, but we know we have a reproduction example along with\nincremental sort, what is the path forward for this? Is it reasonable\nto try to commit it anyway knowing that it's a \"correct\" change and\nbeen demonstrated elsewhere?\n\nJames\n\n\n", "msg_date": "Thu, 24 Oct 2019 14:38:33 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Consider low startup cost in add_partial_path" }, { "msg_contents": "Hi,\n\nFor the record, here is the relevant part of the Incremental Sort patch\nseries, updating add_partial_path and add_partial_path_precheck to also\nconsider startup cost.\n\nThe changes in the first two patches are pretty straight-forward, plus\nthere's a proposed optimization in the precheck function to only run\ncompare_pathkeys if entirely necessary. I'm currently evaluating those\nchanges and I'll post the results to the incremental sort thread.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 5 Apr 2020 16:14:49 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Consider low startup cost in add_partial_path" } ]
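The costing intuition discussed in this thread can be sketched numerically. The Python fragment below is illustrative only (it is not PostgreSQL source; the function name and cost numbers are invented), but it mirrors the planner's linear interpolation between startup and total cost when only a fraction of a path's rows is fetched, as happens under a LIMIT — which is exactly why a total-cost-only `add_partial_path` can discard the path a LIMIT query would want:

```python
# Hypothetical sketch: cost of fetching only a fraction of a path's rows.
# PostgreSQL-style costing interpolates linearly between startup cost and
# total cost for the fraction of tuples actually fetched (as with LIMIT).
def fractional_cost(startup_cost, total_cost, fraction):
    """Cost to produce `fraction` (0.0-1.0) of the path's output."""
    return startup_cost + fraction * (total_cost - startup_cost)

# Path A: cheap to start (think index scan), expensive run to completion.
path_a = (10.0, 2000.0)
# Path B: expensive to start (think explicit sort), cheaper overall.
path_b = (900.0, 1000.0)

# Run to completion: B wins, so comparing only total cost keeps B.
assert fractional_cost(*path_b, 1.0) < fractional_cost(*path_a, 1.0)

# With a LIMIT fetching ~1% of the rows: A wins, so discarding A
# on total cost alone was a mistake.
assert fractional_cost(*path_a, 0.01) < fractional_cost(*path_b, 0.01)
```

With these made-up numbers, path A costs 29.9 versus 901.0 for path B at a 1% fetch fraction, while the ordering reverses at full output — the scenario the thread describes.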
[ { "msg_contents": "https://www.postgresql.org/docs/12/release-12.html\n\n|Allow modifications of system catalogs' options using ALTER TABLE (Peter Eisentraut)\n|Modifications of catalogs' reloptions and autovacuum settings are now supported.\n\nI wonder if that should say: \"... WHEN ALLOW_SYSTEM_TABLE_MODS IS ENABLED.\"\n\nJustin\n\n\n", "msg_date": "Fri, 27 Sep 2019 13:30:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "v12 relnotes: alter system tables" }, { "msg_contents": "On 2019-09-27 20:30, Justin Pryzby wrote:\n> https://www.postgresql.org/docs/12/release-12.html\n> \n> |Allow modifications of system catalogs' options using ALTER TABLE (Peter Eisentraut)\n> |Modifications of catalogs' reloptions and autovacuum settings are now supported.\n> \n> I wonder if that should say: \"... WHEN ALLOW_SYSTEM_TABLE_MODS IS ENABLED.\"\n\nfixed\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 29 Sep 2019 23:30:56 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12 relnotes: alter system tables" } ]
[ { "msg_contents": "Hackers,\n\nRestoring these files could cause surprising behaviors so it seems best\nto let the restore process create them when needed.\n\nPatch is attached.\n\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Fri, 27 Sep 2019 14:52:54 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Skip recovery/standby signal files in pg_basebackup" }, { "msg_contents": "On Sat, Sep 28, 2019 at 3:53 AM David Steele <david@pgmasters.net> wrote:\n>\n> Hackers,\n>\n> Restoring these files could cause surprising behaviors so it seems best\n> to let the restore process create them when needed.\n\nIt's not a normal situation where a running postgres has either\nrecovery.signal or standby.signal but I'm +1 on this change for\nsafety.\n\nThe patch looks good to me.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Mon, 30 Sep 2019 16:05:58 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Skip recovery/standby signal files in pg_basebackup" }, { "msg_contents": "On Fri, Sep 27, 2019 at 02:52:54PM -0400, David Steele wrote:\n> Restoring these files could cause surprising behaviors so it seems best\n> to let the restore process create them when needed.\n> \n> Patch is attached.\n\nWhen taking a base backup from a standby, we have always copied\nrecovery.conf if present, which would have triggered recovery (and\na standby if standby_mode was enabled). Hence always including\nRECOVERY_SIGNAL_FILE would be consistent with the past behavior.\n\nIncluding STANDBY_SIGNAL_FILE would be consistent with checking if\nstandby_mode was set or not in recovery.conf. 
We have replaced\nstandby_mode by the standby signal file, so including it if present\nis consistent with the past as well, no?\n--\nMichael", "msg_date": "Mon, 30 Sep 2019 16:21:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Skip recovery/standby signal files in pg_basebackup" }, { "msg_contents": "Hi David,\n\nOn Sat, Sep 28, 2019 at 12:23 AM David Steele <david@pgmasters.net> wrote:\n>\n> Hackers,\n>\n> Restoring these files could cause surprising behaviors so it seems best\n> to let the restore process create them when needed.\n>\n\nCould you please let us know what is the surprising behaviour you are\ntalking about here when including recovery/standby signal files in\npg_basebackup output.\n\nIf including recovery.conf in pg_basebackup output earlier wasn't a\nproblem then why including recovery/standby.signal should be a\nproblem.\n\nYour patch is just trying to skip standby.signal or recovery.signal\nfiles when the base backup is either taken on standby server or it is\ntaken on the server where the PITR is still going on or may be paused.\n\nWhat would be the behaviour with your patch when *-R* option is used\nwith pg_basebackup to take backup from standby server ? Won't it\ncreate a standby.signal file.\n\n> Patch is attached.\n>\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Sep 2019 14:19:58 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Skip recovery/standby signal files in pg_basebackup" } ]
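The behavior David's patch proposes — leaving `standby.signal` and `recovery.signal` out of the base backup so the restore process creates them when needed — amounts to a simple exclusion filter over the files being copied. The following Python fragment is a minimal sketch of that idea only (pg_basebackup itself is C, and the function name here is made up):

```python
# Illustrative sketch, not pg_basebackup's actual implementation:
# skip recovery/standby signal files when assembling a base backup,
# leaving it to the restore process to create them if needed.
EXCLUDED_FILES = {"standby.signal", "recovery.signal"}

def files_to_copy(data_dir_listing):
    """Return the data-directory listing minus excluded signal files."""
    return [f for f in data_dir_listing if f not in EXCLUDED_FILES]

listing = ["PG_VERSION", "standby.signal", "postgresql.conf"]
assert files_to_copy(listing) == ["PG_VERSION", "postgresql.conf"]
```

Note that, as the thread discusses, a `-R` option would still create `standby.signal` on the restored copy; the filter only keeps the source server's signal files from being carried over.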
[ { "msg_contents": "The current docs for max_parallel_workers start out:\n\n\"Sets the maximum number of workers that the system can support for\nparallel operations...\"\n\nIn my interpretation, \"the system\" means the entire cluster, but the\nmax_parallel_workers setting is PGC_USERSET. That's a bit confusing,\nbecause two different backends can have different settings for \"the\nmaximum number ... the system can support\".\n\nmax_parallel_workers is compared against the total number of parallel\nworkers in the system, which appears to be why the docs are worded that\nway. But it's still confusing to me.\n\nIf the purpose is to make sure parallel queries don't take up all of\nthe worker processes, perhaps we should rename the setting\nreserved_worker_processes, and make it PGC_SUPERUSER.\n\nIf the purpose is to control execution within a backend, perhaps we\nshould just compare it to the count of parallel processes that the\nbackend is already using.\n\nIf the purpose is just to be a more flexible version of\nmax_worker_processes, maybe we should change it to PGC_SIGHUP?\n\nIf it has multiple purposes, perhaps we should have multiple GUCs?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 27 Sep 2019 17:07:39 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "max_parallel_workers question" }, { "msg_contents": "On Fri, Sep 27, 2019 at 8:07 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> The current docs for max_parallel_workers start out:\n>\n> \"Sets the maximum number of workers that the system can support for\n> parallel operations...\"\n>\n> In my interpretation, \"the system\" means the entire cluster, but the\n> max_parallel_workers setting is PGC_USERSET. That's a bit confusing,\n> because two different backends can have different settings for \"the\n> maximum number ... 
the system can support\".\n\nOops.\n\nI intended it to mean \"the entire cluster.\" Basically, how many\nworkers out of max_worker_processes are you willing to use for\nparallel query, as opposed to other things. I agree that PGC_USERSET\ndoesn't make any sense.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 28 Sep 2019 00:10:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: max_parallel_workers question" }, { "msg_contents": "On Sat, 2019-09-28 at 00:10 -0400, Robert Haas wrote:\n> I intended it to mean \"the entire cluster.\" Basically, how many\n> workers out of max_worker_processes are you willing to use for\n> parallel query, as opposed to other things. I agree that PGC_USERSET\n> doesn't make any sense.\n\nIn that case, PGC_SIGHUP seems most appropriate.\n\nIt also might make more sense to rename it to reserved_worker_processes\nand invert the meaning. To me, that would be more clear that it's\ndesigned to prevent parallel query from interfering with other uses of\nworker processes.\n\nAnother option would be to make it two pools, one for parallel workers\nand one for everything else, and each one would be controlled by a\nPGC_POSTMASTER setting. But it seems like some thought went into trying\nto share the pool of workers[1], so I assume there was a good reason\nyou wanted to do that.\n\nRegards,\n\tJeff Davis\n\n[1] If I'm reading correctly, it uses both lock-free code and\nintentional overflow.\n\n\n\n\n", "msg_date": "Sat, 28 Sep 2019 10:36:45 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: max_parallel_workers question" }, { "msg_contents": "On Sat, Sep 28, 2019 at 1:36 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> In that case, PGC_SIGHUP seems most appropriate.\n\nYeah.\n\n> It also might make more sense to rename it to reserved_worker_processes\n> and invert the meaning. 
To me, that would be more clear that it's\n> designed to prevent parallel query from interfering with other uses of\n> worker processes.\n\nI don't think that would work as well. Some day we might have another\nclass of worker processes with its own independent limit, and then\nthis terminology would get confusing. It makes sense to say that you\ncan have up to 10 worker processes of which at most 4 can be used for\nparallel query and at most 3 can be used for logical replication, but\nit doesn't make nearly as much sense to say that you can have up to 10\nworker processes of which 6 can't be used for parallel query and of\nwhich 7 can't be used for logical application. That leaves, uh, how\nmany?\n\n> Another option would be to make it two pools, one for parallel workers\n> and one for everything else, and each one would be controlled by a\n> PGC_POSTMASTER setting. But it seems like some thought went into trying\n> to share the pool of workers[1], so I assume there was a good reason\n> you wanted to do that.\n\nHere again, I imagine that in the future we might have various\ndifferent worker classes that need to share the total number of\nworkers, but not necessarily via a hard partition. For example, you\ncould sensible say that there are 3 purposes for workers and 10\nworkers, and no single purpose can consume more than 4 workers. Even\nthough 4 * 3 > 10, it's a completely reasonable configuration. The\nearly bird gets the juiciest worm, and the late bird doesn't starve to\ndeath. Even a more extreme configuration where you limit each purpose\nto, say, 7 workers could be reasonable. 
Here there is a risk of\nstarvation, but you may know that in your environment it's not likely\nto last for very long.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 4 Oct 2019 08:33:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: max_parallel_workers question" }, { "msg_contents": "On Sat, Sep 28, 2019 at 12:10:53AM -0400, Robert Haas wrote:\n> On Fri, Sep 27, 2019 at 8:07 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > The current docs for max_parallel_workers start out:\n> >\n> > \"Sets the maximum number of workers that the system can support for\n> > parallel operations...\"\n> >\n> > In my interpretation, \"the system\" means the entire cluster, but the\n> > max_parallel_workers setting is PGC_USERSET. That's a bit confusing,\n> > because two different backends can have different settings for \"the\n> > maximum number ... the system can support\".\n> \n> Oops.\n> \n> I intended it to mean \"the entire cluster.\" Basically, how many\n> workers out of max_worker_processes are you willing to use for\n> parallel query, as opposed to other things. I agree that PGC_USERSET\n> doesn't make any sense.\n\nI found two places there \"custer\" was better than \"system\", so I applied\nthe attached patch to master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Wed, 8 Nov 2023 16:15:21 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: max_parallel_workers question" } ]
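Robert's sharing scheme — one pool of `max_worker_processes`, with independent per-purpose caps whose sum may exceed the pool size, granted first come, first served — can be sketched as follows. This is a hypothetical Python illustration only (the class and method names are invented; PostgreSQL's actual accounting is lock-free C code in the postmaster):

```python
# Hypothetical sketch of a shared worker pool with per-purpose caps.
# Caps may sum to more than the pool size; early requesters win.
class WorkerPool:
    def __init__(self, total, caps):
        self.free = total          # workers still unassigned overall
        self.caps = dict(caps)     # remaining budget per purpose

    def try_start(self, purpose):
        """Grant a worker if both the pool and the purpose's cap allow."""
        if self.free > 0 and self.caps.get(purpose, 0) > 0:
            self.free -= 1
            self.caps[purpose] -= 1
            return True
        return False

pool = WorkerPool(10, {"parallel_query": 4, "logical_rep": 4, "other": 4})
# Each purpose can get up to its own cap...
assert all(pool.try_start("parallel_query") for _ in range(4))
assert not pool.try_start("parallel_query")   # per-purpose cap reached
# ...but late requesters may find the shared pool exhausted first.
assert sum(pool.try_start("logical_rep") for _ in range(4)) == 4
assert sum(pool.try_start("other") for _ in range(4)) == 2  # only 2 left
```

This matches the "10 workers, 3 purposes, cap 4 each" configuration from the thread: 4 * 3 > 10 is fine, and the last purpose to ask simply gets whatever remains.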
[ { "msg_contents": "My Valgrind test script reports the following error, triggered from\nwithin contrib/bloom's regression test suite on master as of right\nnow:\n\n\"\"\"\"\"\"\n2019-09-27 20:53:50.910 PDT 9740 DEBUG: building index \"bloomidx\" on\ntable \"tst\" serially\n2019-09-27 20:53:51.049 PDT 9740 DEBUG: CommitTransaction(1) name:\nunnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid:\n20721/1/2\n2019-09-27 20:53:51.052 PDT 9740 DEBUG: StartTransaction(1) name:\nunnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0\n2019-09-27 20:53:51.054 PDT 9740 LOG: statement: ALTER INDEX bloomidx\nSET (length=80);\n==9740== VALGRINDERROR-BEGIN\n==9740== Conditional jump or move depends on uninitialised value(s)\n==9740== at 0x26D400: RangeVarGetRelidExtended (namespace.c:349)\n==9740== by 0x33D084: AlterTableLookupRelation (tablecmds.c:3445)\n==9740== by 0x4D0BC1: ProcessUtilitySlow (utility.c:1111)\n==9740== by 0x4D0802: standard_ProcessUtility (utility.c:927)\n==9740== by 0x4D083B: ProcessUtility (utility.c:360)\n==9740== by 0x4CD0A4: PortalRunUtility (pquery.c:1175)\n==9740== by 0x4CDBC0: PortalRunMulti (pquery.c:1321)\n==9740== by 0x4CE7F9: PortalRun (pquery.c:796)\n==9740== by 0x4CA3D9: exec_simple_query (postgres.c:1231)\n==9740== by 0x4CB3BD: PostgresMain (postgres.c:4256)\n==9740== by 0x4547DE: BackendRun (postmaster.c:4459)\n==9740== by 0x4547DE: BackendStartup (postmaster.c:4150)\n==9740== by 0x4547DE: ServerLoop (postmaster.c:1718)\n==9740== by 0x455E2C: PostmasterMain (postmaster.c:1391)\n==9740== by 0x3B94AC: main (main.c:210)\n==9740== Uninitialised value was created by a stack allocation\n==9740== at 0x402F202: _PG_init (blutils.c:54)\n==9740==\n==9740== VALGRINDERROR-END\n{\n <insert_a_suppression_name_here>\n Memcheck:Cond\n fun:RangeVarGetRelidExtended\n fun:AlterTableLookupRelation\n fun:ProcessUtilitySlow\n fun:standard_ProcessUtility\n fun:ProcessUtility\n fun:PortalRunUtility\n fun:PortalRunMulti\n fun:PortalRun\n 
fun:exec_simple_query\n fun:PostgresMain\n fun:BackendRun\n fun:BackendStartup\n fun:ServerLoop\n fun:PostmasterMain\n fun:main\n}\n\"\"\"\"\"\"\n\nI suspect that the recent commit 69f94108 is involved here, but I\nhaven't confirmed that explanation myself.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 Sep 2019 21:02:34 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "contrib/bloom Valgrind error" }, { "msg_contents": "On Fri, Sep 27, 2019 at 09:02:34PM -0700, Peter Geoghegan wrote:\n> My Valgrind test script reports the following error, triggered from\n> within contrib/bloom's regression test suite on master as of right\n> now:\n> \n> I suspect that the recent commit 69f94108 is involved here, but I\n> haven't confirmed that explanation myself.\n\nIt looks that the complain is about the set of custom reloptions\ninitialized by bloom in _PG_init(), and that lockmode is actually not\nset after fetching it via AlterTableGetLockLevel(), which is exactly\nwhat 736b84e was addressing.\n\nBy repeating the beginning of the regression tests of bloom, I am\nunfortunately not able to reproduce the problem. Here is what I used\nto start the server with valgrind:\nvalgrind --suppressions=$PG_SOURCE/src/tools/valgrind.supp\n--trace-children=yes --track-origins=yes --leak-check=full\n--read-var-info=yes postgres -D $PGDATA\n\nWhat kind of commands and or compilation options do you use?\n--\nMichael", "msg_date": "Sat, 28 Sep 2019 15:00:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: contrib/bloom Valgrind error" } ]
[ { "msg_contents": "postgres=# CREATE TABLE t(i int)PARTITION BY RANGE(i);\nCREATE TABLE\npostgres=# CREATE TABLE t0 PARTITION OF t DEFAULT PARTITION BY RANGE(i);\nCREATE TABLE\npostgres=# CREATE TABLE t00 PARTITION OF t0 DEFAULT; -- oh yes\nCREATE TABLE\n...\n\nNot sure how it could be useful to partition default into subpartitions of\nlists, ranges, hashes.\n\nJustin\n\n\n", "msg_date": "Sat, 28 Sep 2019 10:18:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "default partitions can be partitioned and have default partitions?" }, { "msg_contents": "On Sun, Sep 29, 2019 at 12:18 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> postgres=# CREATE TABLE t(i int)PARTITION BY RANGE(i);\n> CREATE TABLE\n> postgres=# CREATE TABLE t0 PARTITION OF t DEFAULT PARTITION BY RANGE(i);\n> CREATE TABLE\n> postgres=# CREATE TABLE t00 PARTITION OF t0 DEFAULT; -- oh yes\n> CREATE TABLE\n\nActually, you can go even further\n\nCREATE TABLE t00 PARTITION OF t0 DEFAULT PARTITION BY HASH (i);\n\n> Not sure how it could be useful to partition default into subpartitions of\n> lists, ranges, hashes.\n\nYeah, maybe the top-level partitioning should be designed such that\nthe default partition doesn't need sub-partitioning, but perhaps\nPostgres shouldn't prevent users from trying it. This was discussed\nwhen the default partition feature went in; see [1].\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoYh-hitRRUfxVxDVAjioYPrjhBCehePGRUa6qNNUnKvuw%40mail.gmail.com\n\n\n", "msg_date": "Mon, 30 Sep 2019 10:54:10 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: default partitions can be partitioned and have default\n partitions?" } ]
[ { "msg_contents": "Hi all,\n\nI noticed this strange behaviour whilst trying to write a function for Postgres 11.5 (PostgreSQL 11.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit) and reduced it to this minimal example. Using a function parameter in the window frame definition seems to be the cause of the error.\n\n create or replace function f(group_size bigint) returns setof int[] as\n $$\n select array_agg(s) over w\n from generate_series(1,10) s\n window w as (order by s rows between current row and group_size following)\n $$ language sql immutable;\n\nCalling the function without a column list succeeds:\n\n postgres=# select f(3);\n f\n ------------\n {1,2,3,4}\n {2,3,4,5}\n {3,4,5,6}\n {4,5,6,7}\n {5,6,7,8}\n {6,7,8,9}\n {7,8,9,10}\n {8,9,10}\n {9,10}\n {10}\n (10 rows)\n\nCalling the function with select * fails:\n\n postgres=# select * from f(3);\n ERROR: 42704: no value found for parameter 1\n LOCATION: ExecEvalParamExtern, execExprInterp.c:2296\n\nUsing a plpgsql function with a stringified query works, which is my current workaround:\n\n create or replace function f1(group_size bigint) returns setof int[] as\n $$\n begin\n return query execute format($q$\n select array_agg(s) over w as t\n from generate_series(1,10) s\n window w as (order by s rows between current row and %1$s following)\n $q$,group_size);\n end;\n $$ language plpgsql immutable;\n\nThis appears to be a bug to me. If confirmed that this is not some expected behaviour unknown to me I will report this.\n\nAlastair", "msg_date": "Sat, 28 Sep 2019 15:33:50 +0000", "msg_from": "Alastair McKinley <a.mckinley@analyticsengines.com>", "msg_from_op": true, "msg_subject": "Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": ">>>>> \"Alastair\" == Alastair McKinley <a.mckinley@analyticsengines.com> writes:\n\n Alastair> Hi all,\n\n Alastair> I noticed this strange behaviour whilst trying to write a\n Alastair> function for Postgres 11.5 (PostgreSQL 11.5 on\n Alastair> x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623\n Alastair> (Red Hat 4.8.5-36), 64-bit) and reduced it to this minimal\n Alastair> example. Using a function parameter in the window frame\n Alastair> definition seems to be the cause of the error.\n\n [...]\n\n Alastair> This appears to be a bug to me.\n\nYes, it's a bug, related to function inlining (the select f(3); is not\ninlined and therefore works, but the select * from f(3); is being\ninlined, but the original Param is somehow making it into the final plan\nrather than being substituted with its value). Looking into why.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Sat, 28 Sep 2019 16:59:55 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Alastair\" == Alastair McKinley <a.mckinley@analyticsengines.com> writes:\n> Alastair> This appears to be a bug to me.\n\n> Yes, it's a bug, related to function inlining (the select f(3); is not\n> inlined and therefore works, but the select * from f(3); is being\n> inlined, but the original Param is somehow making it into the final plan\n> rather than being substituted with its value). 
Looking into why.\n\nIt looks to me that the reason is that query_tree_mutator (likewise\nquery_tree_walker) fails to visit query->windowClause, which is a\nbug of the first magnitude if we allow those to contain expressions.\nNot sure how we've missed that up to now.\n\nLooking at struct Query, it seems like that's not the only questionable\nomission. We're also not descending into\n\n Node *utilityStmt; /* non-null if commandType == CMD_UTILITY */\n List *groupClause; /* a list of SortGroupClause's */\n List *groupingSets; /* a list of GroupingSet's if present */\n List *distinctClause; /* a list of SortGroupClause's */\n List *sortClause; /* a list of SortGroupClause's */\n List *rowMarks; /* a list of RowMarkClause's */\n\nNow probably this is never called on utility statements, and maybe\nthere is never a reason for anyone to examine or mutate SortGroupClauses,\nGroupingSets, or RowMarkClauses, but I'm not sure it's any business of\nthis module to assume that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Sep 2019 16:37:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> It looks to me that the reason is that query_tree_mutator\n Tom> (likewise query_tree_walker) fails to visit query->windowClause,\n\nI noticed this too. I spent some time looking at what might break if\nthat was changed (found two places so far, see attached draft patch).\n\n Tom> which is a bug of the first magnitude if we allow those to contain\n Tom> expressions. 
Not sure how we've missed that up to now.\n\nI suspect because the partition/order by expressions are actually in the\ntargetlist instead (with only SortGroupClause nodes in the\nwindowClause), so only window framing expressions are being missed.\n\n Tom> Looking at struct Query, it seems like that's not the only\n Tom> questionable omission. We're also not descending into\n\n Tom> Node *utilityStmt; /* non-null if commandType == CMD_UTILITY */\n\nI assume that utility statements are doing any necessary expression\nprocessing themselves...\n\n Tom> List *groupClause; /* a list of SortGroupClause's */\n\nThere's at least one place that walks this (and the distinct and sort\nclauses) explicitly (find_expr_references_walker) but most places just\naren't interested in SortGroupClause nodes given that the actual\nexpressions are elsewhere.\n\n Tom> List *groupingSets; /* a list of GroupingSet's if present */\n\nLikewise, GroupingSet nodes are not any form of expression, they only\nreference the groupClause entries. \n\n Tom> List *distinctClause; /* a list of SortGroupClause's */\n Tom> List *sortClause; /* a list of SortGroupClause's */\n\nSame goes as for groupClause.\n\n Tom> List *rowMarks; /* a list of RowMarkClause's */\n\n Tom> Now probably this is never called on utility statements, and maybe\n Tom> there is never a reason for anyone to examine or mutate\n Tom> SortGroupClauses, GroupingSets, or RowMarkClauses, but I'm not\n Tom> sure it's any business of this module to assume that.\n\nI think the logic that query_tree_walker is specifically there to walk\nplaces that might contain _expressions_ is reasonably valid. 
That said,\nthe fact that we do have one caller that finds it necessary to\nexplicitly walk some of the places that query_tree_walker omits suggests\nthat this decision may have been a mistake.\n\n-- \nAndrew (irc:RhodiumToad)", "msg_date": "Sat, 28 Sep 2019 22:30:59 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> Now probably this is never called on utility statements, and maybe\n> Tom> there is never a reason for anyone to examine or mutate\n> Tom> SortGroupClauses, GroupingSets, or RowMarkClauses, but I'm not\n> Tom> sure it's any business of this module to assume that.\n\n> I think the logic that query_tree_walker is specifically there to walk\n> places that might contain _expressions_ is reasonably valid. That said,\n> the fact that we do have one caller that finds it necessary to\n> explicitly walk some of the places that query_tree_walker omits suggests\n> that this decision may have been a mistake.\n\nI'm okay with assuming that these functions aren't used on utility\nstatements (but maybe we should add Assert(query->utilityStmt == NULL)?).\nI'm a bit uncomfortable with skipping the other lists. Admittedly,\nthere's probably not huge value in examining SortGroupClauses in a\nvacuum (that is, without knowing which list they appear in). The only\napplication I can think of offhand is extracting dependencies, which\nis already covered by that one caller you mention.\n\nHowever, we need to fix this in all active branches, and I definitely\nagree with minimizing the amount of change to back branches.\nThe fact that the minimal change breaks (or exposes an oversight in)\nassign_collations_walker makes it very plausible that it will also\nbreak somebody's third-party code. 
If we push the API change further\nwe increase the risk of breaking stuff. That seems OK in HEAD but\nnot in back branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Sep 2019 19:10:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> However, we need to fix this in all active branches, and I\n Tom> definitely agree with minimizing the amount of change to back\n Tom> branches. The fact that the minimal change breaks (or exposes an\n Tom> oversight in) assign_collations_walker makes it very plausible\n Tom> that it will also break somebody's third-party code. If we push\n Tom> the API change further we increase the risk of breaking stuff.\n Tom> That seems OK in HEAD but not in back branches.\n\nWe could minimize the chance of breakage in a back-patched fix by having\nquery_tree_walker/mutator iterate the windowClause list itself and\ninvoke the walker only on offset expressions; is it worth it?\n\nWalkers that follow the recommended code structure should be unaffected;\nit only shows up in the collations walker because that treats\nexpressions as the \"default\" case and tries to explicitly handle all\nnon-expression nodes.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Sun, 29 Sep 2019 05:43:28 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": ">>>>> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n\n Andrew> We could minimize the chance of breakage in a back-patched fix\n Andrew> by having query_tree_walker/mutator iterate the windowClause\n Andrew> list itself\n\nHere is a draft patch along those lines; the intent of this one is that\nno existing walker or mutator should need to change (the 
change to the\ndependency code is basically cosmetic I believe, just avoids walking\nsome things twice).\n\nAlso added some tests.\n\n-- \nAndrew (irc:RhodiumToad)", "msg_date": "Sun, 29 Sep 2019 11:46:49 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> Andrew> We could minimize the chance of breakage in a back-patched fix\n> Andrew> by having query_tree_walker/mutator iterate the windowClause\n> Andrew> list itself\n\n> Here is a draft patch along those lines; the intent of this one is that\n> no existing walker or mutator should need to change (the change to the\n> dependency code is basically cosmetic I believe, just avoids walking\n> some things twice).\n\nHmm. I think this is a reasonable direction to go in, but\nwhat about groupingSets and rowMarks?\n\nAlso, in HEAD I'd be inclined to add assertions about utilityStmt\nbeing NULL.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Sep 2019 14:15:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> Here is a draft patch along those lines; the intent of this one is\n >> that no existing walker or mutator should need to change (the change\n >> to the dependency code is basically cosmetic I believe, just avoids\n >> walking some things twice).\n\n Tom> Hmm. I think this is a reasonable direction to go in, but\n Tom> what about groupingSets and rowMarks?\n\ngroupingSets ultimately contains nothing but numbers which are\nmeaningless without reference to the matching groupClause list. 
So\nanything that cares about those is really going to have to process them\nin its Query case in the walker function in order to get at both\nclauses.\n\nSimilarly, rowMarks contains indexes into the rangetable (and no\nrecursive substructure at all), so it's likewise better processed at the\nQuery level.\n\n Tom> Also, in HEAD I'd be inclined to add assertions about utilityStmt\n Tom> being NULL.\n\nYup.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Mon, 30 Sep 2019 06:37:48 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> Hmm. I think this is a reasonable direction to go in, but\n> Tom> what about groupingSets and rowMarks?\n\n> groupingSets ultimately contains nothing but numbers which are\n> meaningless without reference to the matching groupClause list. So\n> anything that cares about those is really going to have to process them\n> in its Query case in the walker function in order to get at both\n> clauses.\n\nAh. I was thinking there were SortGroupClauses under them, but that\nwas based on an overly hasty reading of the parsenodes.h comments.\n\nNo further complaints.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2019 16:25:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": "[moving to -hackers, removing OP and -general]\n\n>>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> Also, in HEAD I'd be inclined to add assertions about utilityStmt\n Tom> being NULL.\n\nTried this. 
The assertion is hit:\n\n#3 0x0000000000bb9144 in ExceptionalCondition (conditionName=0xd3c7a9 \"query->utilityStmt == NULL\", \n errorType=0xc3da24 \"FailedAssertion\", fileName=0xd641f8 \"nodeFuncs.c\", lineNumber=2280) at assert.c:54\n#4 0x000000000081268e in query_tree_walker (query=0x80bb34220, walker=0x98d150 <rangeTableEntry_used_walker>, \n context=0x7fffffffc768, flags=0) at nodeFuncs.c:2280\n#5 0x0000000000815a29 in query_or_expression_tree_walker (node=0x80bb34220, walker=0x98d150 <rangeTableEntry_used_walker>, \n context=0x7fffffffc768, flags=0) at nodeFuncs.c:3344\n#6 0x000000000098d13d in rangeTableEntry_used (node=0x80bb34220, rt_index=1, sublevels_up=0) at rewriteManip.c:900\n#7 0x0000000000698ce6 in transformRuleStmt (stmt=0x80241bd20, \n queryString=0x80241b120 \"create rule r3 as on delete to rules_src do notify rules_src_deletion;\", actions=0x7fffffffc968, \n whereClause=0x7fffffffc960) at parse_utilcmd.c:2883\n#8 0x00000000009819c5 in DefineRule (stmt=0x80241bd20, \n queryString=0x80241b120 \"create rule r3 as on delete to rules_src do notify rules_src_deletion;\") at rewriteDefine.c:206\n\nAny suggestions where best to fix this? transformRuleStmt could be\ntaught to skip a lot of the per-Query stuff it does in the event that\nthe Query is actually a NOTIFY, or a check for NOTIFY could be added\nfurther down the stack, e.g. in rangeTableEntry_used. Any preferences?\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Wed, 02 Oct 2019 16:24:05 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> Also, in HEAD I'd be inclined to add assertions about utilityStmt\n> Tom> being NULL.\n\n> Tried this. The assertion is hit:\n> ...\n\n> Any suggestions where best to fix this? 
transformRuleStmt could be\n> taught to skip a lot of the per-Query stuff it does in the event that\n> the Query is actually a NOTIFY, or a check for NOTIFY could be added\n> further down the stack, e.g. in rangeTableEntry_used. Any preferences?\n\nHm. transformRuleStmt already does special-case utility statements to\nsome extent, so my inclination would be to make it do more of that.\nHowever, it looks like that might end up with rather spaghetti-ish\ncode, as that function is kind of messy already.\n\nOr we could abandon the notion of adding the assertion. I don't\nknow how much work it's worth.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Oct 2019 11:32:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> Hm. transformRuleStmt already does special-case utility statements\n Tom> to some extent, so my inclination would be to make it do more of\n Tom> that. However, it looks like that might end up with rather\n Tom> spaghetti-ish code, as that function is kind of messy already.\n\n Tom> Or we could abandon the notion of adding the assertion. 
I don't\n Tom> know how much work it's worth.\n\nFixing transformRuleStmt just pushes the issue along another step:\nInsertRule wants to do recordDependencyOnExpr on the rule actions,\nwhich just does find_expr_references_walker.\n\nI'm going to leave the assertion out for now and put in a comment for\nfuture reference.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Wed, 02 Oct 2019 17:20:11 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> I'm going to leave the assertion out for now and put in a comment for\n> future reference.\n\nWFM. At this point it's clear it would be a separate piece of work\nnot something to slide into the bug-fix patch, anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Oct 2019 12:31:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> I'm going to leave the assertion out for now and put in a comment\n >> for future reference.\n\n Tom> WFM. At this point it's clear it would be a separate piece of work\n Tom> not something to slide into the bug-fix patch, anyway.\n\nOK. So here's the final patch.\n\n(For the benefit of anyone in -hackers not following the original thread\nin -general, the problem here is that expressions in window framing\nclauses were not being walked or mutated by query_tree_walker /\nquery_tree_mutator. 
This has been wrong ever since 9.0, but somehow\nnobody seems to have noticed until now.)\n\n-- \nAndrew (irc:RhodiumToad)", "msg_date": "Wed, 02 Oct 2019 17:50:13 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> OK. So here's the final patch.\n\n> (For the benefit of anyone in -hackers not following the original thread\n> in -general, the problem here is that expressions in window framing\n> clauses were not being walked or mutated by query_tree_walker /\n> query_tree_mutator. This has been wrong ever since 9.0, but somehow\n> nobody seems to have noticed until now.)\n\nTwo nitpicky suggestions:\n\n* Please run it through pgindent. Otherwise v13+ are going to be randomly\ndifferent from older branches in this area, once we next pgindent HEAD.\n\n* I think you missed s/walk/mutate/ in some of the comments you copied\ninto query_tree_mutator.\n\nLooks good otherwise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Oct 2019 12:56:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> * Please run it through pgindent. Otherwise v13+ are going to be\n Tom> randomly different from older branches in this area, once we next\n Tom> pgindent HEAD.\n\ngotcha.\n\n Tom> * I think you missed s/walk/mutate/ in some of the comments you\n Tom> copied into query_tree_mutator.\n\n... where? 
The only mention of \"walk\" near query_tree_mutator is in its\nheader comment, which I didn't touch.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Wed, 02 Oct 2019 18:15:37 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> * I think you missed s/walk/mutate/ in some of the comments you\n> Tom> copied into query_tree_mutator.\n\n> ... where? The only mention of \"walk\" near query_tree_mutator is in its\n> header comment, which I didn't touch.\n\nWup, sorry, I misparsed the patch. On second read there's no issue there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Oct 2019 13:21:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible bug: SQL function parameter in window frame definition" } ]
[ { "msg_contents": "Hi,all\n\n\nIn our PostgreSQL 10.7(rhel 6.3) database, autovacuum process and many insert processes blocked in gin index's LWLock:buffer_content for long time. \n\n\nIn other words, the following gin index lwlock deadlock phenomenon has occurred again. Since the following bug in 10.7 has been fixed. So this should be a new bug.\n\n\nhttps://www.postgresql.org/message-id/flat/31a702a.14dd.166c1366ac1.Coremail.chjischj%40163.com\n\n\nWe have already obtained coredump files of autovacuum process and one of insert processes.\nUnfortunately the insert process(run by gcore) held no lwlock, it should be another process(we did not fetch core file) that hold the lwlock needed for autovacuum process.\n\n\nthe stack is as following:\n\n\n## stack of one insert process: Acquire lock 0x7f6c517dbfa4 which was held by vacuum process\n----------------------------------------------------------------------------\n(gdb) bt\n#0 0x000000369ea0da00 in sem_wait () from /lib64/libpthread.so.0\n#1 0x00000000006a7910 in PGSemaphoreLock (sema=0x7f6c4f76a7b8) at pg_sema.c:316\n#2 0x0000000000718225 in LWLockAcquire (lock=0x7f6c517dbfa4, mode=LW_SHARED) at lwlock.c:1233\n#3 0x000000000048b622 in ginTraverseLock (buffer=224225, searchMode=0 '\\000') at ginbtree.c:40\n#4 0x000000000048ca13 in ginFindLeafPage (btree=0x7fffc71c4ea0, searchMode=0 '\\000', snapshot=0x0) at ginbtree.c:97\n#5 0x00000000004894db in ginInsertItemPointers (index=<value optimized out>, rootBlkno=<value optimized out>, items=<value optimized out>, nitem=<value optimized out>, buildStats=0x0)\n at gindatapage.c:1909\n#6 0x00000000004863a7 in ginEntryInsert (ginstate=0x1c72158, attnum=1, key=20190913, category=0 '\\000', items=0x1c81508, nitem=72, buildStats=0x0) at gininsert.c:214\n#7 0x000000000049219a in ginInsertCleanup (ginstate=0x1c72158, full_clean=0 '\\000', fill_fsm=1 '\\001', forceCleanup=<value optimized out>, stats=<value optimized out>) at ginfast.c:878\n#8 0x000000000049308e in 
ginHeapTupleFastInsert (ginstate=0x1c72158, collector=<value optimized out>) at ginfast.c:443\n#9 0x0000000000486749 in gininsert (index=<value optimized out>, values=0x7fffc71c54f0, isnull=0x7fffc71c5600 \"\", ht_ctid=0x1c6d3a4, heapRel=<value optimized out>, \n checkUnique=<value optimized out>, indexInfo=0x1c61da8) at gininsert.c:522\n#10 0x00000000005f75f0 in ExecInsertIndexTuples (slot=0x1c62168, tupleid=0x1c6d3a4, estate=0x1c61768, noDupErr=0 '\\000', specConflict=0x0, arbiterIndexes=0x0) at execIndexing.c:387\n#11 0x0000000000616497 in ExecInsert (pstate=0x1c61ab8) at nodeModifyTable.c:519\n#12 ExecModifyTable (pstate=0x1c61ab8) at nodeModifyTable.c:1779\n#13 0x00000000005fb6bf in ExecProcNode (queryDesc=0x1c67760, direction=<value optimized out>, count=0, execute_once=-72 '\\270') at ../../../src/include/executor/executor.h:250\n#14 ExecutePlan (queryDesc=0x1c67760, direction=<value optimized out>, count=0, execute_once=-72 '\\270') at execMain.c:1723\n#15 standard_ExecutorRun (queryDesc=0x1c67760, direction=<value optimized out>, count=0, execute_once=-72 '\\270') at execMain.c:364\n#16 0x00007f6e226aa6f8 in pgss_ExecutorRun (queryDesc=0x1c67760, direction=ForwardScanDirection, count=0, execute_once=1 '\\001') at pg_stat_statements.c:889\n#17 0x00007f6e224a474d in explain_ExecutorRun (queryDesc=0x1c67760, direction=ForwardScanDirection, count=0, execute_once=1 '\\001') at auto_explain.c:267\n#18 0x000000000072a15b in ProcessQuery (plan=<value optimized out>, \n sourceText=0x1c21458 \"INSERT INTO bi_dm.tdm_wh_shopgds_fnsh_rt (STATIS_DATE,SITE_CD,LGORT,ZSIZE,ZVTWEG,VSBED,TOTAL_CNT,FNSH_CNT,UNFNSH_CNT,ETL_TIME,DEPT_CD,TMALL_FLG,BUSS_TP,ZCKYWLX) VALUES($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$\"..., params=0x1c21580, queryEnv=0x0, dest=<value optimized out>, completionTag=0x7fffc71c5de0 \"\") at pquery.c:161\n#19 0x000000000072a395 in PortalRunMulti (portal=0x1c57f18, isTopLevel=1 '\\001', setHoldSnapshot=0 '\\000', dest=0xc9b480, altdest=0xc9b480, 
completionTag=0x7fffc71c5de0 \"\") at pquery.c:1286\n#20 0x000000000072aa98 in PortalRun (portal=0x1c57f18, count=1, isTopLevel=1 '\\001', run_once=1 '\\001', dest=0x1c25768, altdest=0x1c25768, completionTag=0x7fffc71c5de0 \"\") at pquery.c:799\n#21 0x0000000000728c9a in exec_execute_message (argc=<value optimized out>, argv=<value optimized out>, dbname=0x1bbb800 \"lbiwhdb\", username=<value optimized out>) at postgres.c:2007\n#22 PostgresMain (argc=<value optimized out>, argv=<value optimized out>, dbname=0x1bbb800 \"lbiwhdb\", username=<value optimized out>) at postgres.c:4180\n#23 0x00000000006bb43a in BackendRun (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:4405\n---Type <return> to continue, or q <return> to quit---\n#24 BackendStartup (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:4077\n#25 ServerLoop (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1755\n#26 PostmasterMain (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1363\n#27 0x000000000063b4d0 in main (argc=3, argv=0x1b839e0) at main.c:228\n(gdb) f 2\n#2 0x0000000000718225 in LWLockAcquire (lock=0x7f6c517dbfa4, mode=LW_SHARED) at lwlock.c:1233\n1233lwlock.c: No such file or directory.\nin lwlock.c\n(gdb) p num_held_lwlocks\n$1 = 0\n(gdb) \n\n\n\n\n## stack of autovacuum:Acquire lock 0x7f6c519ba5a4 and hold 0x7f6c517dbfa4, 0x7f6c51684f64\n--------------------------------------\n(gdb) bt\n#0 0x000000369ea0da00 in sem_wait () from /lib64/libpthread.so.0\n#1 0x00000000006a7910 in PGSemaphoreLock (sema=0x7f6c4f77fdb8) at pg_sema.c:316\n#2 0x0000000000718225 in LWLockAcquire (lock=0x7f6c519ba5a4, mode=LW_EXCLUSIVE) at lwlock.c:1233\n#3 0x00000000004900fb in ginDeletePage (gvs=0x7fffc71c26d0, blkno=3709, isRoot=0 '\\000', parent=<value optimized out>, myoff=2) at ginvacuum.c:154\n#4 ginScanToDelete (gvs=0x7fffc71c26d0, blkno=3709, isRoot=0 '\\000', parent=<value optimized out>, myoff=2) at ginvacuum.c:297\n#5 
0x000000000049004c in ginScanToDelete (gvs=0x7fffc71c26d0, blkno=6418, isRoot=1 '\\001', parent=<value optimized out>, myoff=0) at ginvacuum.c:281\n#6 0x0000000000490d1f in ginVacuumPostingTree (info=0x7fffc71c4da0, stats=<value optimized out>, callback=<value optimized out>, callback_state=<value optimized out>) at ginvacuum.c:408\n#7 ginbulkdelete (info=0x7fffc71c4da0, stats=<value optimized out>, callback=<value optimized out>, callback_state=<value optimized out>) at ginvacuum.c:643\n#8 0x00000000005e9d05 in lazy_vacuum_index (indrel=0x7f6c4f396fe8, stats=0x1c25ad8, vacrelstats=0x1c25578) at vacuumlazy.c:1621\n#9 0x00000000005eaa29 in lazy_scan_heap (onerel=<value optimized out>, options=<value optimized out>, params=0x1ca7550, bstrategy=<value optimized out>) at vacuumlazy.c:1311\n#10 lazy_vacuum_rel (onerel=<value optimized out>, options=<value optimized out>, params=0x1ca7550, bstrategy=<value optimized out>) at vacuumlazy.c:258\n#11 0x00000000005e8d68 in vacuum_rel (relid=176178, relation=<value optimized out>, options=97, params=0x1ca7550) at vacuum.c:1445\n#12 0x00000000005e9187 in vacuum (options=97, relation=0x7fffc71c5640, relid=<value optimized out>, params=0x1ca7550, va_cols=0x0, bstrategy=<value optimized out>, isTopLevel=1 '\\001')\n at vacuum.c:306\n#13 0x00000000006ab262 in autovacuum_do_vac_analyze () at autovacuum.c:3135\n#14 do_autovacuum () at autovacuum.c:2490\n#15 0x00000000006aba70 in AutoVacWorkerMain (argc=<value optimized out>, argv=<value optimized out>) at autovacuum.c:1707\n#16 0x00000000006abb46 in StartAutoVacWorker () at autovacuum.c:1504\n#17 0x00000000006b972a in StartAutovacuumWorker (postgres_signal_arg=<value optimized out>) at postmaster.c:5462\n#18 sigusr1_handler (postgres_signal_arg=<value optimized out>) at postmaster.c:5159\n#19 <signal handler called>\n#20 0x000000369e2e1393 in __select_nocancel () from /lib64/libc.so.6\n#21 0x00000000006baa66 in ServerLoop (argc=<value optimized out>, argv=<value optimized out>) at 
postmaster.c:1719\n#22 PostmasterMain (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1363\n#23 0x000000000063b4d0 in main (argc=3, argv=0x1b839e0) at main.c:228\n(gdb) f 2\n#2 0x0000000000718225 in LWLockAcquire (lock=0x7f6c519ba5a4, mode=LW_EXCLUSIVE) at lwlock.c:1233\n1233lwlock.c: No such file or directory.\nin lwlock.c\n(gdb) p num_held_lwlocks\n$1 = 2\n(gdb) p InterruptHoldoffCount\n$2 = 3\n(gdb) p held_lwlocks\n$3 = {{lock = 0x7f6c517dbfa4, mode = LW_EXCLUSIVE}, {lock = 0x7f6c51684f64, mode = LW_EXCLUSIVE}, {lock = 0x7f6c4f782a00, mode = LW_SHARED}, {lock = 0x7f6c4f78df80, mode = LW_EXCLUSIVE}, {\n lock = 0x7f6c51a7a1e4, mode = LW_EXCLUSIVE}, {lock = 0x7f6c4f78df80, mode = LW_EXCLUSIVE}, {lock = 0x7f6c53a7eb64, mode = LW_EXCLUSIVE}, {lock = 0x7f6c4f78df80, mode = LW_EXCLUSIVE}, {\n lock = 0x0, mode = LW_EXCLUSIVE} <repeats 192 times>}\n(gdb) \n(gdb) f 4\n#4 ginScanToDelete (gvs=0x7fffc71c26d0, blkno=3709, isRoot=0 '\\000', parent=<value optimized out>, myoff=2) at ginvacuum.c:297\n297in ginvacuum.c\n(gdb) p *me\n$15 = {child = 0x0, parent = 0x7fffc71c26a0, blkno = 0, leftBlkno = 5649, isRoot = 0 '\\000'}\n(gdb) p *(&BufferDescriptors[(buffer - 1)].bufferdesc)\n$3 = {tag = {rnode = {spcNode = 1663, dbNode = 16397, relNode = 692113}, forkNum = MAIN_FORKNUM, blockNum = 3709}, buf_id = 202271, state = {value = 3549691907}, wait_backend_pid = 0, \n freeNext = -2, content_lock = {tranche = 54, state = {value = 1627389952}, waiters = {head = 539, tail = 1223}}}\n(gdb) p &((&BufferDescriptors[(buffer - 1)].bufferdesc)->content_lock)\n$5 = (LWLock *) 0x7f6c51684f64\n\n\n(gdb) f 5\n#5 0x000000000049004c in ginScanToDelete (gvs=0x7fffc71c26d0, blkno=6418, isRoot=1 '\\001', parent=<value optimized out>, myoff=0) at ginvacuum.c:281\n281in ginvacuum.c\n(gdb) p *(&BufferDescriptors[(buffer - 1)].bufferdesc)\n$6 = {tag = {rnode = {spcNode = 1663, dbNode = 16397, relNode = 692113}, forkNum = MAIN_FORKNUM, blockNum = 6418}, buf_id = 224224, state = 
{value = 3549692473}, wait_backend_pid = 14261, \n freeNext = -2, content_lock = {tranche = 54, state = {value = 1627389952}, waiters = {head = 527, tail = 370}}}\n(gdb) p &((&BufferDescriptors[(buffer - 1)].bufferdesc)->content_lock)\n$7 = (LWLock *) 0x7f6c517dbfa4\n\n\n\n\nAccording to above information in core file and source code, autovacuum process is trying to lock the block 5649,and be blocked.\n\n\n 6418(1. held LWLock:0x7f6c517dbfa4) root\n |\n5649(3. Acquire LWLock:0x7f6c519ba5a4) --> 3709(2.held LWLock:0x7f6c51684f64)\n\n\nDoes the locking order of autovacuum process(root->right->left) correct? While insert process lock gin buffer by order of bottom->top and left->right.\n\n\n1. vacuum(root->right->left):\n---------------------------------------------------------------------\nstatic void\nginVacuumPostingTree(GinVacuumState *gvs, BlockNumber rootBlkno)\n{\nif (ginVacuumPostingTreeLeaves(gvs, rootBlkno))\n{\n...\nLockBufferForCleanup(buffer);//1. (1*)lock the root\n...\nginScanToDelete(gvs, rootBlkno, true, &root, InvalidOffsetNumber);//2. 
(2)scan to delete for root\n}\n}\n\n\nstatic bool\nginScanToDelete(GinVacuumState *gvs, BlockNumber blkno, bool isRoot,\nDataPageDeleteStack *parent, OffsetNumber myoff)\n{\n...\nif (!isRoot)\nLockBuffer(buffer, GIN_EXCLUSIVE);//2.1.1 (4)lock the first child(left); \n //2.2.1 (7*)lock the second child(right); \n...\n\n\nif (!GinPageIsLeaf(page))\n{\n...\nfor (i = FirstOffsetNumber; i <= GinPageGetOpaque(page)->maxoff; i++)\n{\nPostingItem *pitem = GinDataPageGetPostingItem(page, i);\n\n\nif (ginScanToDelete(gvs, PostingItemGetBlockNumber(pitem), FALSE, me, i))// 2.1 (3)scan to delete for the first child(left);\n // 2.2 (6)scan to delete for the second child(right);\ni--;\n}\n}\n...\nif (isempty)\n{\n/* we never delete the left- or rightmost branch */\nif (me->leftBlkno != InvalidBlockNumber && !GinPageRightMost(page))\n{\nAssert(!isRoot);\nginDeletePage(gvs, blkno, me->leftBlkno, me->parent->blkno, myoff, me->parent->isRoot);//2.2.2 (8)delete the second child(right)\nmeDelete = TRUE;\n}\n}\n\n\nif (!isRoot)\nLockBuffer(buffer, GIN_UNLOCK);//2.1.2 (5)unlock the first child(left)\n...\n}\n\n\nstatic void\nginDeletePage(GinVacuumState *gvs, BlockNumber deleteBlkno, BlockNumber leftBlkno,\n BlockNumber parentBlkno, OffsetNumber myoff, bool isParentRoot)\n{\n...\nlBuffer = ReadBufferExtended(gvs->index, MAIN_FORKNUM, leftBlkno,\nRBM_NORMAL, gvs->strategy);\n...\nLockBuffer(lBuffer, GIN_EXCLUSIVE);// 2.2.2.1 (9*)lock the first child(left); \n...\n}\n---------------------------------------------------------------------\n\n\n2. insert(bottom->top,left->right):\n\n\nhttps://www.postgresql.org/message-id/19e4290b.3bb.166ea2c6831.Coremail.chjischj%40163.com\n> # insert process(ginInsertValue())\n> \n> \n> \n> 644(root blkno)\n> |\n> 7054(2. held LWLock:0x2aaac587ae64) ----rightlink----> xxxx(3. Acquire LWLock:0x2aaab4009564,buffer = 2119038,blkno should be 9954)\n> |\n> 701(1. 
held LWLock:0x2aaab670dfe4)\n> \n> \n> The ginInsertValue() function above gets the lwlock in the order described in the README.\n> \n> \n> src/backend/access/gin/README\n> ---------------------------------------------------------------\n> To avoid deadlocks, B-tree pages must always be locked in the same order:\n> left to right, and bottom to top.\n> ...\n> -----------------------------------------------------------------\n\n\n\n\nRegards,\nChen Huajun", "msg_date": "Sun, 29 Sep 2019 16:16:28 +0800 (CST)", "msg_from": "chenhj <chjischj@163.com>", "msg_from_op": true, "msg_subject": "Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" },
{ "msg_contents": "Hi!\n\nThank you for reporting.\n\nOn Sun, Sep 29, 2019 at 11:17 AM chenhj <chjischj@163.com> wrote:\n> Does the locking order of autovacuum process(root->right->left) correct? While insert process lock gin buffer by order of bottom->top and left->right.\n>\n> 1. vacuum(root->right->left):\n\nStarting from root seems OK for me, because vacuum blocks all\nconcurrent inserts before doing this. But this needs to be properly\ndocumented in readme.\n\nLocking from right to left is clearly wrong. It could deadlock with\nconcurrent ginStepRight(), which locks from left to right. I expect\nthis happened in your case. I'm going to reproduce this and fix.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 29 Sep 2019 17:38:08 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" },
{ "msg_contents": "On Sun, Sep 29, 2019 at 5:38 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Sun, Sep 29, 2019 at 11:17 AM chenhj <chjischj@163.com> wrote:\n> > Does the locking order of autovacuum process(root->right->left) correct? 
While insert process lock gin buffer by order of bottom->top and left->right.\n> >\n> > 1. vacuum(root->right->left):\n>\n> Starting from root seems OK for me, because vacuum blocks all\n> concurrent inserts before doing this. But this needs to be properly\n> documented in readme.\n>\n> Locking from right to left is clearly wrong. It could deadlock with\n> concurrent ginStepRight(), which locks from left to right. I expect\n> this happened in your case. I'm going to reproduce this and fix.\n\nI just managed to reproduce this using two sessions on master branch.\n\nsession 1\n session 2\n\n# create table test with (autovacuum_enabled = false) as (select\narray[1] ar from generate_series(1,20000) i);\n# create index test_ar_idx on test using gin (ar);\n# vacuum analyze test;\n# delete from test;\n\n # set enable_seqscan = off;\n gdb> b ginbtree.c:150\n # select * from test where ar @> '{1}'::integer[];\n Step in gdb just before ReadBuffer() in ReleaseAndReadBuffer().\n\ngdb> b ginvacuum.c:155\n# vacuum test;\n\n gdb > continue\ngdb> continue\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sun, 29 Sep 2019 18:12:31 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Sun, Sep 29, 2019 at 6:12 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Sun, Sep 29, 2019 at 5:38 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > On Sun, Sep 29, 2019 at 11:17 AM chenhj <chjischj@163.com> wrote:\n> > > Does the locking order of autovacuum process(root->right->left) correct? While insert process lock gin buffer by order of bottom->top and left->right.\n> > >\n> > > 1. 
vacuum(root->right->left):\n> >\n> > Starting from root seems OK for me, because vacuum blocks all\n> > concurrent inserts before doing this. But this needs to be properly\n> > documented in readme.\n> >\n> > Locking from right to left is clearly wrong. It could deadlock with\n> > concurrent ginStepRight(), which locks from left to right. I expect\n> > this happened in your case. I'm going to reproduce this and fix.\n>\n> I just managed to reproduce this using two sessions on master branch.\n>\n> session 1\n> session 2\n>\n> # create table test with (autovacuum_enabled = false) as (select\n> array[1] ar from generate_series(1,20000) i);\n> # create index test_ar_idx on test using gin (ar);\n> # vacuum analyze test;\n> # delete from test;\n>\n> # set enable_seqscan = off;\n> gdb> b ginbtree.c:150\n> # select * from test where ar @> '{1}'::integer[];\n> Step in gdb just before ReadBuffer() in ReleaseAndReadBuffer().\n>\n> gdb> b ginvacuum.c:155\n> # vacuum test;\n>\n> gdb > continue\n> gdb> continue\n\nPatch with fix is attached. Idea is simple: ginScanToDelete() now\nkeeps exclusive lock on left page eliminating the need to relock it.\nSo, we preserve left-to-right locking order and can't deadlock with\nginStepRight().\n\nAlso, we need to adjust Concurrency section in GIN README. For me the\ndescription looks vague and inconsistent even with current behavior.\nI'm going to post this later.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 29 Sep 2019 19:27:28 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Sun, Sep 29, 2019 at 7:38 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Starting from root seems OK for me, because vacuum blocks all\n> concurrent inserts before doing this. 
But this needs to be properly\n> documented in readme.\n\nI never got an adequate answer to this closely related question almost\ntwo years ago:\n\nhttps://www.postgresql.org/message-id/CAH2-Wz=GTnAPzEEZqYELOv3h1Fxpo5xhMrBP6aMGEKLKv95csQ@mail.gmail.com\n\nIn general, ginInsertCleanup() seems badly designed. Why is it okay\nthat there is no nbtree-style distinction between page deletion and\npage recycling?\n\n> Locking from right to left is clearly wrong. It could deadlock with\n> concurrent ginStepRight(), which locks from left to right. I expect\n> this happened in your case. I'm going to reproduce this and fix.\n\nI am sick and tired of seeing extremely basic errors like this within\nGIN's locking protocols. Bugs happen, but these are not ordinary bugs.\nThey're more or less all a result of giving no thought to the high\nlevel design. I'm not blaming you for this, or any one person. But\nthis is not okay.\n\nAnything around index concurrency needs to be explained in\nexcruciating detail, while taking a top-down approach that applies\ngeneral rules (e.g. you can only do lock coupling left to right, or\nbottom to top in nbtree). Anything less than that should be assumed to\nbe wrong on general principle.\n\n-- \nPeter Geoghegan", "msg_date": "Sun, 29 Sep 2019 12:52:43 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "\n\n> On 29 Sep 2019, at 21:27, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> \n> Patch with fix is attached. 
Idea is simple: ginScanToDelete() now\n> keeps exclusive lock on left page eliminating the need to relock it.\n> So, we preserve left-to-right locking order and can't deadlock with\n> ginStepRight().\n\nIn this function ginDeletePage(gvs, blkno, BufferGetBlockNumber(me->leftBuffer),...)\nwe are going to reread buffer\nlBuffer = ReadBufferExtended(gvs->index, MAIN_FORKNUM, leftBlkno,\n RBM_NORMAL, gvs->strategy);\nIs it OK?\n\n\n> On 30 Sep 2019, at 0:52, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> Why is it okay\n> that there is no nbtree-style distinction between page deletion and\n> page recycling?\nAs far as I understand deleted page is stamped with\nGinPageSetDeleteXid(page, ReadNewTransactionId());\nIt will not be recycled until that Xid is far behind.\nBTW we found a small bug (wraparound) in similar GiST and B-tree implementations.\nProbably, it's there in GIN too.\n\n--\nAndrey Borodin\nOpen source RDBMS development team leader\nYandex.Cloud", "msg_date": "Mon, 30 Sep 2019 10:38:10 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Mon, Sep 30, 2019 at 8:38 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> > On 29 Sep 2019, at 21:27, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> >\n> > Patch with fix is attached. Idea is simple: ginScanToDelete() now\n> > keeps exclusive lock on left page eliminating the need to relock it.\n> > So, we preserve left-to-right locking order and can't deadlock with\n> > ginStepRight().\n>\n> In this function ginDeletePage(gvs, blkno, BufferGetBlockNumber(me->leftBuffer),...)\n> we are going to reread buffer\n> lBuffer = ReadBufferExtended(gvs->index, MAIN_FORKNUM, leftBlkno,\n> RBM_NORMAL, gvs->strategy);\n> Is it OK?\n\nThat's related not only to left buffer. We also could pass buffer\ninstead of deleteBlkno. 
And with some code changes it's also possible\nto pass buffer instead of parentBlkno. But I decided to keep code\nchanges minimal at least for backpatch version. We could apply such\noptimization to master as a separate patch.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 30 Sep 2019 20:37:54 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Sun, Sep 29, 2019 at 10:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sun, Sep 29, 2019 at 7:38 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Starting from root seems OK for me, because vacuum blocks all\n> > concurrent inserts before doing this. But this needs to be properly\n> > documented in readme.\n>\n> I never got an adequate answer to this closely related question almost\n> two years ago:\n>\n> https://www.postgresql.org/message-id/CAH2-Wz=GTnAPzEEZqYELOv3h1Fxpo5xhMrBP6aMGEKLKv95csQ@mail.gmail.com\n>\n> In general, ginInsertCleanup() seems badly designed. Why is it okay\n> that there is no nbtree-style distinction between page deletion and\n> page recycling?\n\nThank you for pointing. I remember this thread, but don't remember\ndetails. I'll reread it.\n\n> > Locking from right to left is clearly wrong. It could deadlock with\n> > concurrent ginStepRight(), which locks from left to right. I expect\n> > this happened in your case. I'm going to reproduce this and fix.\n>\n> I am sick and tired of seeing extremely basic errors like this within\n> GIN's locking protocols. Bugs happen, but these are not ordinary bugs.\n> They're more or less all a result of giving no thought to the high\n> level design. I'm not blaming you for this, or any one person. 
But\n> this is not okay.\n>\n> Anything around index concurrency needs to be explained in\n> excruciating detail, while taking a top-down approach that applies\n> general rules (e.g. you can only do lock coupling left to right, or\n> bottom to top in nbtree). Anything less than that should be assumed to\n> be wrong on general principle.\n\nFrankly speaking I'm not very happy with special version of B-tree,\nwhich is builtin to GIN. This version of B-tree is lacking of high\nkeys. AFAIR because of lack of high keys, we can't implement the same\nconcurrency model as nbtree. Instead we have to do super-locking for\npage deletion and so on. That looks ridiculous for me. I think in\nfuture we should somehow reimplement GIN on top of nbtree.\n\nBut we have to provide some way less radical fixes for our stable\nreleases. In particular, I believe patch I've posted in this thread\nmakes situation better not worse. That is, it fixes one bug without\nintroducing more bugs. But I'm going to analyze more on this and\ndocument GIN concurrency better in the README. 
Probably, I'll spot\nmore details.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 30 Sep 2019 20:59:49 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Sun, Sep 29, 2019 at 10:38 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> As far as I understand deleted page is stamped with\n> GinPageSetDeleteXid(page, ReadNewTransactionId());\n> It will not be recycled until that Xid is far behind.\n\nThat only gets used within posting tree pages, though.\nginInsertCleanup() is concerned with pending list pages.\n\n> BTW we found a small bug (wraparound) in similar GiST and B-tree implementations.\n> Probably, it's there in GIN too.\n\nProbably, but that's much less of a problem to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 30 Sep 2019 12:07:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Mon, Sep 30, 2019 at 11:00 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Thank you for pointing. I remember this thread, but don't remember\n> details. I'll reread it.\n\nI think that doing this now would be a good idea:\n\n\"\"\"\nDefensively checking the page type (pending, posting, etc) within\nscanPendingInsert() would be a good start. That's already something\nthat we do for posting trees. If we're following the same rules as\nposting trees (which sounds like a good idea), then we should have the\nsame defenses. 
Same applies to using elogs within ginInsertCleanup()\n-- we should promote those Assert()s to elog()s.\n\"\"\"\n\nIn other words, ginInsertCleanup() should have defensive \"Page types\nmatch?\" checks that are similar to the existing checks in\nginStepRight(). In both cases we're stepping right while coupling\nlocks, to avoid concurrent page deletion.\n\n> > > Locking from right to left is clearly wrong. It could deadlock with\n> > > concurrent ginStepRight(), which locks from left to right. I expect\n> > > this happened in your case. I'm going to reproduce this and fix.\n\n> Frankly speaking I'm not very happy with special version of B-tree,\n> which is builtin to GIN. This version of B-tree is lacking of high\n> keys. AFAIR because of lack of high keys, we can't implement the same\n> concurrency model as nbtree. Instead we have to do super-locking for\n> page deletion and so on. That looks ridiculous for me. I think in\n> future we should somehow reimplement GIN on top of nbtree.\n\nI think that that could work on top of the new nbtree posting list\nstuff, provided that it was acceptable to not use posting list\ncompression in the main tree -- the way that posting list splits work\nthere needs to be able to think about free space in a very simple way\nthat is broken even by GIN's varbyte compression. Compression could\nstill be used in posting trees, though.\n\n> But we have to provide some way less radical fixes for our stable\n> releases. In particular, I believe patch I've posted in this thread\n> makes situation better not worse. That is, it fixes one bug without\n> introducing more bugs. But I'm going to analyze more on this and\n> document GIN concurrency better in the README. 
Probably, I'll spot\n> more details.\n\nThanks.\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 30 Sep 2019 12:25:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Sun, Sep 29, 2019 at 8:12 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> I just managed to reproduce this using two sessions on master branch.\n>\n> session 1\n> session 2\n\nWas the involvement of the pending list stuff in Chen's example just a\ncoincidence? Can you recreate the problem while eliminating that\nfactor (i.e. while setting fastupdate to off)?\n\nChen's example involved an INSERT that deadlocked against VACUUM --\nnot a SELECT. Is this just a coincidence?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 30 Sep 2019 12:54:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Mon, Sep 30, 2019 at 10:54 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Sep 29, 2019 at 8:12 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > I just managed to reproduce this using two sessions on master branch.\n> >\n> > session 1\n> > session 2\n>\n> Was the involvement of the pending list stuff in Chen's example just a\n> coincidence? Can you recreate the problem while eliminating that\n> factor (i.e. while setting fastupdate to off)?\n>\n> Chen's example involved an INSERT that deadlocked against VACUUM --\n> not a SELECT. Is this just a coincidence?\n\nChen wrote.\n\n> Unfortunately the insert process(run by gcore) held no lwlock, it should be another process(we did not fetch core file) that hold the lwlock needed for autovacuum process.\n\nSo, he catched backtrace for INSERT and post it for information. 
But\nsince INSERT has no lwlocks held, it couldn't participate deadlock.\nIt was just side waiter.\n\nI've rerun my reproduction case and it still deadlocks. Just the same\nsteps but GIN index with (fastupdate = off).\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 1 Oct 2019 05:55:26 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Tue, Oct 1, 2019 at 5:55 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Mon, Sep 30, 2019 at 10:54 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Sun, Sep 29, 2019 at 8:12 AM Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > > I just managed to reproduce this using two sessions on master branch.\n> > >\n> > > session 1\n> > > session 2\n> >\n> > Was the involvement of the pending list stuff in Chen's example just a\n> > coincidence? Can you recreate the problem while eliminating that\n> > factor (i.e. while setting fastupdate to off)?\n> >\n> > Chen's example involved an INSERT that deadlocked against VACUUM --\n> > not a SELECT. Is this just a coincidence?\n>\n> Chen wrote.\n>\n> > Unfortunately the insert process(run by gcore) held no lwlock, it should be another process(we did not fetch core file) that hold the lwlock needed for autovacuum process.\n>\n> So, he catched backtrace for INSERT and post it for information. But\n> since INSERT has no lwlocks held, it couldn't participate deadlock.\n> It was just side waiter.\n>\n> I've rerun my reproduction case and it still deadlocks. Just the same\n> steps but GIN index with (fastupdate = off).\n\nBTW, while trying to revise README I found another bug. It appears to\nbe possible to reach deleted page from downlink. 
The reproduction\ncase is following.\n\nsession 1\n session 2\n\n# create table tmp (ar int[]) with (autovacuum_enabled = false);\n# insert into tmp (select '{1}' from generate_series(1,10000000) i);\n# insert into tmp values ('{1,2}');\n# insert into tmp (select '{1}' from generate_series(1,10000000) i);\n# create index tmp_idx on tmp using gin(ar);\n\n # delete from tmp;\n\n# set max_parallel_workers_per_gather = 0;\n/* Breakpoint where entyLoadMoreItems() calls ginFindLeafPage() to\nsearch GIN posting tree */\ngdb> b ginget.c:682\ngdb> select * from tmp where ar @> '{1,2}';\ngdb> /* step till ReleaseAndReadBuffer() releases a buffer */\n\n # vacuum tmp;\n\n# continue\n\nIt also appears that previous version of deadlock fix didn't supply\nleft sibling to leftmost child of any page. As result, internal pages\nwere never deleted. The first attached patch is revised fix is\nattached.\n\nThe second patch fix traversing to deleted page using downlink.\nSimilarly to nbtree, we just always move right if landed on deleted\npage. Also, it appears that we clear all other flags while marking\npage as deleted. That cause assert to fire. With patch, we just add\ndeleted flag without erasing others. Also, I have to remove assert\nthat ginStepRight() never steps to deleted page. 
If we land on a\ndeleted page from a downlink, then we can find another deleted page via the\nrightlink.\n\nI'm planning to continue work on the README, comments and commit messages.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 4 Oct 2019 00:05:43 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "Hi, Peter!\n\nOn Fri, Oct 4, 2019 at 12:05 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> I'm planning to continue work on the README, comments and commit messages.\n\nIt took me a lot of effort to revise the concurrency section in the README.\nI can't judge the result, but I probably did my best. I'd like to\ncommit this patchset, including both the bug fixes and the README update. But\nI'd like you to take a look at the README patch first.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 27 Oct 2019 22:03:48 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "Hi Alexander,\n\nOn Sun, Oct 27, 2019 at 7:04 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> It took me a lot of effort to revise the concurrency section in the README.\n> I can't judge the result, but I probably did my best. I'd like to\n> commit this patchset, including both the bug fixes and the README update. But\n> I'd like you to take a look at the README patch first.\n\nThank you for working on this.\n\nI am flying back to the USA today, and will try to take a look at what\nyou came up with on the way.
I will definitely have some feedback in\nthe next few days.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Oct 2019 11:00:06 +0000", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Mon, Oct 28, 2019 at 2:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sun, Oct 27, 2019 at 7:04 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > It tool me a lot of efforts to revise a concurrency section in README.\n> > I can't judge the result, but I probably did my best. I'd like to\n> > commit this patchset including both bug fixes and README update. But\n> > I'd like you to take a look on the README patch first.\n>\n> Thank you for working on this.\n>\n> I am flying back to the USA today, and will try to take a look at what\n> you came up with on the way. I will definitely have some feedback in\n> the next few days.\n\nThank you so much!\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 20:32:47 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Sun, Oct 27, 2019 at 12:04 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> (0001-Fix-deadlock-between-ginDeletePage-and-ginStepRigh-3.patch)\n\nSome thoughts on the first patch:\n\n* How do comparisons of items in each type of B-Tree work?\n\nI would like to see a statement about invariants similar to nbtree's\n\"subtree\" invariant. Looks like the \"high key\" is <= (not <) in each\ncase (i.e. 
for both entry trees and posting trees), just like nbtree.\nnbtree has an explicit high key rather than using the current highest\ndata item on the page (maybe this can be called a \"pseudo high key\" or\nsomething). That is one important difference, but in general GIN seems\nto copy nbtree. I think.\n\nHowever, GIN never explains stuff that must be affected by invariants\nin binary search routines like entryLocateLeafEntry() and\ndataLocateItem(). GIN never makes the similarities and differences\nclear. Maybe you could do more on that -- it's too hard to tell why\nentryLocateLeafEntry() and entryLocateEntry() (i.e. leaf and internal\npage binary search variants for entry B-Trees) do things differently\nto each other (and do things differently to nbtree's binary search\nroutine).\n\nThis code from entryLocateEntry() is a good example of how this can be\nconfusing:\n\n if (result == 0)\n {\n stack->off = mid;\n Assert(GinGetDownlink(itup) != GIN_ROOT_BLKNO);\n return GinGetDownlink(itup);\n }\n else if (result > 0)\n low = mid + 1;\n else\n high = mid;\n\nSome questions about this code:\n\n* Why is the \"if (result == 0)\" branch using the block number/downlink\nfrom the \"mid\" tuple as its return value?\n\nWhy not follow nbtree's _bt_binsrch() here too, by returning \"the last\nkey < scan key\" on internal pages? If your high key from one level\ndown is <=, and also a pseudo high key (i.e. both a \"real\" entry and a\nhigh key), how can it be correct to go to the block/downlink from an\nequal \"mid\" tuple like this? 
Won't that make index scans miss the\npseudo high key, which they have to return to the scan?\n\n* Why is it okay that there is no \"negative infinity\" item in internal\npages for the entry tree?\n\nGIN does not have to copy nbtree in every detail to be correct, but it\nconfuses me that it *mostly* copies nbtree, but doesn't do so *fully*.\nAs I wrote just now (or at least implied), entryIsMoveRight() works in\na similar way to _bt_moveright(), and yet we still have these apparent\ndifferences with how binary searches work that seems to not be\ncompatible with that. Maybe this is okay, but I cannot understand why\nthat is right now. It looks wrong to me -- so wrong that I suppose I\nmust be mistaken.\n\nI also spotted some fairly minor issues:\n\n* s/rightest/rightmost/\n\n* It's weird that GinBtree/GinBtreeData is a set of callbacks for\nboth posting trees and main entry trees, since the rules are now quite\ndifferent in each case. Not sure if it's worth saying something about\nthat.\n\n* \"Exclusive lock\" makes me think of \"ExclusiveLock\", which is a kind\nof heavyweight lock (not a buffer lock). I suggest changing that\nwording to avoid confusion.\n\nIn general, it seems very important to be clear about exactly how the\nkey space works. The nbtree work for v12 greatly benefitted from\ndefining comparisons in a way that didn't really change how nbtree\nworked, while at the same time minimizing I/O and making nbtree\nfaithful to Lehman & Yao's original design. It isn't obvious how\nvaluable it is to really carefully define how invariants and key\ncomparisons work, but it seems possible to solve a lot of problems\nthat way.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 29 Oct 2019 16:34:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "Hi!\n\nI'm sorry for late reply. 
I was busy with various things. Also,\ndigging into these details took some time. Please find my explanation\nbelow.\n\nOn Wed, Oct 30, 2019 at 2:34 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> In general, it seems very important to be clear about exactly how the\n> key space works. The nbtree work for v12 greatly benefitted from\n> defining comparisons in a way that didn't really change how nbtree\n> worked, while at the same time minimizing I/O and making nbtree\n> faithful to Lehman & Yao's original design. It isn't obvious how\n> valuable it is to really carefully define how invariants and key\n> comparisons work, but it seems possible to solve a lot of problems\n> that way.\n\nGIN packs downlinks and pivot keys into tuples in a different way\nthan nbtree does. Let's look at the logical structure of an internal B-tree\npage. We can represent it as follows.\n\ndownlink_1, key_1_2, downlink_2, key_2_3, .... , downlink_n, highkey\n\nkey_{i-1}_i is a pivot key, which splits the key space between\ndownlink_{i-1} and downlink_i. So, every key under downlink_{i-1} is\n<= key_{i-1}_i. Every key under downlink_i is > key_{i-1}_i. And all\nunderlying keys are <= highkey.\n\nnbtree packs that into tuples as follows (tuples are shown in parentheses).\n\n(highkey), (-inf, downlink_1), (key_1_2, downlink_2), ...\n(key_{n-1}_n, downlink_n)\n\nSo, we store the highkey separately. key_{i-1}_i and downlink_i form a\ntuple together. downlink_1 is coupled with the -inf key.\n\nGIN packs tuples in a different way.\n\n(downlink_1, key_1_2), (downlink_2, key_2_3), ... , (downlink_n, highkey)\n\nSo, in GIN, key_{i-1}_i and downlink_{i-1} form a tuple. The highkey is\ncoupled with downlink_n. And the -inf key is not needed here.\n\nBut there are a couple of notes about the highkey:\n1) In the entry tree's rightmost page, the key coupled with downlink_n doesn't\nreally matter. The highkey is assumed to be infinity.\n2) In the posting tree, the key coupled with downlink_n never\nmatters.
The highkey for non-rightmost pages is stored separately and\naccessed via GinDataPageGetRightBound().\n\nI think this explains the following:\n1) The invariants in the GIN btree are the same as they are in nbtree. Just\nthe page layout is different.\n2) The way entryLocateEntry() works. In particular, why it's OK to go to\nthe mid downlink if the corresponding key is equal.\n3) There is no \"negative infinity\" item in internal pages.\n\nDoes the explanation above look clear to you? If so, I can put\nit into the README.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 11 Nov 2019 02:42:42 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Mon, Nov 11, 2019 at 2:42 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> I'm sorry for late reply. I was busy with various things. Also\n> digging into these details took some time. Please find my explanation\n> below.\n>\n> On Wed, Oct 30, 2019 at 2:34 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > In general, it seems very important to be clear about exactly how the\n> > key space works. The nbtree work for v12 greatly benefitted from\n> > defining comparisons in a way that didn't really change how nbtree\n> > worked, while at the same time minimizing I/O and making nbtree\n> > faithful to Lehman & Yao's original design. It isn't obvious how\n> > valuable it is to really carefully define how invariants and key\n> > comparisons work, but it seems possible to solve a lot of problems\n> > that way.\n>\n> gin packs downlinks and pivot key into tuples in the different way\n> than nbtree does. Let's see the logical structure of internal B-tree\n> page. We can represent it as following.\n>\n> downlink_1, key_1_2, downlink_2, key_2_3, ....
, downlink_n, highkey\n>\n> key_{i-1}_i is pivot key, which splits key space between\n> downlink_{i-1} and downlink_i. So, every key under downlink_{i-1} is\n> <= key_{i-1}_i. Every key under downlink_i is > key_{i-1}_i. And all\n> underlying keys are <= highkey.\n>\n> nbtree packs that into tuples as followings (tuples are shown in parentheses).\n>\n> (highkey), (-inf, downlink_1), (key_1_2, downlink_2), ...\n> (key_{n-1}_n, downlink_n)\n>\n> So, we store highkey separately. key_{i-1}_i and downlink_i form a\n> tuple together. downlink_1 is coupled with -inf key.\n>\n> gin packs tuples in a different way.\n>\n> (downlink_1, key_1_2), (downlink_2, key_2_3), ... , (downlink_n, highkey)\n>\n> So, in GIN key_{i-1}_i and downlink_{i-1} form a tuple. Highkey is\n> coupled with downlink_n. And -inf key is not needed here.\n>\n> But there is couple notes about highkey:\n> 1) In entry tree rightmost page, a key coupled with downlink_n doesn't\n> really matter. Highkey is assumed to be infinity.\n> 2) In posting tree, a key coupled with downlink_n always doesn't\n> matter. Highkey for non-rightmost pages is stored separately and\n> accessed via GinDataPageGetRightBound().\n>\n> I think this explains following:\n> 1) The invariants in gin btree are same as they are in nbtree. Just\n> page layout is different.\n> 2) The way entryLocateEntry() works. In particular why it's OK to go\n> the mid downlink if corresponding key equals.\n> 3) There is no \"negative infinity\" item in internal pages.\n>\n> Does the explanation of above looks clear for you? If so, I can put\n> it into the README.\n\nSo, I've put this explanation into README patch. 
I just changed the\nnotation to better match the Lehman & Yao paper and did some minor\nenhancements.\n\nI'm going to push this patchset if there are no objections.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 17 Nov 2019 21:18:43 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" }, { "msg_contents": "On Sun, Nov 17, 2019 at 9:18 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> So, I've put this explanation into README patch. I just changed the\n> notation to better match the Lehman & Yao paper and did some minor\n> enhancements.\n>\n> I'm going to push this patchset if there are no objections.\n\nSo, pushed with minor changes during backpatching.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 20 Nov 2019 00:20:47 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Connections hang indefinitely while taking a gin index's LWLock\n buffer_content lock(PG10.7)" } ]
[ { "msg_contents": "Hi,\n\nI got the following assertion failure when I enabled recovery_min_apply_delay\nand started archive recovery (i.e., I put only recovery.signal not\nstandby.signal).\n\nTRAP: FailedAssertion(\"latch->owner_pid == MyProcPid\", File:\n\"latch.c\", Line: 522)\n\nHere is the example to reproduce the issue:\n\n----------------------------\ninitdb -D data\npg_ctl -D data start\npsql -c \"alter system set recovery_min_apply_delay to '60s'\"\npsql -c \"alter system set archive_mode to on\"\npsql -c \"alter system set archive_command to 'cp %p ../arch/%f'\"\npsql -c \"alter system set restore_command to 'cp ../arch/%f %p'\"\nmkdir arch\npg_basebackup -D bkp -c fast\npgbench -i\npgbench -t 1000\npg_ctl -D data -m i stop\nrm -rf bkp/pg_wal\nmv data/pg_wal bkp\nrm -rf data\nmv bkp data\ntouch data/recovery.signal\npg_ctl -D data -W start\n----------------------------\n\nThe latch that causes this assertion failure is recoveryWakeupLatch.\nThe ownership of this latch is taken only when standby mode is\nrequested. But this latch can be used when starting archive recovery\nwith recovery_min_apply_delay set even though it's unowned.\nSo the assertion failure happened.\n\nAttached patch fixes this issue by making archive recovery always ignore\nrecovery_min_apply_delay. This change is OK because\nrecovery_min_apply_delay was introduced for standby mode, I think.\n\nThis issue is not new in v12. I observed that the issue was reproduced\nin v11. So the back-patch is necessary.\n\nRegards,\n\n-- \nFujii Masao", "msg_date": "Mon, 30 Sep 2019 00:49:03 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "recovery_min_apply_delay in archive recovery causes assertion failure\n in latch" }, { "msg_contents": "On Mon, Sep 30, 2019 at 12:49:03AM +0900, Fujii Masao wrote:\n> Attached patch fixes this issue by making archive recovery always ignore\n> recovery_min_apply_delay. 
This change is OK because\n> recovery_min_apply_delay was introduced for standby mode, I think.\n> \n> This issue is not new in v12. I observed that the issue was reproduced\n> in v11. So the back-patch is necessary.\n\nI have not directly tested, but from a lookup at the code I think\nthat you are right. Perhaps we'd want more safeguards in\nWaitForWALToBecomeAvailable(), like an assert within the\nXLOG_FROM_STREAM part similar to the check you are adding? My point\nis that we should switch to XLOG_FROM_STREAM only if we are in standby\nmode, and that's the only place where the startup process looks at\nrecoveryWakeupLatch.\n--\nMichael", "msg_date": "Mon, 30 Sep 2019 12:42:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: recovery_min_apply_delay in archive recovery causes assertion\n failure in latch" }, { "msg_contents": "On Mon, Sep 30, 2019 at 12:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 30, 2019 at 12:49:03AM +0900, Fujii Masao wrote:\n> > Attached patch fixes this issue by making archive recovery always ignore\n> > recovery_min_apply_delay. This change is OK because\n> > recovery_min_apply_delay was introduced for standby mode, I think.\n> >\n> > This issue is not new in v12. I observed that the issue was reproduced\n> > in v11. So the back-patch is necessary.\n>\n> I have not directly tested, but from a lookup at the code I think\n> that you are right. Perhaps we'd want more safeguards in\n> WaitForWALToBecomeAvailable(), like an assert within the\n> XLOG_FROM_STREAM part similar to the check you are adding? My point\n> is that we should switch to XLOG_FROM_STREAM only if we are in standby\n> mode, and that's the only place where the startup process looks at\n> recoveryWakeupLatch.\n\nThanks for the review! 
OK, attached is the patch which also added\ntwo assertion checks as you described.\n\nRegards,\n\n-- \nFujii Masao", "msg_date": "Mon, 30 Sep 2019 17:50:03 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovery_min_apply_delay in archive recovery causes assertion\n failure in latch" }, { "msg_contents": "On Mon, Sep 30, 2019 at 05:50:03PM +0900, Fujii Masao wrote:\n> Thanks for the review! OK, attached is the patch which also added\n> two assertion checks as you described.\n\nThanks, looks fine. The indentation looks a bit wrong for the\ncomments, but that's a nit.\n--\nMichael", "msg_date": "Tue, 1 Oct 2019 14:06:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: recovery_min_apply_delay in archive recovery causes assertion\n failure in latch" }, { "msg_contents": "On Mon, Sep 30, 2019 at 12:49 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> Hi,\n>\n> I got the following assertion failure when I enabled recovery_min_apply_delay\n> and started archive recovery (i.e., I put only recovery.signal not\n> standby.signal).\n>\n> TRAP: FailedAssertion(\"latch->owner_pid == MyProcPid\", File:\n> \"latch.c\", Line: 522)\n>\n> Here is the example to reproduce the issue:\n>\n> ----------------------------\n> initdb -D data\n> pg_ctl -D data start\n> psql -c \"alter system set recovery_min_apply_delay to '60s'\"\n> psql -c \"alter system set archive_mode to on\"\n> psql -c \"alter system set archive_command to 'cp %p ../arch/%f'\"\n> psql -c \"alter system set restore_command to 'cp ../arch/%f %p'\"\n> mkdir arch\n> pg_basebackup -D bkp -c fast\n> pgbench -i\n> pgbench -t 1000\n> pg_ctl -D data -m i stop\n> rm -rf bkp/pg_wal\n> mv data/pg_wal bkp\n> rm -rf data\n> mv bkp data\n> touch data/recovery.signal\n> pg_ctl -D data -W start\n> ----------------------------\n>\n> The latch that causes this assertion failure is recoveryWakeupLatch.\n> The ownership of this 
latch is taken only when standby mode is\n> requested. But this latch can be used when starting archive recovery\n> with recovery_min_apply_delay set even though it's unowned.\n> So the assertion failure happened.\n>\n> Attached patch fixes this issue by making archive recovery always ignore\n> recovery_min_apply_delay. This change is OK because\n> recovery_min_apply_delay was introduced for standby mode, I think.\n\nNo, I found the following description in the doc.\n\n------------------------------\nThis parameter is intended for use with streaming replication deployments;\nhowever, if the parameter is specified it will be honored in all cases\n------------------------------\n\nSo archive recovery with recovery_min_apply_delay enabled would be an\nintended configuration. My patch that changes archive recovery so that\nit always ignores the setting might be in the wrong direction. Maybe we should\nmake recovery_min_apply_delay work properly even in archive recovery.\nThoughts?\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Fri, 4 Oct 2019 21:03:18 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovery_min_apply_delay in archive recovery causes assertion\n failure in latch" }, { "msg_contents": "On Fri, Oct 4, 2019 at 9:03 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Mon, Sep 30, 2019 at 12:49 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I got the following assertion failure when I enabled recovery_min_apply_delay\n> > and started archive recovery (i.e., I put only recovery.signal not\n> > standby.signal).\n> >\n> > TRAP: FailedAssertion(\"latch->owner_pid == MyProcPid\", File:\n> > \"latch.c\", Line: 522)\n> >\n> > Here is the example to reproduce the issue:\n> >\n> > ----------------------------\n> > initdb -D data\n> > pg_ctl -D data start\n> > psql -c \"alter system set recovery_min_apply_delay to '60s'\"\n> > psql -c \"alter system set archive_mode to on\"\n> > psql -c \"alter 
system set archive_command to 'cp %p ../arch/%f'\"\n> > psql -c \"alter system set restore_command to 'cp ../arch/%f %p'\"\n> > mkdir arch\n> > pg_basebackup -D bkp -c fast\n> > pgbench -i\n> > pgbench -t 1000\n> > pg_ctl -D data -m i stop\n> > rm -rf bkp/pg_wal\n> > mv data/pg_wal bkp\n> > rm -rf data\n> > mv bkp data\n> > touch data/recovery.signal\n> > pg_ctl -D data -W start\n> > ----------------------------\n> >\n> > The latch that causes this assertion failure is recoveryWakeupLatch.\n> > The ownership of this latch is taken only when standby mode is\n> > requested. But this latch can be used when starting archive recovery\n> > with recovery_min_apply_delay set even though it's unowned.\n> > So the assertion failure happened.\n> >\n> > Attached patch fixes this issue by making archive recovery always ignore\n> > recovery_min_apply_delay. This change is OK because\n> > recovery_min_apply_delay was introduced for standby mode, I think.\n>\n> No, I found the following description in the doc.\n>\n> ------------------------------\n> This parameter is intended for use with streaming replication deployments;\n> however, if the parameter is specified it will be honored in all cases\n> ------------------------------\n>\n> So archive recovery with recovery_min_apply_delay enabled would be\n> intended configuration. My patch that changes archive recovery so that\n> it always ignores thesetting might be in wrong direction. Maybe we should\n> make recovery_min_apply_delay work properly even in archive recovery.\n> Thought?\n\nPatch attached. 
This patch allows archive recovery with\nrecovery_min_apply_delay set, but not crash recovery.\n\nRegards,\n\n-- \nFujii Masao", "msg_date": "Tue, 8 Oct 2019 02:18:00 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovery_min_apply_delay in archive recovery causes assertion\n failure in latch" }, { "msg_contents": "On Tue, Oct 08, 2019 at 02:18:00AM +0900, Fujii Masao wrote:\n> On Fri, Oct 4, 2019 at 9:03 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>> So archive recovery with recovery_min_apply_delay enabled would be\n>> intended configuration. My patch that changes archive recovery so that\n>> it always ignores thesetting might be in wrong direction. Maybe we should\n>> make recovery_min_apply_delay work properly even in archive recovery.\n>> Thought?\n> \n> Patch attached. This patch allows archive recovery with\n> recovery_min_apply_delay set, but not crash recovery.\n\nRight. In short it makes no sense to wait the delay when in crash\nrecovery. After more testing I have been able to reproduce the\nfailure myself.\n\n+ /* nothing to do if crash recovery is requested */\n+ if (!ArchiveRecoveryRequested && !StandbyModeRequested)\n+ return false;\n\nArchiveRecoveryRequested will be set to true if recovery.signal or\nstandby.signal are found, so it seems to me that you can make all\nthose checks more simple by removing from the equation\nStandbyModeRequested, no? 
StandbyModeRequested is never set to true\nif ArchiveRecoveryRequested is not itself true.\n\nIt would be nice to test some scenario within 002_archiving.pl.\n--\nMichael", "msg_date": "Thu, 17 Oct 2019 14:35:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: recovery_min_apply_delay in archive recovery causes assertion\n failure in latch" }, { "msg_contents": "On Thu, Oct 17, 2019 at 02:35:13PM +0900, Michael Paquier wrote:\n> ArchiveRecoveryRequested will be set to true if recovery.signal or\n> standby.signal are found, so it seems to me that you can make all\n> those checks more simple by removing from the equation\n> StandbyModeRequested, no? StandbyModeRequested is never set to true\n> if ArchiveRecoveryRequested is not itself true.\n\nFor the sake of the archives, this has been applied by Fujii-san as of\nec1259e8.\n--\nMichael", "msg_date": "Sat, 19 Oct 2019 11:28:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: recovery_min_apply_delay in archive recovery causes assertion\n failure in latch" }, { "msg_contents": "On 2019-Oct-19, Michael Paquier wrote:\n\n> On Thu, Oct 17, 2019 at 02:35:13PM +0900, Michael Paquier wrote:\n> > ArchiveRecoveryRequested will be set to true if recovery.signal or\n> > standby.signal are found, so it seems to me that you can make all\n> > those checks more simple by removing from the equation\n> > StandbyModeRequested, no? StandbyModeRequested is never set to true\n> > if ArchiveRecoveryRequested is not itself true.\n> \n> For the sake of the archives, this has been applied by Fujii-san as of\n> ec1259e8.\n\nSo, the commit message says\n\n Fix failure of archive recovery with recovery_min_apply_delay enabled.\n \n recovery_min_apply_delay parameter is intended for use with streaming\n replication deployments. However, the document clearly explains that\n the parameter will be honored in all cases if it's specified. 
So it should\n take effect even if in archive recovery. But, previously, archive recovery\n with recovery_min_apply_delay enabled always failed, and caused assertion\n failure if --enable-caasert is enabled.\n\nbut I'm not clear on how this problem would manifest in the case of a build\nwith assertions disabled. Will it keep sleeping beyond the specified\ntime?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Dec 2019 12:35:38 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: recovery_min_apply_delay in archive recovery causes assertion\n failure in latch" }, { "msg_contents": "On Sat, Dec 14, 2019 at 12:35 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Oct-19, Michael Paquier wrote:\n>\n> > On Thu, Oct 17, 2019 at 02:35:13PM +0900, Michael Paquier wrote:\n> > > ArchiveRecoveryRequested will be set to true if recovery.signal or\n> > > standby.signal are found, so it seems to me that you can make all\n> > > those checks more simple by removing from the equation\n> > > StandbyModeRequested, no? StandbyModeRequested is never set to true\n> > > if ArchiveRecoveryRequested is not itself true.\n> >\n> > For the sake of the archives, this has been applied by Fujii-san as of\n> > ec1259e8.\n>\n> So, the commit message says\n>\n> Fix failure of archive recovery with recovery_min_apply_delay enabled.\n>\n> recovery_min_apply_delay parameter is intended for use with streaming\n> replication deployments. However, the document clearly explains that\n> the parameter will be honored in all cases if it's specified. So it should\n> take effect even if in archive recovery. 
But, previously, archive recovery\n> with recovery_min_apply_delay enabled always failed, and caused assertion\n> failure if --enable-caasert is enabled.\n>\n> but I'm not clear how would this problem manifest in the case of a build\n> with assertions disabled. Will it keep sleeping beyond the specified\n> time?\n\nWhen assertions are disabled, the recovery exits with the following messages.\n\nFATAL: cannot wait on a latch owned by another process\nLOG: startup process (PID 81007) exited with exit code 1\nLOG: terminating any other active server processes\nLOG: database system is shut down\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Mon, 16 Dec 2019 11:09:07 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovery_min_apply_delay in archive recovery causes assertion\n failure in latch" } ]
[ { "msg_contents": "\nI just tried building with Python on msys2. However, the setup of the\nlatest python doesn't fit our detection code. I see this:\n\n\n# /c/Python37/python -c 'import distutils.sysconfig;\nprint(distutils.sysconfig.get_config_vars());'\n\n{'LIBDEST': 'C:\\\\Python37\\\\Lib', 'BINLIBDEST': 'C:\\\\Python37\\\\Lib',\n'INCLUDEPY': 'C:\\\\Python37\\\\include', 'EXT_SUFFIX':\n'.cp37-win_amd64.pyd', 'EXE': '.exe', 'VERSION': '37', 'BINDIR':\n'C:\\\\Python37', 'prefix': 'C:\\\\Python37', 'exec_prefix': 'C:\\\\Python37',\n'SO': '.cp37-win_amd64.pyd', 'srcdir': 'C:\\\\Python37'}\n\n\nThe python3.dll and python37.dll files are in c:\\\\python37, i.e. the\nBINDIR as one might expect on Windows.\n\n\nIt would be nice to get this working.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 29 Sep 2019 16:16:08 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "python detection v windows" } ]
[ { "msg_contents": "\nThe configure code currently has this:\n\n\n# readline on MinGW has problems with backslashes in psql and other bugs.\n# This is particularly a problem with non-US code pages.\n# Therefore disable its use until we understand the cause. 2004-07-20\nif test \"$PORTNAME\" = \"win32\"; then\n  if test \"$with_readline\" = yes; then\n    AC_MSG_WARN([*** Readline does not work on MinGW --- disabling])\n    with_readline=no\n  fi\nfi\n\n\n2004 is a very long time ago. Has anyone looked at this more recently?\nIt would certainly be nice to have readline-enabled psql on Windows if\npossible.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 29 Sep 2019 16:55:53 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Windows v readline" }, { "msg_contents": "I found some problem with tab-complete in the 12 version.  I checked \nthis in the Windows.\n\n\nVictor Spirin\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 24 Oct 2019 19:11:59 +0300", "msg_from": "Victor Spirin <v.spirin@postgrespro.ru>", "msg_from_op": false, "msg_subject": "psql tab-complete" }, { "msg_contents": "Sorry for wrong place and contents of my message.\n\nIt seems that the VA_ARGS_NARGS (__ VA_ARGS__) macros always return 1 on \nWindows.\n\nVictor Spirin\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n24.10.2019 19:11, Victor Spirin пишет:\n> I found some problem with tab-complete in the 12 version.  I checked \n> this in the Windows.\n>\n>\n> Victor Spirin\n> Postgres Professional:http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\n\n", "msg_date": "Fri, 25 Oct 2019 00:31:11 +0300", "msg_from": "Victor Spirin <v.spirin@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: psql tab-complete" }, { "msg_contents": "Victor Spirin <v.spirin@postgrespro.ru> writes:\n> I found some problem with tab-complete in the 12 version.  I checked \n> this in the Windows.\n\nThis change seems to break the case intended by the comment,\nie given the context\n\n\tSELECT * FROM tablename WHERE <tab>\n\nwe want to offer the columns of \"tablename\" as completions.\n\nI'd be the first to agree that that's completely lame, as there\nare any number of related cases it fails to cover ... but this\n> patch isn't making it better.\n>\n> \t\t\tregards, tom lane\n>\n>\n\n\n", "msg_date": "Fri, 25 Oct 2019 00:53:02 +0300", "msg_from": "Victor Spirin <v.spirin@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: psql tab-complete" }, { "msg_contents": "This patch resolved one problem in the tab-complete.c on MSVC. The \nVA_ARGS_NARGS macros now work correctly on Windows.\n\n\nVictor Spirin\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n25.10.2019 0:53, Victor Spirin пишет:\n> Yes, I found, that VA_ARGS_NARGS(__ VA_ARGS__) macros always return 1 \n> on Windows.\n>\n> Victor Spirin\n> Postgres Professional:http://www.postgrespro.com\n> The Russian Postgres Company\n>\n> 25.10.2019 0:48, Tom Lane пишет:\n>> Victor Spirin <v.spirin@postgrespro.ru> writes:\n>>> I found some problem with tab-complete in the 12 version.  I checked\n>>> this in the Windows.\n>> This change seems to break the case intended by the comment,\n>> ie given the context\n>>\n>>     SELECT * FROM tablename WHERE <tab>\n>>\n>> we want to offer the columns of \"tablename\" as completions.\n>>\n>> I'd be the first to agree that that's completely lame, as there\n>> are any number of related cases it fails to cover ... but this\n>> patch isn't making it better.\n>>\n>>             regards, tom lane\n>>\n>>\n>\n>", "msg_date": "Fri, 25 Oct 2019 11:57:18 +0300", "msg_from": "Victor Spirin <v.spirin@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: psql tab-complete" }, { "msg_contents": "On Fri, Oct 25, 2019 at 11:57:18AM +0300, Victor Spirin wrote:\n> This patch resolved one problem in the tab-complete.c on MSVC. The\n> VA_ARGS_NARGS macros now work correctly on Windows.\n\nCan you explain why and in what the use of EXPAND() helps with MSVC\nbuilds? Any references which help to understand why this is better?\nIf this change is needed, this also surely needs a comment to explain\nthe difference.\n--\nMichael", "msg_date": "Sat, 26 Oct 2019 12:59:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: psql tab-complete" }, { "msg_contents": "On 2019-09-29 22:55, Andrew Dunstan wrote:\n> \n> The configure code currently has this:\n> \n> \n> # readline on MinGW has problems with backslashes in psql and other bugs.\n> # This is particularly a problem with non-US code pages.\n> # Therefore disable its use until we understand the cause. 2004-07-20\n> if test \"$PORTNAME\" = \"win32\"; then\n>   if test \"$with_readline\" = yes; then\n>     AC_MSG_WARN([*** Readline does not work on MinGW --- disabling])\n>     with_readline=no\n>   fi\n> fi\n> \n> \n> 2004 is a very long time ago. Has anyone looked at this more recently?\n> It would certainly be nice to have readline-enabled psql on Windows if\n> possible.\n\nI tried this out. First, it doesn't build, because readline doesn't do \nthe dllimport/dllexport dance on global variables, so all references to \nrl_* global variables in tab-complete.c fail (similar to [0]). After \npatching those out, it builds, but it doesn't work. It doesn't print a \nprompt, keys don't do anything sensible. I can enter SQL commands and \nget results back, but the readline part doesn't do anything sensible AFAICT.\n\nPerhaps I did something wrong. You can still use readline without \nglobal variables, but it seems like a serious restriction and makes me \nwonder whether this has actually ever been used before. It's curious \nthat MSYS2 ships a readline build for mingw. Is there other software \nthat uses it on Windows?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/001101c3eb6a$3b275500$f800a8c0@kuczek.pl\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 30 Dec 2019 10:08:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Windows v readline" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-09-29 22:55, Andrew Dunstan wrote:\n>> It would certainly be nice to have readline-enabled psql on Windows if\n>> possible.\n\n> I tried this out. First, it doesn't build, because readline doesn't do \n> the dllimport/dllexport dance on global variables, so all references to \n> rl_* global variables in tab-complete.c fail (similar to [0]). After \n> patching those out, it builds, but it doesn't work.\n\nWhat do you mean by \"patching those out\" --- you removed all of\ntab-complete's external variable assignments? That would certainly\ndisable tab completion, at a minimum.\n\n> It doesn't print a \n> prompt, keys don't do anything sensible. I can enter SQL commands and \n> get results back, but the readline part doesn't do anything sensible AFAICT.\n\nI wonder if readline was confused about the terminal type, or if it\ndecided that the input file was not-a-tty for some reason.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Dec 2019 08:28:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Windows v readline" }, { "msg_contents": "On 2019-12-30 14:28, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2019-09-29 22:55, Andrew Dunstan wrote:\n>>> It would certainly be nice to have readline-enabled psql on Windows if\n>>> possible.\n> \n>> I tried this out. First, it doesn't build, because readline doesn't do\n>> the dllimport/dllexport dance on global variables, so all references to\n>> rl_* global variables in tab-complete.c fail (similar to [0]). After\n>> patching those out, it builds, but it doesn't work.\n> \n> What do you mean by \"patching those out\" --- you removed all of\n> tab-complete's external variable assignments? That would certainly\n> disable tab completion, at a minimum.\n\nYeah, basically remove tab completion altogether. But readline would \nstill be useful without that for line editing and history.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jan 2020 09:06:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Windows v readline" }, { "msg_contents": "On Sat, Oct 26, 2019 at 4:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Oct 25, 2019 at 11:57:18AM +0300, Victor Spirin wrote:\n> > This patch resolved one problem in the tab-complete.c on MSVC. The\n> > VA_ARGS_NARGS macros now work correctly on Windows.\n>\n> Can you explain why and in what the use of EXPAND() helps with MSVC\n> builds? Any references which help to understand why this is better?\n> If this change is needed, this also surely needs a comment to explain\n> the difference.\n\nSince I really want to be able to use VA_ARGS_NARGS() elsewhere, I\nlooked into this. There are various derivatives of that macro, some\nusing GCC/Clang-only syntax and that work on GCC and MSVC, splattered\nall over the internet, but the original, coming as it does from a C\nstandards newsgroup[1], does not. There are also lots of complaints\nthat the original standard version doesn't work on MSVC, with\nanalysis:\n\nhttps://stackoverflow.com/questions/5134523/msvc-doesnt-expand-va-args-correctly\nhttps://stackoverflow.com/questions/32399191/va-args-expansion-using-msvc\nhttps://learn.microsoft.com/en-us/cpp/build/reference/zc-preprocessor?view=msvc-170\n\nThe short version is that __VA_ARGS__ is not tokenized the way the\nstandard requires (it's considered to be a single token unless you\nshove it back through the preprocessor again, which is what EXPAND()\ndoes), but you can fix that with /Zc:preprocessor. That switch only\nworks in Visual Studio 2019 and up, and maybe also 2017 if you spell\nit /experimental:preprocessor. We still claim to support older\ncompilers. Assuming those switches actually work as claimed, I see\ntwo choices: commit this hack with a comment reminding us to clean it\nup later, or drop 2015.\n\n[1] https://groups.google.com/g/comp.std.c/c/d-6Mj5Lko_s\n\n\n", "msg_date": "Thu, 22 Dec 2022 11:41:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql tab-complete" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Since I really want to be able to use VA_ARGS_NARGS() elsewhere,\n\n+1, that'd be handy.\n\n> ... Assuming those switches actually work as claimed, I see\n> two choices: commit this hack with a comment reminding us to clean it\n> up later, or drop 2015.\n\nAs long as we can hide the messiness inside a macro definition,\nI'd vote for the former.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Dec 2022 17:56:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql tab-complete" }, { "msg_contents": "On Wed, Dec 21, 2022 at 05:56:05PM -0500, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> Since I really want to be able to use VA_ARGS_NARGS() elsewhere,\n> \n> +1, that'd be handy.\n> \n>> ... Assuming those switches actually work as claimed, I see\n>> two choices: commit this hack with a comment reminding us to clean it\n>> up later, or drop 2015.\n> \n> As long as we can hide the messiness inside a macro definition,\n> I'd vote for the former.\n\nAgreed, even if it is worth noting that all the buildfarm members\nwith MSVC use 2017 or newer.\n--\nMichael", "msg_date": "Thu, 22 Dec 2022 11:25:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: psql tab-complete" }, { "msg_contents": "On Thu, Dec 22, 2022 at 3:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Dec 21, 2022 at 05:56:05PM -0500, Tom Lane wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> >> Since I really want to be able to use VA_ARGS_NARGS() elsewhere,\n> >\n> > +1, that'd be handy.\n> >\n> >> ... Assuming those switches actually work as claimed, I see\n> >> two choices: commit this hack with a comment reminding us to clean it\n> >> up later, or drop 2015.\n> >\n> > As long as we can hide the messiness inside a macro definition,\n> > I'd vote for the former.\n>\n> Agreed, even if it is worth noting that all the buildfarm members\n> with MSVC use 2017 or newer.\n\nThanks. Pushed.\n\nPS is it a mistake that we still mention SDK 8.1 in the docs?\n\n\n", "msg_date": "Thu, 22 Dec 2022 18:36:58 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql tab-complete" } ]
[ { "msg_contents": "$ git grep Postger\nsrc/backend/po/tr.po:\"Bu durum, sistemin semaphore set (SEMMNI) veya semaphore (SEMMNS) sayı sınırlaması aşmasında meydana gelmektedir. Belirtilen parametrelerin değerleri yükseltmelisiniz. Başka seçeneğiniz ise PostgerSQL sisteminin semaphore tütekitimini max_connections parametresini şu an %d) düşürerek azaltabilirsiniz.\\n\"\n\ncommit 3c439a58df83ae51f650cfae9878df1f9b70c4b8\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nDate: Mon May 20 16:00:53 2019 +0200\n\n Translation updates\n \n Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git\n Source-Git-Hash: a20bf6b8a5b4e32450967055eb5b07cee4704edd\n\n\n", "msg_date": "Sun, 29 Sep 2019 17:43:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "typo: postGER" }, { "msg_contents": "On 2019-09-30 00:43, Justin Pryzby wrote:\n> $ git grep Postger\n> src/backend/po/tr.po:\"Bu durum, sistemin semaphore set (SEMMNI) veya semaphore (SEMMNS) sayı sınırlaması aşmasında meydana gelmektedir. Belirtilen parametrelerin değerleri yükseltmelisiniz. Başka seçeneğiniz ise PostgerSQL sisteminin semaphore tütekitimini max_connections parametresini şu an %d) düşürerek azaltabilirsiniz.\\n\"\n\nThis has been fixed in the translation repository and will be applied in\nthe next minor releases.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 25 Oct 2019 22:39:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: typo: postGER" } ]
[ { "msg_contents": " From looking around the code, I've made these tentative observations\nabout TupleDescs:\n\n1. If the TupleDesc was obtained straight from the relcache for some\n relation, then all of its attributes should have nonzero attrelid\n identifying that relation, but in (every? nearly every?) other case,\n the attributes found in a TupleDesc will have a dummy attrelid of zero.\n\n2. The attributes in a TupleDesc will (always?) have consecutive attnum\n corresponding to their positions in the TupleDesc (and therefore\n redundant). A query, say, that projects out a subset of columns\n from a relation will not have a result TupleDesc with attributes\n still bearing their original attrelid and attnum; they'll have\n attrelid zero and consecutive renumbered attnum.\n\n Something like SendRowDescriptionCols_3 that wants the original table\n and attnum has to reconstruct them from the targetlist if available,\n\nHave I mistaken any of that?\n\nThanks,\n-Chap\n\n\n", "msg_date": "Sun, 29 Sep 2019 20:13:32 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "checking my understanding of TupleDesc" }, { "msg_contents": "On 09/29/19 20:13, Chapman Flack wrote:\n> From looking around the code, I've made these tentative observations\n> about TupleDescs:\n> \n> 1. If the TupleDesc was obtained straight from the relcache for some\n> relation, then all of its attributes should have nonzero attrelid\n> identifying that relation, but in (every? nearly every?) other case,\n> the attributes found in a TupleDesc will have a dummy attrelid of zero.\n> \n> 2. The attributes in a TupleDesc will (always?) have consecutive attnum\n> corresponding to their positions in the TupleDesc (and therefore\n> redundant). A query, say, that projects out a subset of columns\n> from a relation will not have a result TupleDesc with attributes\n> still bearing their original attrelid and attnum; they'll have\n> attrelid zero and consecutive renumbered attnum.\n> \n> Something like SendRowDescriptionCols_3 that wants the original table\n> and attnum has to reconstruct them from the targetlist if available,\n> \n> Have I mistaken any of that?\n\nAnd one more:\n\n 3. One could encounter a TupleDesc with one or more 'attisdropped'\n attributes, which do have their original attnums (corresponding\n to their positions in the TupleDesc and therefore redundant),\n so the attnums of nondropped attributes may be discontiguous.\n In building a corresponding tuple, any dropped attribute should\n have its null flag set.\n\n Is it simple to say under what circumstances a TupleDesc possibly\n with dropped members could be encountered, and under what other\n circumstances one would only encounter 'cleaned up' TupleDescs with\n no dropped attributes, and contiguous numbers for the real ones?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 11 Nov 2019 22:02:36 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: checking my understanding of TupleDesc" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 09/29/19 20:13, Chapman Flack wrote:\n>> From looking around the code, I've made these tentative observations\n>> about TupleDescs:\n>> \n>> 1. If the TupleDesc was obtained straight from the relcache for some\n>> relation, then all of its attributes should have nonzero attrelid\n>> identifying that relation, but in (every? nearly every?) other case,\n>> the attributes found in a TupleDesc will have a dummy attrelid of zero.\n\nI'm not sure about every vs. nearly every, but otherwise this seems\naccurate. Generally attrelid is meaningful in a pg_attribute catalog\nentry, but not in TupleDescs in memory. It appears valid in relcache\nentry tupdescs only because they are built straight from pg_attribute.\n\n>> 2. The attributes in a TupleDesc will (always?) have consecutive attnum\n>> corresponding to their positions in the TupleDesc (and therefore\n>> redundant).\n\nCorrect.\n\n> And one more:\n\n> 3. One could encounter a TupleDesc with one or more 'attisdropped'\n> attributes, which do have their original attnums (corresponding\n> to their positions in the TupleDesc and therefore redundant),\n> so the attnums of nondropped attributes may be discontiguous.\n\nRight.\n\n> Is it simple to say under what circumstances a TupleDesc possibly\n> with dropped members could be encountered,\n\nAny tupdesc that's describing the rowtype of a table with dropped columns\nwould look like that.\n\n> and under what other\n> circumstances one would only encounter 'cleaned up' TupleDescs with\n> no dropped attributes, and contiguous numbers for the real ones?\n\nI don't believe we ever include dropped columns in a projection result,\nso generally speaking, the output of a query plan node wouldn't have them.\n\nThere's a semi-exception, which is that the planner might decide that we\ncan skip a projection step for the output of a table scan node, in which\ncase dropped columns would be included in its output. But that would only\nbe true if there are upper plan nodes that are doing some projections of\ntheir own. The final query output will definitely not have them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Nov 2019 17:39:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: checking my understanding of TupleDesc" }, { "msg_contents": "Hi,\n\nOn 2019-11-12 17:39:20 -0500, Tom Lane wrote:\n> > and under what other\n> > circumstances one would only encounter 'cleaned up' TupleDescs with\n> > no dropped attributes, and contiguous numbers for the real ones?\n> \n> I don't believe we ever include dropped columns in a projection result,\n> so generally speaking, the output of a query plan node wouldn't have them.\n> \n> There's a semi-exception, which is that the planner might decide that we\n> can skip a projection step for the output of a table scan node, in which\n> case dropped columns would be included in its output. But that would only\n> be true if there are upper plan nodes that are doing some projections of\n> their own. The final query output will definitely not have them.\n\nI *think* we don't even do that, because build_physical_tlist() bails\nout if there's a dropped (or missing) column. Or are you thinking of\nsomething else?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Nov 2019 15:13:02 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: checking my understanding of TupleDesc" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-11-12 17:39:20 -0500, Tom Lane wrote:\n>> There's a semi-exception, which is that the planner might decide that we\n>> can skip a projection step for the output of a table scan node, in which\n>> case dropped columns would be included in its output. But that would only\n>> be true if there are upper plan nodes that are doing some projections of\n>> their own. The final query output will definitely not have them.\n\n> I *think* we don't even do that, because build_physical_tlist() bails\n> out if there's a dropped (or missing) column.\n\nAh, right. Probably because we need to insist on every column of an\nexecution-time tupdesc having a valid atttypid ... although I wonder,\nis that really necessary?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Nov 2019 18:20:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: checking my understanding of TupleDesc" }, { "msg_contents": "Hi,\n\nOn 2019-11-12 18:20:56 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-11-12 17:39:20 -0500, Tom Lane wrote:\n> >> There's a semi-exception, which is that the planner might decide that we\n> >> can skip a projection step for the output of a table scan node, in which\n> >> case dropped columns would be included in its output. But that would only\n> >> be true if there are upper plan nodes that are doing some projections of\n> >> their own. The final query output will definitely not have them.\n> \n> > I *think* we don't even do that, because build_physical_tlist() bails\n> > out if there's a dropped (or missing) column.\n> \n> Ah, right. Probably because we need to insist on every column of an\n> execution-time tupdesc having a valid atttypid ... although I wonder,\n> is that really necessary?\n\nYea, the stated reasoning is ExecTypeFromTL():\n *\n * Exception: if there are any dropped or missing columns, we punt and return\n * NIL. Ideally we would like to handle these cases too. However this\n * creates problems for ExecTypeFromTL, which may be asked to build a tupdesc\n * for a tlist that includes vars of no-longer-existent types. In theory we\n * could dig out the required info from the pg_attribute entries of the\n * relation, but that data is not readily available to ExecTypeFromTL.\n * For now, we don't apply the physical-tlist optimization when there are\n * dropped cols.\n\nI think the main problem is that we don't even have a convenient way to\nidentify that a targetlist expression is actually a dropped column, and\ntreat that differently. If we were to expand physical tlists to cover\ndropped and missing columns, we'd need to be able to add error checks to\nat least ExecInitExprRec, and to printtup_prepare_info().\n\nI wonder if we could get away with making build_physical_tlist()\nreturning a TargetEntry for a Const instead of a Var for the dropped\ncolumns? That'd contain enough information for tuple deforming to work\non higher query levels? Or perhaps we ought to invent a DroppedVar\nnode, that includes the type information? That'd make it trivial to\nerror out when such an expression is actually evaluated, and allow to\ndetect such columns. We already put Const nodes in some places like\nthat IIRC...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Nov 2019 15:54:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: checking my understanding of TupleDesc" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-11-12 18:20:56 -0500, Tom Lane wrote:\n>> Ah, right. Probably because we need to insist on every column of an\n>> execution-time tupdesc having a valid atttypid ... although I wonder,\n>> is that really necessary?\n\n> Yea, the stated reasoning is ExecTypeFromTL():\n> [ ExecTypeFromTL needs to see subexpressions with valid data types ]\n\n> I wonder if we could get away with making build_physical_tlist()\n> returning a TargetEntry for a Const instead of a Var for the dropped\n> columns? That'd contain enough information for tuple deforming to work\n> on higher query levels? Or perhaps we ought to invent a DroppedVar\n> node, that includes the type information? That'd make it trivial to\n> error out when such an expression is actually evaluated, and allow to\n> detect such columns. We already put Const nodes in some places like\n> that IIRC...\n\nYeah, a DroppedVar thing might not be a bad idea, it could substitute\nfor the dummy null constants we currently use. Note that an interesting\nproperty of such a node is that it doesn't actually *have* a type.\nA dropped column might be of a type that's been dropped too (and,\nif memory serves, we reset the column's atttypid to zero anyway).\nWhat we'd have to do is excavate atttyplen and attalign from the\npg_attribute entry and store those in the DroppedVar node. Then,\nanything reconstructing a tupdesc would have to use those fields\nand avoid a pg_type lookup.\n\nI'm not sure whether the execution-time behavior of such a node\nought to be \"throw error\" or just \"return NULL\". The precedent\nof the dummy constants suggests the latter. What would error out\nis anything that wants to extract an actual type OID from the\nexpression.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Nov 2019 09:25:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: checking my understanding of TupleDesc" } ]
[ { "msg_contents": "Hi,\n\nThe documentation for CREATE TYPE has this to say about alignment:\n\n\"The alignment parameter specifies the storage alignment required for the\ndata type. The allowed values equate to alignment on 1, 2, 4, or 8 byte\nboundaries.\"\n\n... while the documentation for pg_type has:\n\n \"c = char alignment, i.e., no alignment needed.\n s = short alignment (2 bytes on most machines).\n i = int alignment (4 bytes on most machines).\n d = double alignment (8 bytes on many machines, but by no means all).\"\n\nso, in 2019, are the alignments weaselly and variable, or are they 1,2,4,8?\n\nRegards,\n-Chap\n\n\n\n", "msg_date": "Sun, 29 Sep 2019 21:24:08 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "documentation inconsistent re: alignment" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> The documentation for CREATE TYPE has this to say about alignment:\n\n> \"The alignment parameter specifies the storage alignment required for the\n> data type. The allowed values equate to alignment on 1, 2, 4, or 8 byte\n> boundaries.\"\n\n> ... while the documentation for pg_type has:\n\n> \"c = char alignment, i.e., no alignment needed.\n> s = short alignment (2 bytes on most machines).\n> i = int alignment (4 bytes on most machines).\n> d = double alignment (8 bytes on many machines, but by no means all).\"\n\n> so, in 2019, are the alignments weaselly and variable, or are they 1,2,4,8?\n\nProbably the statement in CREATE TYPE is too strong. There are, I\nbelieve, still machines in the buildfarm where maxalign is just 4.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Oct 2019 14:47:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: documentation inconsistent re: alignment" }, { "msg_contents": "On 10/20/19 14:47, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> data type. The allowed values equate to alignment on 1, 2, 4, or 8 byte\n>> boundaries.\"\n>> ... while the documentation for pg_type has:\n>> \"c = char alignment, i.e., no alignment needed.\n>> s = short alignment (2 bytes on most machines).\n>> i = int alignment (4 bytes on most machines).\n>> d = double alignment (8 bytes on many machines, but by no means all).\"\n> \n> Probably the statement in CREATE TYPE is too strong. There are, I\n> believe, still machines in the buildfarm where maxalign is just 4.\n\nSo just closing the circle on this, the low-down seems to be that\nthe alignments called s, i, and d (by pg_type), and int2, int4, and\ndouble (by CREATE TYPE) refer to the machine values configure picks\nfor ALIGNOF_SHORT, ALIGNOF_INT, and ALIGNOF_DOUBLE, respectively.\nAnd while configure also defines an ALIGNOF_LONG, and there are\nLONGALIGN macros in c.h that use it, that one isn't a choice when\ncreating a type, presumably because it's never been usefully different\non any interesting platform?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 11 Nov 2019 22:19:24 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: documentation inconsistent re: alignment" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 10/20/19 14:47, Tom Lane wrote:\n>> Probably the statement in CREATE TYPE is too strong. There are, I\n>> believe, still machines in the buildfarm where maxalign is just 4.\n\n> So just closing the circle on this, the low-down seems to be that\n> the alignments called s, i, and d (by pg_type), and int2, int4, and\n> double (by CREATE TYPE) refer to the machine values configure picks\n> for ALIGNOF_SHORT, ALIGNOF_INT, and ALIGNOF_DOUBLE, respectively.\n\nRight.\n\n> And while configure also defines an ALIGNOF_LONG, and there are\n> LONGALIGN macros in c.h that use it, that one isn't a choice when\n> creating a type, presumably because it's never been usefully different\n> on any interesting platform?\n\nThe problem with \"long int\" is that it's 32 bits on some platforms\nand 64 bits on others, so it's not terribly useful as a basis for\na user-visible SQL type. That's why it's not accounted for in the\ntypalign options. I think ALIGNOF_LONG is just there for completeness'\nsake --- it doesn't look to me like we actually use that, or LONGALIGN,\nanyplace.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Nov 2019 10:23:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: documentation inconsistent re: alignment" } ]
[ { "msg_contents": "Hello.\n\nWhile I looked around shutdown sequence, pmdie() uses\n\"BACKEND_TYPE_AUTOVAC | BACKEND_TYPE_BGWORKER\" for sending signal\nand PostmasterStateMachine counts them using\nBACKEND_TYPE_WORKER. It is the only usage of the combined one. It\nseems to me just a leftover of da07a1e856.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 30 Sep 2019 16:39:59 +0900 (Tokyo Standard Time)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Inconsistent usage of BACKEND_* symbols" }, { "msg_contents": "On Mon, Sep 30, 2019 at 04:39:59PM +0900, Kyotaro Horiguchi wrote:\n> @@ -2740,8 +2740,8 @@ pmdie(SIGNAL_ARGS)\n> {\n> /* autovac workers are told to shut down immediately */\n> /* and bgworkers too; does this need tweaking? */\n> - SignalSomeChildren(SIGTERM,\n> - BACKEND_TYPE_AUTOVAC | BACKEND_TYPE_BGWORKER);\n> + SignalSomeChildren(SIGTERM, BACKEND_TYPE_WORKER);\n> +\n\nFor this one the comment would be inconsistent with the flags listed.\n\n> /* and the autovac launcher too */\n> if (AutoVacPID != 0)\n> signal_child(AutoVacPID, SIGTERM);\n> @@ -2821,8 +2821,7 @@ pmdie(SIGNAL_ARGS)\n> (errmsg(\"aborting any active transactions\")));\n> /* shut down all backends and workers */\n> SignalSomeChildren(SIGTERM,\n> - BACKEND_TYPE_NORMAL | BACKEND_TYPE_AUTOVAC |\n> - BACKEND_TYPE_BGWORKER);\n> + BACKEND_TYPE_NORMAL | BACKEND_TYPE_WORKER);\n\nOkay for this one.\n--\nMichael", "msg_date": "Wed, 2 Oct 2019 16:17:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Inconsistent usage of BACKEND_* symbols" } ]
[ { "msg_contents": "Detach partition does not remove the partition trigger dependency as seen\nin below scenario.\n\nrm44010_p had 2 partition p1 and p2 and p2 was detached.\n\nA. Description of a partitioned table\n\\d+ rm44010_p\n Partitioned table \"public.rm44010_p\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target\n| Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n c1 | integer | | | | plain |\n |\n c2 | integer | | | | plain |\n |\nPartition key: RANGE (c2)\nTriggers:\n rm44010_trig1 AFTER INSERT ON rm44010_p FOR EACH ROW EXECUTE FUNCTION\ntrig_func()\nPartitions: rm44010_p1 FOR VALUES FROM (1) TO (100)\n\nB. Description of the detached partition still shows the trigger.\n\\d+ rm44010_p2\n Table \"public.rm44010_p2\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target\n| Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n c1 | integer | | | | plain |\n |\n c2 | integer | | | | plain |\n |\nTriggers:\n rm44010_trig1 AFTER INSERT ON rm44010_p2 FOR EACH ROW EXECUTE FUNCTION\ntrig_func()\nAccess method: heap\n\nC. Drop Trigger on partitioned table also removes the trigger on the\ndetached partition.\nDROP TRIGGER RM44010_trig1 ON RM44010_p;\nDROP TRIGGER\n\\d+ rm44010_p2\n Table \"public.rm44010_p2\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target\n| Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n c1 | integer | | | | plain |\n |\n c2 | integer | | | | plain |\n |\nAccess method: heap\n\n\n\n--\nBeena Emerson\n\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 30 Sep 2019 15:28:53 +0530", "msg_from": "M Beena Emerson <mbeena.emerson@gmail.com>", "msg_from_op": true, "msg_subject": "Drop Trigger Mechanism with Detached partitions" },
{ "msg_contents": "On Mon, Sep 30, 2019 at 5:59 AM M Beena Emerson\n<mbeena.emerson@gmail.com> wrote:\n> Detach partition does not remove the partition trigger dependency as seen in below scenario.\n\nSounds like a bug.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 1 Oct 2019 08:14:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Drop Trigger Mechanism with Detached partitions" } ]
[ { "msg_contents": "So we now support `ANALYZE partitioned_table` which will gather statistics\nfor the main table by gathering stats from all the partitions.\n\nHowever as far as I can tell autovacuum will never actually trigger this\nanalyze. Because we never generate any update records for the parent table\nin the statistics. Have I missed something?\n\nI didn't find any discussion of this in the threads from when partitioning\nwas committed but there were a lot of discussions and I could easily have\nmissed it.\n\nIs there a story for this? Some way to configure things so that autovacuum\nwill analyze partitioned tables?\n\nOr should we look at doing something? Maybe whether we analyze a child we\nshould also update the parent -- and if there's no stats yet run analyze on\nit?\n\nThis may be a serious enough problem for users that it may warrant\nbackpatching. Not having any stats is resulting in some pretty weird plans\nfor us.", "msg_date": "Mon, 30 Sep 2019 13:48:19 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Partitioning versus autovacuum" },
{ "msg_contents": "Actually I did just find it in the To-do wiki:\n\nHave autoanalyze of parent tables occur when child tables are modified\n\n\n - http://archives.postgresql.org/pgsql-performance/2010-06/msg00137.php\n\n\nOn Mon., Sep. 30, 2019, 1:48 p.m. Greg Stark, <stark@mit.edu> wrote:\n\n> So we now support `ANALYZE partitioned_table` which will gather statistics\n> for the main table by gathering stats from all the partitions.\n>\n> However as far as I can tell autovacuum will never actually trigger this\n> analyze. Because we never generate any update records for the parent table\n> in the statistics. Have I missed something?\n>\n> I didn't find any discussion of this in the threads from when partitioning\n> was committed but there were a lot of discussions and I could easily have\n> missed it.\n>\n> Is there a story for this? Some way to configure things so that autovacuum\n> will analyze partitioned tables?\n>\n> Or should we look at doing something? Maybe whether we analyze a child we\n> should also update the parent -- and if there's no stats yet run analyze on\n> it?\n>\n> This may be a serious enough problem for users that it may warrant\n> backpatching. Not having any stats is resulting in some pretty weird plans\n> for us.\n>", "msg_date": "Mon, 30 Sep 2019 14:21:24 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Partitioning versus autovacuum" },
{ "msg_contents": "Actually -- I'm sorry to followup to myself (twice) -- but that's\nwrong. That Todo item predates the modern partitioning code. It came\nfrom when the partitioned statistics were added for inheritance trees.\nThe resulting comment almost doesn't make sense any more since it\ntalks about updates to the parent table and treats them as distinct\nfrom updates to the children.\n\nIn any case it's actually not true any more as updates to the parent\ntable aren't even tracked any more -- see below. My modest proposal is\nthat we should count any updates that arrive through the parent table\nas mods for both the parent and child.\n\nA more ambitious proposal would have updates to the children also\ncount against the parent somehow but I'm not sure exactly how. 
And I'm\nnot sure we shouldn't be updating the parent statistics whenever we\nrun analyze on a child anyways but again I'm not sure how.\n\npostgres=# postgres=# create table p (i integer primary key, t text)\npartition by range (i) ;\nCREATE TABLE\npostgres=# create table p0 partition of p for values from (0) to (10);\nCREATE TABLE\npostgres=# analyze p;\nANALYZE\npostgres=# analyze p0;\nANALYZE\npostgres=# select pg_stat_get_mod_since_analyze('p'::regclass) as p,\npg_stat_get_mod_since_analyze('p0'::regclass) as p0;\n p | p0\n---+----\n 0 | 0\n(1 row)\n\npostgres=# insert into p values (2);\nINSERT 0 1\npostgres=# select pg_stat_get_mod_since_analyze('p'::regclass) as p,\npg_stat_get_mod_since_analyze('p0'::regclass) as p0;\n p | p0\n---+----\n 0 | 1\n(1 row)\n\n\n", "msg_date": "Mon, 30 Sep 2019 15:03:05 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Partitioning versus autovacuum" }, { "msg_contents": "Hi Greg,\n\nOn Tue, Oct 1, 2019 at 4:03 AM Greg Stark <stark@mit.edu> wrote:\n>\n> Actually -- I'm sorry to followup to myself (twice) -- but that's\n> wrong. That Todo item predates the modern partitioning code. It came\n> from when the partitioned statistics were added for inheritance trees.\n> The resulting comment almost doesn't make sense any more since it\n> talks about updates to the parent table and treats them as distinct\n> from updates to the children.\n>\n> In any case it's actually not true any more as updates to the parent\n> table aren't even tracked any more -- see below. My modest proposal is\n> that we should count any updates that arrive through the parent table\n> as mods for both the parent and child.\n\nYeah, we need to teach autovacuum to consider analyzing partitioned\ntables. That is still a TODO for declarative partitioning.\n\nWe do need to weigh the trade-offs here. 
In the thread quoted in your\nprevious email, Tom expresses a concern [1] about ending up doing\nexcessive work, because partitions would be scanned twice -- first to\ncollect their own statistics and then to collect the parent's when the\nparent table is analyzed. Maybe if we find a way to calculate\nparent's stats from the partitions' stats without scanning the\npartitions, that would be great.\n\nAnother thing to consider is that users now (as of v11) have the\noption of using partitionwise plans. Consider joining two huge\npartitioned tables. If they are identically partitioned, Postgres\nplanner considers joining pairs of matching partitions and appending\nthe outputs of these smaller joins. In this case, even if the\nnon-partitionwise join couldn't use hash join, individual smaller\njoins could, because partition stats would be up to date. The\nrequirements that the tables being joined be identically partitioned\n(or be partitioned at all) might be a bit too restrictive though.\n\n> A more ambitious proposal would have updates to the children also\n> count against the parent somehow but I'm not sure exactly how. And I'm\n> not sure we shouldn't be updating the parent statistics whenever we\n> run analyze on a child anyways but again I'm not sure how.\n\nAs I mentioned above, we could try to figure out a way to \"merge\" the\nindividual partitions' statistics when they're refreshed into the\nparent's stats.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/489.1276114285%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 4 Oct 2019 11:13:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioning versus autovacuum" }, { "msg_contents": "At the risk of forking this thread... I think there's actually a\r\nplanner estimation bug here too.\r\n\r\nConsider this test case of a simple partitioned table and a simple\r\njoin. 
The cardinality estimates for each partition and the Append node\r\nare all perfectly accurate. But the estimate for the join is way off.\r\nThe corresponding test case without partitioning produces a perfect\r\ncardinality estimate for the join.\r\n\r\nI've never completely wrapped my head around the planner selectivity\r\nestimations. IIRC join restrictions are treated differently from\r\nsingle-relation restrictions. Perhaps what's happening here is that\r\nthe single-relation restrictions are being correctly estimated based\r\non the child partitions but the join restriction code hasn't been\r\ntaught the same tricks?\r\n\r\n\r\n\r\nstark=# create table p (i integer, j integer) partition by list (i);\r\nCREATE TABLE\r\n\r\nstark=# create table p0 partition of p for values in (0);\r\nCREATE TABLE\r\nstark=# create table p1 partition of p for values in (1);\r\nCREATE TABLE\r\n\r\nstark=# insert into p select 0,generate_series(1,1000);\r\nINSERT 0 1000\r\nstark=# insert into p select 1,generate_series(1,1000);\r\nINSERT 0 1000\r\n\r\nstark=# analyze p0;\r\nANALYZE\r\nstark=# analyze p1;\r\nANALYZE\r\n\r\nstark=# create table q (i integer);\r\nCREATE TABLE\r\nstark=# insert into q values (0);\r\nINSERT 0 1\r\nstark=# analyze q;\r\nANALYZE\r\n\r\n-- Query partitioned table, get wildly off row estimates for join\r\n\r\nstark=# explain analyze select * from q join p using (i) where j\r\nbetween 1 and 500;\r\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n├─────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ Hash Join (cost=1.02..44.82 rows=5 width=8) (actual\r\ntime=0.060..1.614 rows=500 loops=1) │\r\n│ Hash Cond: (p0.i = q.i)\r\n │\r\n│ -> Append (cost=0.00..40.00 rows=1000 width=8) (actual\r\ntime=0.030..1.127 rows=1000 loops=1) │\r\n│ -> Seq Scan on p0 (cost=0.00..20.00 rows=500 width=8)\r\n(actual 
time=0.029..0.440 rows=500 loops=1) │\r\n│ Filter: ((j >= 1) AND (j <= 500))\r\n │\r\n│ Rows Removed by Filter: 500\r\n │\r\n│ -> Seq Scan on p1 (cost=0.00..20.00 rows=500 width=8)\r\n(actual time=0.018..0.461 rows=500 loops=1) │\r\n│ Filter: ((j >= 1) AND (j <= 500))\r\n │\r\n│ Rows Removed by Filter: 500\r\n │\r\n│ -> Hash (cost=1.01..1.01 rows=1 width=4) (actual\r\ntime=0.011..0.012 rows=1 loops=1) │\r\n│ Buckets: 1024 Batches: 1 Memory Usage: 9kB\r\n │\r\n│ -> Seq Scan on q (cost=0.00..1.01 rows=1 width=4) (actual\r\ntime=0.005..0.006 rows=1 loops=1) │\r\n│ Planning time: 0.713 ms\r\n │\r\n│ Execution time: 1.743 ms\r\n │\r\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(14 rows)\r\n\r\n\r\n-- Query non-partitioned table get accurate row estimates for join\r\n\r\nstark=# create table pp as (Select * from p);\r\nSELECT 2000\r\nstark=# analyze pp;\r\nANALYZE\r\n\r\nstark=# explain analyze select * from q join pp using (i) where j\r\nbetween 1 and 500;\r\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n├─────────────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ Hash Join (cost=1.02..48.77 rows=500 width=8) (actual\r\ntime=0.027..0.412 rows=500 loops=1) │\r\n│ Hash Cond: (pp.i = q.i)\r\n │\r\n│ -> Seq Scan on pp (cost=0.00..39.00 rows=1000 width=8) (actual\r\ntime=0.014..0.243 rows=1000 loops=1) │\r\n│ Filter: ((j >= 1) AND (j <= 500))\r\n │\r\n│ Rows Removed by Filter: 1000\r\n │\r\n│ -> Hash (cost=1.01..1.01 rows=1 width=4) (actual\r\ntime=0.005..0.005 rows=1 loops=1) │\r\n│ Buckets: 1024 Batches: 1 Memory Usage: 9kB\r\n │\r\n│ -> Seq Scan on q (cost=0.00..1.01 rows=1 width=4) (actual\r\ntime=0.003..0.003 rows=1 loops=1) │\r\n│ Planning time: 0.160 ms\r\n │\r\n│ Execution time: 0.456 ms\r\n 
│\r\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(10 rows)\r\n", "msg_date": "Fri, 18 Oct 2019 05:21:52 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Re: Partitioning versus autovacuum" }, { "msg_contents": "Hello Greg,\n\n> At the risk of forking this thread... I think there's actually a\n> planner estimation bug here too.\n>\nI think that is not a bug. The estimation error occurred there were no\nparent's statistics. We should run analyze on *partitioned table*.\n\nHere is your test case:\ncreate table p (i integer, j integer) partition by list (i);\ncreate table p0 partition of p for values in (0);\ncreate table p1 partition of p for values in (1);\ninsert into p select 0,generate_series(1,1000);\ninsert into p select 1,generate_series(1,1000);\nanalyze p;\n\nexplain analyze select * from q join p using (i) where j between 1 and 500;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1.02..54.77 rows=500 width=8) (actual\ntime=0.180..2.960 rows=500 loops=1)\n Hash Cond: (p0.i = q.i)\n -> Append (cost=0.00..45.00 rows=1000 width=8) (actual\ntime=0.033..1.887 rows=1000 loops=1)\n -> Seq Scan on p0 (cost=0.00..20.00 rows=500 width=8)\n(actual time=0.025..0.524 rows=500 loops=1)\n Filter: ((j >= 1) AND (j <= 500))\n Rows Removed by Filter: 500\n -> Seq Scan on p1 (cost=0.00..20.00 rows=500 width=8)\n(actual time=0.014..0.499 rows=500 loops=1)\n Filter: ((j >= 1) AND (j <= 500))\n Rows Removed by Filter: 500\n -> Hash (cost=1.01..1.01 rows=1 width=4) (actual\ntime=0.103..0.104 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on q (cost=0.00..1.01 rows=1 width=4) (actual\ntime=0.072..0.074 rows=1 loops=1)\n Planning Time: 0.835 ms\n Execution Time: 3.310 ms\n(14 rows)\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n", 
"msg_date": "Mon, 2 Dec 2019 18:25:37 +0900", "msg_from": "yuzuko <yuzukohosoya@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partitioning versus autovacuum" } ]
[ { "msg_contents": "Instead of AC_STRUCT_TIMEZONE we use our own variant called\nPGAC_STRUCT_TIMEZONE that checks for tzname even if other variants were\nfound first. But since 63bd0db12199c5df043e1dea0f2b574f622b3a4c we\ndon't use tzname anymore, so we don't need this anymore.\n\nThe attached patches revert back to the standard AC_STRUCT_TIMEZONE\nmacro and do some related cleanup.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 30 Sep 2019 21:17:50 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Revert back to standard AC_STRUCT_TIMEZONE Autoconf macro" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Instead of AC_STRUCT_TIMEZONE we use our own variant called\n> PGAC_STRUCT_TIMEZONE that checks for tzname even if other variants were\n> found first. But since 63bd0db12199c5df043e1dea0f2b574f622b3a4c we\n> don't use tzname anymore, so we don't need this anymore.\n\nHmm. I wonder if we need AC_STRUCT_TIMEZONE either? Seems like\nwe should only be using our own struct pg_tm. If we could get\nrid of that configure macro altogether, we could remove some dubious\njunk like plpython.h's \"#undef HAVE_TZNAME\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2019 15:36:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Revert back to standard AC_STRUCT_TIMEZONE Autoconf macro" }, { "msg_contents": "On 2019-09-30 21:36, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Instead of AC_STRUCT_TIMEZONE we use our own variant called\n>> PGAC_STRUCT_TIMEZONE that checks for tzname even if other variants were\n>> found first. But since 63bd0db12199c5df043e1dea0f2b574f622b3a4c we\n>> don't use tzname anymore, so we don't need this anymore.\n> \n> Hmm. I wonder if we need AC_STRUCT_TIMEZONE either? 
Seems like\n> we should only be using our own struct pg_tm.\n\nThere are a few places that seem to need it, such as initdb/findtimezone.c.\n\n> If we could get\n> rid of that configure macro altogether, we could remove some dubious\n> junk like plpython.h's \"#undef HAVE_TZNAME\".\n\nWe could keep just the part of AC_STRUCT_TIMEZONE that we need, namely\nthe check for tm_zone, and remove the part about tzname.\n\nNew patch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 2 Oct 2019 07:30:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Revert back to standard AC_STRUCT_TIMEZONE Autoconf macro" }, { "msg_contents": "On 2019-10-02 07:30, Peter Eisentraut wrote:\n> On 2019-09-30 21:36, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> Instead of AC_STRUCT_TIMEZONE we use our own variant called\n>>> PGAC_STRUCT_TIMEZONE that checks for tzname even if other variants were\n>>> found first. But since 63bd0db12199c5df043e1dea0f2b574f622b3a4c we\n>>> don't use tzname anymore, so we don't need this anymore.\n>>\n>> Hmm. I wonder if we need AC_STRUCT_TIMEZONE either? 
Seems like\n>> we should only be using our own struct pg_tm.\n> \n> There are a few places that seem to need it, such as initdb/findtimezone.c.\n> \n>> If we could get\n>> rid of that configure macro altogether, we could remove some dubious\n>> junk like plpython.h's \"#undef HAVE_TZNAME\".\n> \n> We could keep just the part of AC_STRUCT_TIMEZONE that we need, namely\n> the check for tm_zone, and remove the part about tzname.\n> \n> New patch attached.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 7 Oct 2019 16:55:58 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Revert back to standard AC_STRUCT_TIMEZONE Autoconf macro" } ]
[ { "msg_contents": "We rely on pexports to extract exported symbols from DLL files (e.g. for\nlinking in PLs) when building with mingw. However, this program isn't\npresent in msys2. Instead the approved way is apparently to call\n\"gendef\" from the appropriate toolset (e.g. /mingw64/bin). I have worked\naround this on my new test machine but adding a script liker this in\n/usr/bin/pexports:\n\n\n #!/bin/sh\n gendef - \"$@\"\n\n\nHowever, requiring that is a bit ugly. Instead I think we should do\nsomething like the attached.\n\n\nI would not be surprised if we need to test the msys version elsewhere\nas time goes on, so this would stand us in good stead if we do.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 30 Sep 2019 17:06:15 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "msys2 is missing pexports" } ]
[ { "msg_contents": "For full-cluster Transparent Data Encryption (TDE), the current plan is\nto encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\noverflow). The plan is:\n\n\thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n\nWe don't see much value to encrypting vm, fsm, pg_xact, pg_multixact, or\nother files. Is that correct? Do any other PGDATA files contain user\ndata?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 30 Sep 2019 17:26:33 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Moin,\n\nOn 2019-09-30 23:26, Bruce Momjian wrote:\n> For full-cluster Transparent Data Encryption (TDE), the current plan is\n> to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n> overflow). The plan is:\n> \n> \thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n> \n> We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact, \n> or\n> other files. Is that correct? Do any other PGDATA files contain user\n> data?\n\nIMHO the general rule in crypto is: encrypt everything, or don't bother.\n\nIf you don't encrypt some things, somebody is going to find loopholes \nand sidechannels\nand partial-plaintext attacks. 
Just a silly example: If you trick the DB \ninto putting only one row per page,\nany \"bit-per-page\" map suddenly reveals information about a single \nencrypted row that it shouldn't reveal.\n\nMany people with a lot of free time on their hands will sit around, \ndrink a nice cup of tea and come up\nwith all sorts of attacks on these things that you didn't (and couldn't) \nanticipate now.\n\nSo IMHO it would be much better to err on the side of caution and \nencrypt everything possible.\n\nBest regards,\n\nTels\n\n\n", "msg_date": "Tue, 01 Oct 2019 09:32:56 +0200", "msg_from": "Tels <nospam-pg-abuse@bloodgate.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Dear Tels.\n\nOn Tue, Oct 1, 2019 at 4:33 PM Tels <nospam-pg-abuse@bloodgate.com> wrote:\n>\n> Moin,\n>\n> On 2019-09-30 23:26, Bruce Momjian wrote:\n> > For full-cluster Transparent Data Encryption (TDE), the current plan is\n> > to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n> > overflow). The plan is:\n> >\n> > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n> >\n> > We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact,\n> > or\n> > other files. Is that correct? Do any other PGDATA files contain user\n> > data?\n>\n> IMHO the general rule in crypto is: encrypt everything, or don't bother.\n>\n> If you don't encrypt some things, somebody is going to find loopholes\n> and sidechannels\n> and partial-plaintext attacks. 
Just a silly example: If you trick the DB\n> into putting only one row per page,\n> any \"bit-per-page\" map suddenly reveals information about a single\n> encrypted row that it shouldn't reveal.\n>\n> Many people with a lot of free time on their hands will sit around,\n> drink a nice cup of tea and come up\n> with all sorts of attacks on these things that you didn't (and couldn't)\n> anticipate now.\n\nThis is my thinks, but to minimize overhead, we try not to encrypt\ndata that does not store confidential data.\n\nAnd I'm not a security expert, so my thoughts may be wrong.\nBut isn't it more dangerous to encrypt predictable data?\n\nFor example, when encrypting data other than the data entered by the user,\nit is possible(maybe..) to predict the plain text data.\nAnd if these data are encrypted, I think that there will be a security problem.\n\nOf course, the encryption key will use separately.\nBut I thought it would be a problem if there were confidential data\nencrypted using the same key as the attacked data.\n\nBest regards.\nMoon.\n\n\n>\n> So IMHO it would be much better to err on the side of caution and\n> encrypt everything possible.\n>\n> Best regards,\n>\n> Tels\n>\n>\n\n\n", "msg_date": "Tue, 1 Oct 2019 17:10:49 +0900", "msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Tue, Oct 1, 2019 at 9:33 AM Tels <nospam-pg-abuse@bloodgate.com> wrote:\n\n> Moin,\n>\n> On 2019-09-30 23:26, Bruce Momjian wrote:\n> > For full-cluster Transparent Data Encryption (TDE), the current plan is\n> > to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n> > overflow). The plan is:\n> >\n> >\n> https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n> >\n> > We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact,\n> > or\n> > other files. Is that correct? 
Do any other PGDATA files contain user\n> > data?\n>\n> IMHO the general rule in crypto is: encrypt everything, or don't bother.\n>\n> If you don't encrypt some things, somebody is going to find loopholes\n> and sidechannels\n> and partial-plaintext attacks. Just a silly example: If you trick the DB\n> into putting only one row per page,\n> any \"bit-per-page\" map suddenly reveals information about a single\n> encrypted row that it shouldn't reveal.\n>\n> Many people with a lot of free time on their hands will sit around,\n> drink a nice cup of tea and come up\n> with all sorts of attacks on these things that you didn't (and couldn't)\n> anticipate now.\n>\n> So IMHO it would be much better to err on the side of caution and\n> encrypt everything possible.\n>\n\n+1.\n\nUnless we are *absolutely* certain, I bet someone will be able to find a\nside-channel that somehow leaks some data or data-about-data, if we don't\nencrypt everything. If nothing else, you can get use patterns out of it,\nand you can make a lot from that. (E.g. by whether transactions are using\nmultixacts or not you can potentially determine which transaction they are,\nif you know what type of transactions are being issued by the application.\nIn the simplest case, there might be a single pattern where multixacts end\nup actually being used, and in that case being able to see the multixact\ndata tells you a lot about the system).\n\nAs for other things -- by default, we store the log files in text format in\nthe data directory. That contains *loads* of sensitive data in a lot of\ncases. 
Will those also be encrypted?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Tue, 1 Oct 2019 10:37:32 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" },
{ "msg_contents": "Dear Magnus Hagander.\n\nOn Tue, Oct 1, 2019 at 5:37 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n>\n>\n> On Tue, Oct 1, 2019 at 9:33 AM Tels <nospam-pg-abuse@bloodgate.com> wrote:\n>>\n>> Moin,\n>>\n>> On 2019-09-30 23:26, Bruce Momjian wrote:\n>> > For full-cluster Transparent Data Encryption (TDE), the current plan is\n>> > to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n>> > overflow). The plan is:\n>> >\n>> > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n>> >\n>> > We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact,\n>> > or\n>> > other files. Is that correct? Do any other PGDATA files contain user\n>> > data?\n>>\n>> IMHO the general rule in crypto is: encrypt everything, or don't bother.\n>>\n>> If you don't encrypt some things, somebody is going to find loopholes\n>> and sidechannels\n>> and partial-plaintext attacks. 
Just a silly example: If you trick the DB\n>> into putting only one row per page,\n>> any \"bit-per-page\" map suddenly reveals information about a single\n>> encrypted row that it shouldn't reveal.\n>>\n>> Many people with a lot of free time on their hands will sit around,\n>> drink a nice cup of tea and come up\n>> with all sorts of attacks on these things that you didn't (and couldn't)\n>> anticipate now.\n>>\n>> So IMHO it would be much better to err on the side of caution and\n>> encrypt everything possible.\n>\n>\n> +1.\n>\n> Unless we are *absolutely* certain, I bet someone will be able to find a side-channel that somehow leaks some data or data-about-data, if we don't encrypt everything. If nothing else, you can get use patterns out of it, and you can make a lot from that. (E.g. by whether transactions are using multixacts or not you can potentially determine which transaction they are, if you know what type of transactions are being issued by the application. In the simplest case, there might be a single pattern where multixacts end up actually being used, and in that case being able to see the multixact data tells you a lot about the system).\n>\n> As for other things -- by default, we store the log files in text format in the data directory. That contains *loads* of sensitive data in a lot of cases. 
Will those also be encrypted?\n\n\nMaybe...as a result of the discussion so far, we decided not to encrypt\nthe server log.\n\nhttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption#What_to_encrypt.2Fdecrypt\n\nI think encrypting the server logs could be a very difficult challenge,\nand we will probably need to develop another application to view the\nencrypted server logs.\n\nBest regards.\nMoon.\n\n\n>\n> --\n> Magnus Hagander\n> Me: https://www.hagander.net/\n> Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 1 Oct 2019 18:30:39 +0900", "msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Tue, Oct 01, 2019 at 06:30:39PM +0900, Moon, Insung wrote:\n>Dear Magnus Hagander.\n>\n>On Tue, Oct 1, 2019 at 5:37 PM Magnus Hagander <magnus@hagander.net> wrote:\n>>\n>>\n>>\n>> On Tue, Oct 1, 2019 at 9:33 AM Tels <nospam-pg-abuse@bloodgate.com> wrote:\n>>>\n>>> Moin,\n>>>\n>>> On 2019-09-30 23:26, Bruce Momjian wrote:\n>>> > For full-cluster Transparent Data Encryption (TDE), the current plan is\n>>> > to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n>>> > overflow). The plan is:\n>>> >\n>>> > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n>>> >\n>>> > We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact,\n>>> > or\n>>> > other files. Is that correct? Do any other PGDATA files contain user\n>>> > data?\n>>>\n>>> IMHO the general rule in crypto is: encrypt everything, or don't bother.\n>>>\n>>> If you don't encrypt some things, somebody is going to find loopholes\n>>> and sidechannels\n>>> and partial-plaintext attacks. 
Just a silly example: If you trick the DB\n>>> into putting only one row per page,\n>>> any \"bit-per-page\" map suddenly reveals information about a single\n>>> encrypted row that it shouldn't reveal.\n>>>\n>>> Many people with a lot of free time on their hands will sit around,\n>>> drink a nice cup of tea and come up\n>>> with all sorts of attacks on these things that you didn't (and couldn't)\n>>> anticipate now.\n>>>\n>>> So IMHO it would be much better to err on the side of caution and\n>>> encrypt everything possible.\n>>\n>>\n>> +1.\n>>\n>> Unless we are *absolutely* certain, I bet someone will be able to find a side-channel that somehow leaks some data or data-about-data, if we don't encrypt everything. If nothing else, you can get use patterns out of it, and you can make a lot from that. (E.g. by whether transactions are using multixacts or not you can potentially determine which transaction they are, if you know what type of transactions are being issued by the application. In the simplest case, there might be a single pattern where multixacts end up actually being used, and in that case being able to see the multixact data tells you a lot about the system).\n>>\n>> As for other things -- by default, we store the log files in text format in the data directory. That contains *loads* of sensitive data in a lot of cases. Will those also be encrypted?\n>\n>\n>Maybe...as a result of the discussion so far, we are not encrypted of\n>the server log.\n>\n>https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#What_to_encrypt.2Fdecrypt\n>\n>I think Encrypting server logs can be a very difficult challenge,\n>and will probably need to develop another application to see the\n>encrypted server logs.\n>\n\nIMO leaks of sensitive data into the server log (say, as part of error\nmessages, slow queries, ...) are a serious issue. It's one of the main\nissues with pgcrypto-style encryption, because it's trivial to leak e.g.\nkeys into the server log. 
Even if proper key management prevents leaking\nkeys, there are still user data - say, credit card numbers, and such.\n\nSo I don't see how we could not encrypt the server log, in the end.\n\nBut yes, you're right it's a challenging topic.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 1 Oct 2019 15:48:31 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Tue, Oct 1, 2019 at 03:48:31PM +0200, Tomas Vondra wrote:\n> IMO leaks of sensitive data into the server log (say, as part of error\n> messages, slow queries, ...) are a serious issue. It's one of the main\n> issues with pgcrypto-style encryption, because it's trivial to leak e.g.\n> keys into the server log. Even if proper key management prevents leaking\nkeys, there are still user data - say, credit card numbers, and such.\n\nFortunately, the full-cluster encryption keys are stored encrypted in\npg_control and are never accessible unencrypted at the SQL level.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 1 Oct 2019 10:51:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Sep 30, 2019 at 05:26:33PM -0400, Bruce Momjian wrote:\n> For full-cluster Transparent Data Encryption (TDE), the current plan is\n> to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n> overflow). 
The plan is:\n> \n> \thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n> \n> We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact, or\n> other files. Is that correct? Do any other PGDATA files contain user\n> data?\n\nOh, there is also consideration that the pg_replslot directory might\nalso contain user data.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 1 Oct 2019 21:39:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Sep 30, 2019 at 5:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n> For full-cluster Transparent Data Encryption (TDE), the current plan is\n> to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n> overflow). The plan is:\n>\n> https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n>\n> We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact, or\n> other files. Is that correct? Do any other PGDATA files contain user\n> data?\n\nAs others have said, that sounds wrong to me. I think you need to\nencrypt everything.\n\nI'm not sold on the comments that have been made about encrypting the\nserver log. I agree that could leak data, but that seems like somebody\nelse's problem: the log files aren't really under PostgreSQL's\nmanagement in the same way as pg_clog is. 
If you want to secure your\nlogs, send them to syslog and configure it to do whatever you need.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 3 Oct 2019 10:29:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Sep 30, 2019 at 5:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > For full-cluster Transparent Data Encryption (TDE), the current plan is\n> > to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n> > overflow). The plan is:\n> >\n> > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n> >\n> > We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact, or\n> > other files. Is that correct? Do any other PGDATA files contain user\n> > data?\n> \n> As others have said, that sounds wrong to me. I think you need to\n> encrypt everything.\n\nThat isn't what other database systems do though and isn't what people\nactually asking for this feature are expecting to have or deal with.\n\nPeople who are looking for 'encrypt all the things' should and will be\nlooking at filesystem-level encryption options. That's not what this\nfeature is about.\n\n> I'm not sold on the comments that have been made about encrypting the\n> server log. I agree that could leak data, but that seems like somebody\n> else's problem: the log files aren't really under PostgreSQL's\n> management in the same way as pg_clog is. 
If you want to secure your\n> logs, send them to syslog and configure it to do whatever you need.\n\nI agree with this.\n\nThanks,\n\nStephen", "msg_date": "Thu, 3 Oct 2019 10:40:40 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Thu, Oct 03, 2019 at 10:40:40AM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Robert Haas (robertmhaas@gmail.com) wrote:\n>> On Mon, Sep 30, 2019 at 5:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n>> > For full-cluster Transparent Data Encryption (TDE), the current plan is\n>> > to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n>> > overflow). The plan is:\n>> >\n>> > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n>> >\n>> > We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact, or\n>> > other files. Is that correct? Do any other PGDATA files contain user\n>> > data?\n>>\n>> As others have said, that sounds wrong to me. I think you need to\n>> encrypt everything.\n>\n>That isn't what other database systems do though and isn't what people\n>actually asking for this feature are expecting to have or deal with.\n>\n>People who are looking for 'encrypt all the things' should and will be\n>looking at filesytem-level encryption options. That's not what this\n>feature is about.\n>\n\nThat's almost certainly not true, at least not universally.\n\nIt may be true for some people, but a lot of the people asking for\nin-database encryption essentially want to do filesystem encryption but\ncan't use it for various reasons. E.g. because they're running in\nenvironments that make filesystem encryption impossible to use (OS not\nsupporting it directly, no access to the block device, lack of admin\nprivileges, ...). 
Or maybe they worry about people with fs access.\n\nIf you look at how the two threads discussing the FDE design, both of\nthem pretty much started as \"let's do FDE in the database\".\n\n>> I'm not sold on the comments that have been made about encrypting the\n>> server log. I agree that could leak data, but that seems like somebody\n>> else's problem: the log files aren't really under PostgreSQL's\n>> management in the same way as pg_clog is. If you want to secure your\n>> logs, send them to syslog and configure it to do whatever you need.\n>\n>I agree with this.\n>\n\nI don't. I know it's not an easy problem to solve, but it may contain\nuser data (which is what we manage). We may allow disabling that, at\nwhich point it becomes someone else's problem.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Oct 2019 17:20:03 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Thu, Oct 03, 2019 at 10:40:40AM -0400, Stephen Frost wrote:\n> >People who are looking for 'encrypt all the things' should and will be\n> >looking at filesytem-level encryption options. That's not what this\n> >feature is about.\n> \n> That's almost certainly not true, at least not universally.\n> \n> It may be true for some people, but a a lot of the people asking for\n> in-database encryption essentially want to do filesystem encryption but\n> can't use it for various reasons. E.g. because they're running in\n> environments that make filesystem encryption impossible to use (OS not\n> supporting it directly, no access to the block device, lack of admin\n> privileges, ...). 
Or maybe they worry about people with fs access.\n\nAnyone coming from other database systems isn't asking for that though\nand it wouldn't be a comparable offering to other systems.\n\n> If you look at how the two threads discussing the FDE design, both of\n> them pretty much started as \"let's do FDE in the database\".\n\nAnd that's how some folks continue to see it- let's just encrypt all the\nthings, until they actually look at it and start thinking about what\nthat means and how to implement it.\n\nYeah, it'd be great to just encrypt everything, with a bunch of\ndifferent keys, all of which are stored somewhere else, and can be\nupdated and changed by the user when they need to do a rekeying, but\nthen you start having to ask about what keys need to be available when\nfor doing crash recovery, how do you handle a crash in the middle of a\nrekeying, how do you handle updating keys from the user, etc..\n\nSure, we could offer a dead simple \"here, use this one key at database\nstart to just encrypt everything\" and that would be enough for some set\nof users (a very small set, imv, but that's subjective, obviously), but\nI don't think we could dare promote that as having TDE because it\nwouldn't be at all comparable to what other databases have, and it\nwouldn't materially move us in the direction of having real TDE.\n\n> >>I'm not sold on the comments that have been made about encrypting the\n> >>server log. I agree that could leak data, but that seems like somebody\n> >>else's problem: the log files aren't really under PostgreSQL's\n> >>management in the same way as pg_clog is. If you want to secure your\n> >>logs, send them to syslog and configure it to do whatever you need.\n> >\n> >I agree with this.\n> \n> I don't. I know it's not an easy problem to solve, but it may contain\n> user data (which is what we manage). 
We may allow disabling that, at\n> which point it becomes someone else's problem.\n\nWe also send user data to clients, but I don't imagine we're suggesting\nthat we need to control what some downstream application does with that\ndata or how it gets stored. There's definitely a lot of room for\nimprovement in our logging (in an ideal world, we'd have a way to\nactually store the logs in the database, at which point it could be\nencrypted or not that way...), but I'm not seeing the need for us to\nhave a way to encrypt the log files. If we did encrypt them, we'd have\nto make sure to do it in a way that users could still access them\nwithout the database being up and running, which might be tricky if the\nkey is in the vault...\n\nThanks,\n\nStephen", "msg_date": "Thu, 3 Oct 2019 11:51:41 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On 2019-10-03 16:40, Stephen Frost wrote:\n>> As others have said, that sounds wrong to me. I think you need to\n>> encrypt everything.\n> That isn't what other database systems do though and isn't what people\n> actually asking for this feature are expecting to have or deal with.\n\nIt is what some other database systems do. Perhaps some others don't.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Oct 2019 17:57:08 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> On 2019-10-03 16:40, Stephen Frost wrote:\n> >> As others have said, that sounds wrong to me. 
I think you need to\n> >> encrypt everything.\n> > That isn't what other database systems do though and isn't what people\n> > actually asking for this feature are expecting to have or deal with.\n> \n> It is what some other database systems do. Perhaps some others don't.\n\nI looked at the contemporary databases and provided details about all of\nthem earlier in the thread. Please feel free to review that and let me\nknow if your research shows differently.\n\nThanks,\n\nStephen", "msg_date": "Thu, 3 Oct 2019 11:58:55 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Thu, Oct 03, 2019 at 11:51:41AM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> On Thu, Oct 03, 2019 at 10:40:40AM -0400, Stephen Frost wrote:\n>> >People who are looking for 'encrypt all the things' should and will be\n>> >looking at filesytem-level encryption options. That's not what this\n>> >feature is about.\n>>\n>> That's almost certainly not true, at least not universally.\n>>\n>> It may be true for some people, but a a lot of the people asking for\n>> in-database encryption essentially want to do filesystem encryption but\n>> can't use it for various reasons. E.g. because they're running in\n>> environments that make filesystem encryption impossible to use (OS not\n>> supporting it directly, no access to the block device, lack of admin\n>> privileges, ...). Or maybe they worry about people with fs access.\n>\n>Anyone coming from other database systems isn't asking for that though\n>and it wouldn't be a comparable offering to other systems.\n>\n\nI don't think that's quite accurate. 
In the previous message you claimed\n(1) this isn't what other database systems do and (2) people who want to\nencrypt everything should just use fs encryption, because that's not\nwhat TDE is about.\n\nRegarding (1), I'm pretty sure Oracle TDE does pretty much exactly this,\nat least in the mode with tablespace-level encryption. It's true there\nis also column-level mode, but from my experience it's far less used\nbecause it has a number of annoying limitations.\n\nSo I'm somewhat puzzled by your claim that people coming from other\nsystems are asking for the column-level mode. At least I'm assuming\nthat's what they're asking for, because I don't see other options.\n\n>> If you look at how the two threads discussing the FDE design, both of\n>> them pretty much started as \"let's do FDE in the database\".\n>\n>And that's how some folks continue to see it- let's just encrypt all the\n>things, until they actually look at it and start thinking about what\n>that means and how to implement it.\n>\n\nThis argument also works the other way, though. On Oracle, people often\nstart with the column-level encryption because it seems naturally\nsuperior (hey, I can encrypt just the columns I want, ...) 
and then they\nstart running into the various limitations and eventually just switch to\nthe tablespace-level encryption.\n\nNow, maybe we'll be able to solve those limitations - but I think it's\npretty unlikely, because those limitations seem quite inherent to how\nencryption affects indexes etc.\n\n>Yeah, it'd be great to just encrypt everything, with a bunch of\n>different keys, all of which are stored somewhere else, and can be\n>updated and changed by the user when they need to do a rekeying, but\n>then you start have to asking about what keys need to be available when\n>for doing crash recovery, how do you handle a crash in the middle of a\n>rekeying, how do you handle updating keys from the user, etc..\n>\n>Sure, we could offer a dead simple \"here, use this one key at database\n>start to just encrypt everything\" and that would be enough for some set\n>of users (a very small set, imv, but that's subjective, obviously), but\n>I don't think we could dare promote that as having TDE because it\n>wouldn't be at all comparable to what other databases have, and it\n>wouldn't materially move us in the direction of having real TDE.\n>\n\nI think that very much depends on the definition of what \"real TDE\". I\ndon't know what exactly that means at this point. And as I said before,\nI think such simple mode *is* comparable to (at least some) solutions\navailable in other databases (as explained above).\n\nAs for the users, I don't have any objective data about this, but I\nthink the amount of people wanting such simple solution is non-trivial.\nThat does not mean we can't extend it to support more advanced features.\n\n>> >>I'm not sold on the comments that have been made about encrypting the\n>> >>server log. I agree that could leak data, but that seems like somebody\n>> >>else's problem: the log files aren't really under PostgreSQL's\n>> >>management in the same way as pg_clog is. 
If you want to secure your\n>> >>logs, send them to syslog and configure it to do whatever you need.\n>> >\n>> >I agree with this.\n>>\n>> I don't. I know it's not an easy problem to solve, but it may contain\n>> user data (which is what we manage). We may allow disabling that, at\n>> which point it becomes someone else's problem.\n>\n>We also send user data to clients, but I don't imagine we're suggesting\n>that we need to control what some downstream application does with that\n>data or how it gets stored. There's definitely a lot of room for\n>improvement in our logging (in an ideal world, we'd have a way to\n>actually store the logs in the database, at which point it could be\n>encrypted or not that way...), but I'm not seeing the need for us to\n>have a way to encrypt the log files. If we did encrypt them, we'd have\n>to make sure to do it in a way that users could still access them\n>without the database being up and running, which might be tricky if the\n>key is in the vault...\n>\n\nThat's a bit of a straw-man argument, really. 
The client is obviously\nmeant to receive and handle sensitive data, that's its main purpose.\nFor logging systems the situation is a bit different, it's a general\npurpose tool, with no idea what the data is.\n\nI do understand it's pretty pointless to send encrypted messages to such\nexternal tools, but IMO it'd be good to implement that at least for our\ninternal logging collector.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Oct 2019 18:43:25 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Thu, Oct 03, 2019 at 11:58:55AM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n>> On 2019-10-03 16:40, Stephen Frost wrote:\n>> >> As others have said, that sounds wrong to me. I think you need to\n>> >> encrypt everything.\n>> > That isn't what other database systems do though and isn't what people\n>> > actually asking for this feature are expecting to have or deal with.\n>>\n>> It is what some other database systems do. Perhaps some others don't.\n>\n>I looked at the contemporary databases and provided details about all of\n>them earlier in the thread. Please feel free to review that and let me\n>know if your research shows differently.\n>\n\nI assume you mean this (in one of the other threads):\n\nhttps://www.postgresql.org/message-id/20190817175217.GE16436%40tamriel.snowman.net\n\nFWIW I don't see anything contradicting the idea of just encrypting\neverything (including vm, fsm etc.). 
The only case that seems to be an\nexception is the column-level encryption in Oracle, all the other\noptions (especially the database-level ones) seem to be consistent with\nthis principle.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Oct 2019 18:52:21 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Thu, Oct 03, 2019 at 11:51:41AM -0400, Stephen Frost wrote:\n> >* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> >>On Thu, Oct 03, 2019 at 10:40:40AM -0400, Stephen Frost wrote:\n> >>>People who are looking for 'encrypt all the things' should and will be\n> >>>looking at filesytem-level encryption options. That's not what this\n> >>>feature is about.\n> >>\n> >>That's almost certainly not true, at least not universally.\n> >>\n> >>It may be true for some people, but a a lot of the people asking for\n> >>in-database encryption essentially want to do filesystem encryption but\n> >>can't use it for various reasons. E.g. because they're running in\n> >>environments that make filesystem encryption impossible to use (OS not\n> >>supporting it directly, no access to the block device, lack of admin\n> >>privileges, ...). Or maybe they worry about people with fs access.\n> >\n> >Anyone coming from other database systems isn't asking for that though\n> >and it wouldn't be a comparable offering to other systems.\n> \n> I don't think that's quite accurate. 
In the previous message you claimed\n> (1) this isn't what other database systems do and (2) people who want to\n> encrypt everything should just use fs encryption, because that's not\n> what TDE is about.\n> \n> Regarding (1), I'm pretty sure Oracle TDE does pretty much exactly this,\n> at least in the mode with tablespace-level encryption. It's true there\n> is also column-level mode, but from my experience it's far less used\n> because it has a number of annoying limitations.\n\nWe're probably being too general and that's ending up with us talking\npast each other. Yes, Oracle provides tablespace and column level\nencryption, but neither case results in *everything* being encrypted.\n\n> So I'm somewhat puzzled by your claim that people coming from other\n> systems are asking for the column-level mode. At least I'm assuming\n> that's what they're asking for, because I don't see other options.\n\nI've seen asks for tablespace, table, and column-level, but it's always\nbeen about the actual data. Something like clog is an entirely internal\nstructure that doesn't include the actual data. Yes, it's possible it\ncould somehow be used for a side-channel attack, as could other things,\nsuch as WAL, and as such I'm not sure that forcing a policy of \"encrypt\neverything\" is actually a sensible approach and it definitely adds\ncomplexity and makes it a lot more difficult to come up with a sensible\nsolution.\n\n> >>If you look at how the two threads discussing the FDE design, both of\n> >>them pretty much started as \"let's do FDE in the database\".\n> >\n> >And that's how some folks continue to see it- let's just encrypt all the\n> >things, until they actually look at it and start thinking about what\n> >that means and how to implement it.\n> \n> This argument also works the other way, though. On Oracle, people often\n> start with the column-level encryption because it seems naturally\n> superior (hey, I can encrypt just the columns I want, ...) 
and then they\n> start running into the various limitations and eventually just switch to\n> the tablespace-level encryption.\n> \n> Now, maybe we'll be able to solve those limitations - but I think it's\n> pretty unlikely, because those limitations seem quite inherent to how\n> encryption affects indexes etc.\n\nIt would probably be useful to discuss the specific limitations that\nyou've seen causes people to move away from column-level encryption.\n\nI definitely agree that figuring out how to make things work with\nindexes is a non-trivial challenge, though I'm hopeful that we can come\nup with something sensible.\n\n> >Yeah, it'd be great to just encrypt everything, with a bunch of\n> >different keys, all of which are stored somewhere else, and can be\n> >updated and changed by the user when they need to do a rekeying, but\n> >then you start have to asking about what keys need to be available when\n> >for doing crash recovery, how do you handle a crash in the middle of a\n> >rekeying, how do you handle updating keys from the user, etc..\n> >\n> >Sure, we could offer a dead simple \"here, use this one key at database\n> >start to just encrypt everything\" and that would be enough for some set\n> >of users (a very small set, imv, but that's subjective, obviously), but\n> >I don't think we could dare promote that as having TDE because it\n> >wouldn't be at all comparable to what other databases have, and it\n> >wouldn't materially move us in the direction of having real TDE.\n> \n> I think that very much depends on the definition of what \"real TDE\". I\n> don't know what exactly that means at this point. 
And as I said before,\n> I think such simple mode *is* comparable to (at least some) solutions\n> available in other databases (as explained above).\n\nWhen I was researching this, I couldn't find any example of a database\nthat wouldn't start without the one magic key that encrypts everything.\nI'm happy to be told that I was wrong in my understanding of that, with\nsome examples.\n\n> As for the users, I don't have any objective data about this, but I\n> think the amount of people wanting such simple solution is non-trivial.\n> That does not mean we can't extend it to support more advanced features.\n\nThe concern that I raised before and that I continue to worry about is\nthat providing such a simple capability will have a lot of limitations\ntoo (such as having a single key and only being able to rekey during a\ncomplete downtime, because we have to re-encrypt clog, etc, etc), and\nI don't see it helping us get to more granular TDE because, for that,\nwhere we really need to start is by building a vault of some kind to\nstore the keys in and then figuring out how we do things like crash\nrecovery in a sensible way and, ideally, without needing to have access\nto all of (any of?) the keys.\n\n> >>>>I'm not sold on the comments that have been made about encrypting the\n> >>>>server log. I agree that could leak data, but that seems like somebody\n> >>>>else's problem: the log files aren't really under PostgreSQL's\n> >>>>management in the same way as pg_clog is. If you want to secure your\n> >>>>logs, send them to syslog and configure it to do whatever you need.\n> >>>\n> >>>I agree with this.\n> >>\n> >>I don't. I know it's not an easy problem to solve, but it may contain\n> >>user data (which is what we manage). 
We may allow disabling that, at\n> >>which point it becomes someone else's problem.\n> >\n> >We also send user data to clients, but I don't imagine we're suggesting\n> >that we need to control what some downstream application does with that\n> >data or how it gets stored. There's definitely a lot of room for\n> >improvement in our logging (in an ideal world, we'd have a way to\n> >actually store the logs in the database, at which point it could be\n> >encrypted or not that way...), but I'm not seeing the need for us to\n> >have a way to encrypt the log files. If we did encrypt them, we'd have\n> >to make sure to do it in a way that users could still access them\n> >without the database being up and running, which might be tricky if the\n> >key is in the vault...\n> \n> That's a bit of a straw-man argument, really. The client is obviously\n> meant to receive and handle sensitive data, that's it's main purpose.\n> For logging systems the situation is a bit different, it's a general\n> purpose tool, with no idea what the data is.\n\nThe argument you're making is that the log isn't intended to have\nsensitive data, but while that might be a nice place to get to, we\ncertainly aren't there today, which means that people should really be\nsending the logs to a location that's trusted.\n\n> I do understand it's pretty pointless to send encrypted message to such\n> external tools, but IMO it's be good to implement that at least for our\n> internal logging collector.\n\nIt's also less than user friendly to log to encrypted files that you\ncan't read without having the database system being up, so we'd have to\nfigure out at least a solution to that problem, and then if you have\ndownstream systems where the logs are going to, you have to decrypt\nthem, or have a way to have them not be encrypted perhaps.\n\nIn general, wrt the logs, I feel like it's at least a reasonably small\nand independent piece of this, though I wonder if it'll cause similar\nproblems when it comes to 
dealing with crash recovery (how do we log if\nwe don't have the key from the vault because we haven't done crash\nrecovery yet, for example...).\n\nThanks,\n\nStephen", "msg_date": "Thu, 3 Oct 2019 13:26:55 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Thu, Oct 03, 2019 at 11:58:55AM -0400, Stephen Frost wrote:\n> >* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> >>On 2019-10-03 16:40, Stephen Frost wrote:\n> >>>> As others have said, that sounds wrong to me. I think you need to\n> >>>> encrypt everything.\n> >>> That isn't what other database systems do though and isn't what people\n> >>> actually asking for this feature are expecting to have or deal with.\n> >>\n> >>It is what some other database systems do. Perhaps some others don't.\n> >\n> >I looked at the contemporary databases and provided details about all of\n> >them earlier in the thread. Please feel free to review that and let me\n> >know if your research shows differently.\n> \n> I assume you mean this (in one of the other threads):\n> \n> https://www.postgresql.org/message-id/20190817175217.GE16436%40tamriel.snowman.net\n> \n> FWIW I don't see anything contradicting the idea of just encrypting\n> everything (including vm, fsm etc.). The only case that seems to be an\n> exception is the column-level encryption in Oracle, all the other\n> options (especially the database-level ones) seem to be consistent with\n> this principle.\n\nI don't think I was arguing specifically about VM/FSM in particular but\nrather about things which, for us, are cluster level. 
Admittedly, some\nother database systems put more things into tablespaces or databases\nthan we do (it'd sure be nice if we did in some cases too, but we\ndon't...), but they do also have things *outside* of those, such that\nyou can at least bring the system up, to some extent, even if you can't\naccess a given tablespace or database.\n\nThanks,\n\nStephen", "msg_date": "Thu, 3 Oct 2019 13:29:46 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Thu, Oct 3, 2019 at 1:29 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I don't think I was arguing specifically about VM/FSM in particular but\n> rather about things which, for us, are cluster level. Admittedly, some\n> other database systems put more things into tablespaces or databases\n> than we do (it'd sure be nice if we did in some cases too, but we\n> don't...), but they do also have things *outside* of those, such that\n> you can at least bring the system up, to some extent, even if you can't\n> access a given tablespace or database.\n\nIt sounds like you're making this up as you go along. The security\nramifications of encrypting a file don't depend on whether that file\nis database-level or cluster-level, but rather on whether the contents\ncould be useful to an attacker. It doesn't seem like it would require\nmuch work at all to construct an argument that a hacker might enjoy\nhaving unfettered access to pg_clog even if no other part of the\ndatabase can be read.\n\nMy perspective on this feature is, and has always been, that there are\ntwo different things somebody might want, both of which we seem to be\ncalling \"TDE.\" One is to encrypt every single data page in the cluster\n(and possibly things other than data pages, but at least those) with a\nsingle encryption key, much as filesystem encryption would do, but\ninternal to the database. 
Contrary to your assertions, such a solution\nhas useful properties. One is that it will work the same way on any\nsystem where PostgreSQL runs, whereas filesystem encryption solutions\nvary. Another is that it does not require the cooperation of the\nperson who has root in order to set up. A third is that someone with\naccess to the system does not have automatic and unfettered access to\nthe database's data; sure, they can get it with enough work, but it's\nsignificantly harder to fish the encryption keys out of the memory\nspace of a running process than to tar up the data directory that the\nfilesystem has already decrypted for you. I would personally not care\nabout any of this based on my own background as somebody who generally\nhad to set up systems from scratch, starting with buying the\nhardware, but in enterprise and government environments they can pose\nsignificant problems.\n\nThe other thing people sometimes want is to encrypt some of the data\nwithin the database but not all of it. In my view, trying to implement\nthis is not a great idea, because it's vastly more complicated than\njust encrypting everything with one key. Would I like to have the\nfeature? Sure. Do I expect that we're going to get that feature any\ntime soon? Nope. Even the thing I described in the previous paragraph,\nas limited as it is, is complicated and could take several release\ncycles to get into committable shape. Fine-grained encryption is\nprobably an order of magnitude more complicated. The problem of\nfiguring out which keys apply to which objects does not seem to have\nany reasonably simple solution, assuming you want something that's\nneither insecure nor a badly-done hack.\n\nI am unsure what the thought process is among people, such as\nyourself, who are arguing that fine-grained encryption is the only way\nto go. It seems like you're determined to refuse a free Honda Civic on\nthe grounds that it's not a Cadillac.
It's not even like accepting the\npatch for the Honda Civic solution would some how block accepting the\nCadillac if that shows up later. It wouldn't. It would just mean that,\nunless or until that patch shows up, we'd have something rather than\nnothing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 3 Oct 2019 18:44:25 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Oct 3, 2019 at 1:29 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I don't think I was arguing specifically about VM/FSM in particular but\n> > rather about things which, for us, are cluster level. Admittedly, some\n> > other database systems put more things into tablespaces or databases\n> > than we do (it'd sure be nice if we did in some cases too, but we\n> > don't...), but they do also have things *outside* of those, such that\n> > you can at least bring the system up, to some extent, even if you can't\n> > access a given tablespace or database.\n> \n> It sounds like you're making this up as you go along. \n\nI'm not surprised, and I doubt that's really got much to do with the\nactual topic.\n\n> The security\n> ramifications of encrypting a file don't depend on whether that file\n> is database-level or cluster-level, but rather on whether the contents\n> could be useful to an attacker.\n\nI don't believe that I claimed otherwise. 
I agree with this.\n\n> It doesn't seem like it would require\n> much work at all to construct an argument that a hacker might enjoy\n> having unfettered access to pg_clog even if no other part of the\n> database can be read.\n\nThe question isn't about what hackers would like to have access to, it's\nabout what would actually provide them with a channel to get information\nthat's sensitive, and at what rate. Perhaps there's an argument to be\nmade that clog would provide a high enough rate of information that\ncould be used to glean sensitive information, but that's certainly not\nan argument that's been put forth, instead it's the knee-jerk reaction\nof \"oh goodness, if anything isn't encrypted then hackers will be able\nto get access to everything\" and that's just not a real argument.\n\n> My perspective on this feature is, and has always been, that there are\n> two different things somebody might want, both of which we seem to be\n> calling \"TDE.\" One is to encrypt every single data page in the cluster\n> (and possibly things other than data pages, but at least those) with a\n> single encryption key, much as filesystem encryption would do, but\n> internal to the database. \n\nMaking it all up as I go along notwithstanding, I did go look at other\ndatabase systems which I considered on-par with PG, shared that\ninformation here, and am basing my comments on that review.\n\nWhich database systems have you looked at which have the properties\nyou're describing above that we should be working hard towards?\n\n> The other thing people sometimes want is to encrypt some of the data\n> within the database but not all of it. In my view, trying to implement\n> this is not a great idea, because it's vastly more complicated than\n> just encrypting everything with one key. 
\n\nWhich database systems that you'd consider to be on-par with PG, and\nwhich do have TDE, don't have some mechanism for supporting multiple\nkeys and for encrypting only a subset of the data?\n\nThanks,\n\nStephen", "msg_date": "Thu, 3 Oct 2019 21:42:39 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 4, 2019 at 3:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> > It doesn't seem like it would require\n> > much work at all to construct an argument that a hacker might enjoy\n> > having unfettered access to pg_clog even if no other part of the\n> > database can be read.\n>\n> The question isn't about what hackers would like to have access to, it's\n> about what would actually provide them with a channel to get information\n> that's sensitive, and at what rate. Perhaps there's an argument to be\n> made that clog would provide a high enough rate of information that\n> could be used to glean sensitive information, but that's certainly not\n> an argument that's been put forth, instead it's the knee-jerk reaction\n> of \"oh goodness, if anything isn't encrypted then hackers will be able\n> to get access to everything\" and that's just not a real argument.\n>\n\nHuh. That is *exactly* the argument I made. 
Though granted the example was\non multixact primarily, because I think that is much more likely to leak\ninteresting information, but the basis certainly applies to all the\nmetadata.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Oct 4, 2019 at 3:42 AM Stephen Frost <sfrost@snowman.net> wrote:> It doesn't seem like it would require\n> much work at all to construct an argument that a hacker might enjoy\n> having unfettered access to pg_clog even if no other part of the\n> database can be read.\n\nThe question isn't about what hackers would like to have access to, it's\nabout what would actually provide them with a channel to get information\nthat's sensitive, and at what rate.  Perhaps there's an argument to be\nmade that clog would provide a high enough rate of information that\ncould be used to glean sensitive information, but that's certainly not\nan argument that's been put forth, instead it's the knee-jerk reaction\nof \"oh goodness, if anything isn't encrypted then hackers will be able\nto get access to everything\" and that's just not a real argument.Huh. That is *exactly* the argument I made. 
Though granted the example was on multixact primarily, because I think that is much more likely to leak interesting information, but the basis certainly applies to all the metadata.--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 4 Oct 2019 07:52:48 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Thu, Oct 3, 2019 at 4:40 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > On Mon, Sep 30, 2019 at 5:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > For full-cluster Transparent Data Encryption (TDE), the current plan is\n> > > to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n> > > overflow). The plan is:\n> > >\n> > >\n> https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n> > >\n> > > We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact,\n> or\n> > > other files. Is that correct? Do any other PGDATA files contain user\n> > > data?\n> >\n> > As others have said, that sounds wrong to me. I think you need to\n> > encrypt everything.\n>\n> That isn't what other database systems do though and isn't what people\n> actually asking for this feature are expecting to have or deal with.\n>\n\nDo any of said other database even *have* the equivalence of say pg_clog or\npg_multixact *stored outside their tablespaces*? 
(Because as long as the\ndata is in the tablespace, it's encrypted when using tablespace\nencryption..)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Fri, 4 Oct 2019 07:54:16 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Thu, Oct 3, 2019 at 9:42 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > It doesn't seem like it would require\n> > much work at all to construct an argument that a hacker might enjoy\n> > having unfettered access to pg_clog even if no other part of the\n> > database can be read.\n>\n> The question isn't about what hackers would like to have access to, it's\n> about what would actually provide them with a channel to get information\n> that's sensitive, and at what rate. Perhaps there's an argument to be\n> made that clog would provide a high enough rate of information that\n> could be used to glean sensitive information, but that's certainly not\n> an argument that's been put forth, instead it's the knee-jerk reaction\n> of \"oh goodness, if anything isn't encrypted then hackers will be able\n> to get access to everything\" and that's just not a real argument.\n\nWell, I gather that you didn't much like my characterization of your\nargument as \"making it up as you go along,\" which is probably fair,\nbut I doubt that the people who are arguing that we should encrypt\neverything will appreciate your characterization of their argument as\n\"knee-jerk\" any better.\n\nI think everyone would agree that if you have no information about a\ndatabase other than the contents of pg_clog, that's not a meaningful\ninformation leak.
You would be able to tell which transactions\ncommitted and which transactions aborted, but since you know nothing\nabout the data inside those transactions, it's of no use to you.\nHowever, in that situation, you probably wouldn't be attacking the\ndatabase in the first place. Most likely you have some knowledge about\nwhat it contains. Maybe there's a stream of sensor data that flows\ninto the database, and you can see that stream. By watching pg_clog,\nyou can see when a particular bit of data is rejected. That could be\nvaluable.\n\nTo take a highly artificial example, suppose that the database is fed\nby secret video cameras which identify the faces of everyone who\nboards a commercial aircraft and records all of those names in a\ndatabase, but high-ranking government officials are exempt from the\nprogram and there's a white-list of people whose names can't be\ninserted. When the system tries, a constraint violation occurs and the\ntransaction aborts. Now, if you see a transaction abort show up in\npg_clog, you know that either a high-ranking government official just\ntried to walk onto a plane, or the system is broken. If you see a\nwhole bunch of aborts within a few hours of each other, separated by\nlots of successful insertions, maybe you can infer a cabinet meeting.\nI don't know. That's a little bit of a stretch, but I don't see any\nreason why something like that can't happen. There are probably more\nplausible examples.\n\nThe point is that it's unreasonable, at least in my view, to decide\nthat the knowledge of which transactions commit and which transactions\nabort isn't sensitive. Yeah, on a lot of systems it won't be, but on\nsome systems it might be, so it should be encrypted.\n\nWhat I really find puzzling here is that Cybertec had a patch that\nencrypted -- well, I don't remember whether it encrypted this, but it\nencrypted a lot of stuff, and it spent a lot of time being concerned\nabout these exact kinds of issues. 
I know for example that they\nthought about the stats file, which is an even more clear vector for\ninformation leakage than we're talking about here. They thought about\nlogical decoding spill files, also a clear vector for information\nleakage. Pretty sure they also thought about WAL. That's all really\nimportant stuff, and one thing I learned from reading that patch is\nthat you can't solve those problems in a trivial, mechanical way. Some\nof those systems currently write data byte-by-byte, and converting\nthem to work block-by-block makes encrypting them a lot easier. So it\nseems to me that even if you think that patch had the dumbest key\nmanagement system in the history of the universe, you ought to be\nembracing some of the ideas that are in that patch because they'll\nmake any future encryption project easier. Instead of arguing about\nwhether these side-channel attacks are important -- and I seem not to\nbe alone here in believing that they are -- we could be working to get\ncode that has already been written to help solve those problems\ncommitted.\n\nI ask again -- why are you so opposed to a single-key,\nencrypt-everything approach? Even if you think multiple-key,\nencrypt-only-some-things is better, they don't have to block each\nother.\n\n> Which database systems have you looked at which have the properties\n> you're describing above that we should be working hard towards?\n\nI haven't studied other database systems much myself. I have, however,\ntalked with coworkers of mine who are trying to convince people to use\nPostgreSQL and/or Advanced Server, and I've heard a lot from them\nabout what the customers with whom they work would like to see. I base\nmy comments on those conversations. What I hear from them is basically\nthat anything we could give them would help. More would be better than\nless, of course. 
People would like a solution with key rotation better\nthan one without; fine-grained encryption better than coarse-grained\nencryption; less performance overhead better than more; and an\nencryption algorithm perceived as highly secure better than one\nperceived as less secure. But having anything at all would help.\n\nSecondarily, what I hear is that a lot of EnterpriseDB customers or\npotential customers reject filesystem encryption not so much because\nit's not sufficiently fine-grained, but rather because it depends on\nroot@localhost. Getting root@localhost to cooperate is difficult and\nundesirable, and also filesystem encryption doesn't help at all to\nprotect against root@localhost. I've pointed out repeatedly to many\npeople that putting the encryption inside the database doesn't\n*really* fix this problem, because root@localhost can ultimately do\nanything. But, as I said in my earlier email, people perceive that if\nthe filesystem does the encryption, root can just cp all the files and\nwin, whereas if the database does the encryption, that doesn't work,\nand root's got to work harder. That seems to matter to a lot of people\nwho are talking to my colleagues here at EnterpriseDB. That may, of\ncourse, not matter to your users, and that's fine. 
I'm not trying to\nblock people from attacking this problem from other angles; but I *am*\nfrustrated that you seem to be trying to block what seems to me to be\nthe most promising angle.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 4 Oct 2019 09:18:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 4, 2019 at 09:18:58AM -0400, Robert Haas wrote:\n> I think everyone would agree that if you have no information about a\n> database other than the contents of pg_clog, that's not a meaningful\n> information leak. You would be able to tell which transactions\n> committed and which transactions aborted, but since you know nothing\n> about the data inside those transactions, it's of no use to you.\n> However, in that situation, you probably wouldn't be attacking the\n> database in the first place. Most likely you have some knowledge about\n> what it contains. Maybe there's a stream of sensor data that flows\n> into the database, and you can see that stream. By watching pg_clog,\n> you can see when a particular bit of data is rejected. That could be\n> valuable.\n\nIt is certainly true that seeing activity in _any_ cluster file could\nleak information. However, even if we encrypted all the cluster files,\nbad actors could still get information by analyzing the file sizes and\nsize changes of relation files, and the speed of WAL creation, and even\nmonitor WAL for write activity (WAL file byte changes). I would think\nthat would leak more information than clog.\n\nI am not sure how you could secure against that information leak. 
While\nfile system encryption might do that at the storage layer, it doesn't do\nanything at the mounted file system layer.\n\nThe current approach is to encrypt anything that contains user data,\nwhich includes heap, index, and WAL files. I think replication slots\nand logical replication might also fall into that category, which is why\nI started this thread.\n\nI can see some saying that all cluster files should be encrypted, and I\ncan respect that argument. However, as outlined in the diagram linked\nto from the blog entry:\n\n\thttps://momjian.us/main/blogs/pgblog/2019.html#September_27_2019\n\nI feel that TDE, since it has limited value, and can't really avoid all\ninformation leakage, should strive to find the intersection of ease of\nimplementation, security, and compliance. If people don't think that\nlimited file encryption is secure, I get it. However, encrypting most\nor all files I think would lead us into such a \"difficult to implement\"\nscope that I would no longer be able to work on this feature. I think\nthe code complexity, fragility, potential unreliability, and even\noverhead of trying to encrypt most/all files would lead TDE to be\ngreatly delayed or never implemented. I just couldn't recommend it. \nNow, I might be totally wrong, and encryption of everything might be\njust fine, but I have to pick my projects, and such an undertaking seems\nfar too risky for me.\n\nJust for some detail, we have solved the block-level encryption problem\nby using CTR mode in most cases, but there is still a requirement for a\nnonce for every encryption operation. You can use derived keys too, but\nyou need to set up those keys for every write to encrypt files.
Maybe\nit is possible to set up a write API that handles this transparently in\nthe code, but I don't know how to do that cleanly, and I doubt if the\nvalue of encrypting everything is worth it.\n\nAs far as encrypting the log file, I can see us adding documentation to\nwarn about that, and even issue a server log message if encryption is\nenabled and syslog is not being used. (I don't know how to test if\nsyslog is being shipped to a remote server.)\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 4 Oct 2019 15:57:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 04, 2019 at 07:52:48AM +0200, Magnus Hagander wrote:\n>On Fri, Oct 4, 2019 at 3:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n>>\n>> > It doesn't seem like it would require\n>> > much work at all to construct an argument that a hacker might enjoy\n>> > having unfettered access to pg_clog even if no other part of the\n>> > database can be read.\n>>\n>> The question isn't about what hackers would like to have access to, it's\n>> about what would actually provide them with a channel to get information\n>> that's sensitive, and at what rate. Perhaps there's an argument to be\n>> made that clog would provide a high enough rate of information that\n>> could be used to glean sensitive information, but that's certainly not\n>> an argument that's been put forth, instead it's the knee-jerk reaction\n>> of \"oh goodness, if anything isn't encrypted then hackers will be able\n>> to get access to everything\" and that's just not a real argument.\n>>\n>\n>Huh. That is *exactly* the argument I made. 
Though granted the example was\n>on multixact primarily, because I think that is much more likely to leak\n>interesting information, but the basis certainly applies to all the\n>metadata.\n>\n\nIMHO we should treat everything as a serious side-channel by default,\nand only consider not encrypting it after presenting arguments why\nthat's not the case. So we shouldn't be starting with unencrypted clog\nand waiting for folks to come up with attacks leveraging that.\n\nOf course, it's impossible to prove that something is not a serious\nside-channel (it might be safe on its own, but not necessarily when\ncombined with other side-channels). And it's not black-and-white, i.e.\nthe side-channel may be leaking so little information it's not worth\nbothering with. And ultimately it's a trade-off between complexity of\nimplementation and severity of the side-channel.\n\nBut without at least trying to quantify the severity of the side-channel\nwe can't really have a discussion whether it's OK not to encrypt clog,\nwhether it can be omitted from v1 etc.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 4 Oct 2019 22:01:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Thu, Oct 03, 2019 at 01:26:55PM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> On Thu, Oct 03, 2019 at 11:51:41AM -0400, Stephen Frost wrote:\n>> >* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> >>On Thu, Oct 03, 2019 at 10:40:40AM -0400, Stephen Frost wrote:\n>> >>>People who are looking for 'encrypt all the things' should and will be\n>> >>>looking at filesystem-level encryption options.
That's not what this\n>> >>>feature is about.\n>> >>\n>> >>That's almost certainly not true, at least not universally.\n>> >>\n>> >>It may be true for some people, but a lot of the people asking for\n>> >>in-database encryption essentially want to do filesystem encryption but\n>> >>can't use it for various reasons. E.g. because they're running in\n>> >>environments that make filesystem encryption impossible to use (OS not\n>> >>supporting it directly, no access to the block device, lack of admin\n>> >>privileges, ...). Or maybe they worry about people with fs access.\n>> >\n>> >Anyone coming from other database systems isn't asking for that though\n>> >and it wouldn't be a comparable offering to other systems.\n>>\n>> I don't think that's quite accurate. In the previous message you claimed\n>> (1) this isn't what other database systems do and (2) people who want to\n>> encrypt everything should just use fs encryption, because that's not\n>> what TDE is about.\n>>\n>> Regarding (1), I'm pretty sure Oracle TDE does pretty much exactly this,\n>> at least in the mode with tablespace-level encryption. It's true there\n>> is also column-level mode, but from my experience it's far less used\n>> because it has a number of annoying limitations.\n>\n>We're probably being too general and that's ending up with us talking\n>past each other. Yes, Oracle provides tablespace and column level\n>encryption, but neither case results in *everything* being encrypted.\n>\n\nPossibly. There are far too many different TDE definitions in all those\nvarious threads.\n\n>> So I'm somewhat puzzled by your claim that people coming from other\n>> systems are asking for the column-level mode. At least I'm assuming\n>> that's what they're asking for, because I don't see other options.\n>\n>I've seen asks for tablespace, table, and column-level, but it's always\n>been about the actual data. Something like clog is an entirely internal\n>structure that doesn't include the actual data.
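To make the clog question concrete: pg_xact (clog) stores exactly two status bits per transaction ID, so anyone who can read those files recovers the full commit/abort history without touching table data. A minimal sketch of that decoding, with constants mirroring PostgreSQL's clog.c (reading real SLRU segments would additionally need to map xids to segment files; this is only an illustration of what the unencrypted bits reveal):

```python
# Sketch: what a reader of an unencrypted pg_xact (clog) page learns.
# Layout constants mirror PostgreSQL's clog.c: 2 status bits per xid,
# hence 4 transaction statuses per byte, 32768 per 8kB page.
CLOG_BITS_PER_XACT = 2
CLOG_XACTS_PER_BYTE = 4
BLCKSZ = 8192
CLOG_XACTS_PER_PAGE = BLCKSZ * CLOG_XACTS_PER_BYTE

STATUS = {0: "IN_PROGRESS", 1: "COMMITTED", 2: "ABORTED", 3: "SUB_COMMITTED"}

def xact_status(page: bytes, xid: int) -> str:
    """Decode the commit/abort status of xid from one clog page."""
    byte_in_page = (xid % CLOG_XACTS_PER_PAGE) // CLOG_XACTS_PER_BYTE
    shift = (xid % CLOG_XACTS_PER_BYTE) * CLOG_BITS_PER_XACT
    return STATUS[(page[byte_in_page] >> shift) & 0x03]

# One byte covering xids 0..3: committed, aborted, in-progress, committed.
page = bytes([0b01_00_10_01]) + bytes(BLCKSZ - 1)
for xid in range(4):
    print(xid, xact_status(page, xid))
```

Whether that stream of outcomes is a usable side-channel is exactly the severity question raised above.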
Yes, it's possible it\n>could somehow be used for a side-channel attack, as could other things,\n>such as WAL, and as such I'm not sure that forcing a policy of \"encrypt\n>everything\" is actually a sensible approach and it definitely adds\n>complexity and makes it a lot more difficult to come up with a sensible\n>solution.\n>\n\nIMHO the proven design principle is \"deny all\" by default, i.e. we\nshould start with the assumption that clog is encrypted and then present\narguments why it's not needed. Maybe it's 100% fine and we don't need to\nencrypt it, or maybe it's a minor information leak and is not worth the\nextra complexity, or maybe it's not needed for v1. But how do you know?\nI don't think that discussion happened anywhere in those threads.\n\n\n>> >>If you look at how the two threads discussing the FDE design, both of\n>> >>them pretty much started as \"let's do FDE in the database\".\n>> >\n>> >And that's how some folks continue to see it- let's just encrypt all the\n>> >things, until they actually look at it and start thinking about what\n>> >that means and how to implement it.\n>>\n>> This argument also works the other way, though. On Oracle, people often\n>> start with the column-level encryption because it seems naturally\n>> superior (hey, I can encrypt just the columns I want, ...) 
and then they\n>> start running into the various limitations and eventually just switch to\n>> the tablespace-level encryption.\n>>\n>> Now, maybe we'll be able to solve those limitations - but I think it's\n>> pretty unlikely, because those limitations seem quite inherent to how\n>> encryption affects indexes etc.\n>\n>It would probably be useful to discuss the specific limitations that\n>you've seen causes people to move away from column-level encryption.\n>\n>I definitely agree that figuring out how to make things work with\n>indexes is a non-trivial challenge, though I'm hopeful that we can come\n>up with something sensible.\n>\n\nHope is hardly something we should use to drive design decisions ...\n\nAs for the limitations, the column-level limitations in Oracle, this is\nwhat the docs [1] say:\n\n----- <quote> -----\nDo not use TDE column encryption with the following database features:\n\n Index types other than B-tree\n\n Range scan search through an index\n\n Synchronous change data capture\n\n Transportable tablespaces\n\n Columns that have been created as identity columns\n\nIn addition, you cannot use TDE column encryption to encrypt columns\nused in foreign key constraints.\n----- </quote> -----\n\nNow, some of that is obviously specific to Oracle, but at least some of\nit seems to affect us too - certainly range scans through indexes,\npossibly data capture (I believe that's mostly logical decoding),\nnon-btree indexes and identity columns.\n\nOracle also has a handy \"TDE best practices\" document [2], which says\nwhen to use column-level encryption - let me quote a couple of points:\n\n* Location of sensitive information is known\n\n* Less than 5% of all application columns are encryption candidates\n\n* Encryption candidates are not foreign-key columns\n\n* Indexes over encryption candidates are normal B-tree indexes (this\n also means no support for indexes on expressions, and likely partial\n indexes)\n\n* No support from hardware crypto 
acceleration.\n\nNow, maybe we can relax some of those limitations, or maybe those\nlimitations are acceptable for some applications. But it certainly does\nnot seem like a clearly superior choice.\n\nThere are other interesting arguments in that [2], it's worth a read.\n\n>> >Yeah, it'd be great to just encrypt everything, with a bunch of\n>> >different keys, all of which are stored somewhere else, and can be\n>> >updated and changed by the user when they need to do a rekeying, but\n>> >then you start having to ask about what keys need to be available when\n>> >doing crash recovery, how do you handle a crash in the middle of a\n>> >rekeying, how do you handle updating keys from the user, etc..\n>> >\n>> >Sure, we could offer a dead simple \"here, use this one key at database\n>> >start to just encrypt everything\" and that would be enough for some set\n>> >of users (a very small set, imv, but that's subjective, obviously), but\n>> >I don't think we could dare promote that as having TDE because it\n>> >wouldn't be at all comparable to what other databases have, and it\n>> >wouldn't materially move us in the direction of having real TDE.\n>>\n>> I think that very much depends on the definition of \"real TDE\". I\n>> don't know what exactly that means at this point.
And as I said before,\n>> I think such simple mode *is* comparable to (at least some) solutions\n>> available in other databases (as explained above).\n>\n>When I was researching this, I couldn't find any example of a database\n>that wouldn't start without the one magic key that encrypts everything.\n>I'm happy to be told that I was wrong in my understanding of that, with\n>some examples.\n>\n>> As for the users, I don't have any objective data about this, but I\n>> think the amount of people wanting such simple solution is non-trivial.\n>> That does not mean we can't extend it to support more advanced features.\n>\n>The concern that I raised before and that I continue to worry about is\n>that providing such a simple capability will have a lot of limitations\n>too (such as having a single key and only being able to rekey during a\n>complete downtime, because we have to re-encrypt clog, etc, etc), and\n>I don't see it helping us get to more granular TDE because, for that,\n>where we really need to start is by building a vault of some kind to\n>store the keys in and then figuring out how we do things like crash\n>recovery in a sensible way and, ideally, without needing to have access\n>to all of (any of?) the keys.\n>\n\nEh? I don't think that \"simple mode\" has to use a single encryption key\ninternally, I think the design with single *master* key and multiple\nencryption keys works just fine. So when changing the master key, it's\nenough to re-encrypt the encryption keys. No need for a downtime etc.\n\nOf course, in some cases it may be desirable to change those encryption\nkeys too, but that seems like a pretty inherent feature.\n\n>> >>>>I'm not sold on the comments that have been made about encrypting the\n>> >>>>server log. I agree that could leak data, but that seems like somebody\n>> >>>>else's problem: the log files aren't really under PostgreSQL's\n>> >>>>management in the same way as pg_clog is. 
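The master-key/data-key layering described above can be sketched in a few lines. This is a toy illustration only: SHA-256 in counter mode stands in for a real cipher (e.g. AES key wrap or AES-GCM), and all names are hypothetical. The point it shows is that rotating the master key re-encrypts only the small wrapped-key file, never the data, so no downtime is implied:

```python
import hashlib, secrets

def toy_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with a SHA-256 counter-mode keystream. NOT real cryptography;
    # a stdlib stand-in for AES so the key layering stays visible.
    out = bytearray()
    for off in range(0, len(data), 32):
        ks = hashlib.sha256(key + nonce + off.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[off:off + 32], ks))
    return bytes(out)

master = secrets.token_bytes(32)             # master key (KEK), held in a vault
dek = secrets.token_bytes(32)                # data encryption key (DEK)
wrapped = toy_cipher(master, b"keyfile", dek)       # small key file on disk
ct = toy_cipher(dek, b"rel:16384", b"user data")    # relation contents

# Rotating the master key re-wraps the DEK; data files are untouched.
new_master = secrets.token_bytes(32)
wrapped = toy_cipher(new_master, b"keyfile",
                     toy_cipher(master, b"keyfile", wrapped))

recovered = toy_cipher(new_master, b"keyfile", wrapped)
print(toy_cipher(recovered, b"rel:16384", ct))
```

Changing the DEKs themselves would still require rewriting data, which is the separate, inherently expensive case mentioned above.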
If you want to secure your\n>> >>>>logs, send them to syslog and configure it to do whatever you need.\n>> >>>\n>> >>>I agree with this.\n>> >>\n>> >>I don't. I know it's not an easy problem to solve, but it may contain\n>> >>user data (which is what we manage). We may allow disabling that, at\n>> >>which point it becomes someone else's problem.\n>> >\n>> >We also send user data to clients, but I don't imagine we're suggesting\n>> >that we need to control what some downstream application does with that\n>> >data or how it gets stored. There's definitely a lot of room for\n>> >improvement in our logging (in an ideal world, we'd have a way to\n>> >actually store the logs in the database, at which point it could be\n>> >encrypted or not that way...), but I'm not seeing the need for us to\n>> >have a way to encrypt the log files. If we did encrypt them, we'd have\n>> >to make sure to do it in a way that users could still access them\n>> >without the database being up and running, which might be tricky if the\n>> >key is in the vault...\n>>\n>> That's a bit of a straw-man argument, really. 
The client is obviously\n>> meant to receive and handle sensitive data, that's its main purpose.\n>> For logging systems the situation is a bit different, it's a general\n>> purpose tool, with no idea what the data is.\n>\n>The argument you're making is that the log isn't intended to have\n>sensitive data, but while that might be a nice place to get to, we\n>certainly aren't there today, which means that people should really be\n>sending the logs to a location that's trusted.\n>\n\nWhich means they can't really send it anywhere, because they don't have\ncontrol over what will be in error messages etc.\n\nLet me quote the PCI DSS standard, which seems like a good example:\n\n 3.4 Render Primary Account Number (PAN), at minimum, unreadable\n anywhere it is stored (including data on portable digital media,\n backup media, in logs) by using any of the following approaches:\n\n * One-way hashes based on strong cryptography\n\n * Truncation\n\n * Index tokens and pads (pads must be securely stored)\n\n * Strong cryptography with associated key management processes and\n procedures.\n\nI'm no PCI DSS expert, but how can you comply with this (assuming you\nwant to store PAN in the database) by only sending the data to trusted\nsystems?\n\n>> I do understand it's pretty pointless to send encrypted messages to such\n>> external tools, but IMO it'd be good to implement that at least for our\n>> internal logging collector.\n>\n>It's also less than user friendly to log to encrypted files that you\n>can't read without having the database system being up, so we'd have to\n>figure out at least a solution to that problem, and then if you have\n>downstream systems where the logs are going to, you have to decrypt\n>them, or have a way to have them not be encrypted perhaps.\n>\n\nI don't see why the database would have to be up, as long as the vault\nis accessible somehow (i.e.
I can imagine a tool for reading encrypted\nlogs, requesting the key from the same vault).\n\n>In general, wrt the logs, I feel like it's at least a reasonably small\n>and independent piece of this, though I wonder if it'll cause similar\n>problems when it comes to dealing with crash recovery (how do we log if\n>we don't have the key from the vault because we haven't done crash\n>recovery yet, for example...).\n>\n\nPossibly, I don't have an opinion on this.\n\nregards\n\n\n[1] https://docs.oracle.com/en/database/oracle/oracle-database/18/asoag/configuring-transparent-data-encryption.html\n\n[2] https://www.oracle.com/technetwork/database/security/twp-transparent-data-encryption-bes-130696.pdf\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 4 Oct 2019 22:46:57 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 4, 2019 at 10:46:57PM +0200, Tomas Vondra wrote:\n> Oracle also has a handy \"TDE best practices\" document [2], which says\n> when to use column-level encryption - let me quote a couple of points:\n> \n> * Location of sensitive information is known\n> \n> * Less than 5% of all application columns are encryption candidates\n> \n> * Encryption candidates are not foreign-key columns\n> \n> * Indexes over encryption candidates are normal B-tree indexes (this\n> also means no support for indexes on expressions, and likely partial\n> indexes)\n> \n> * No support from hardware crypto acceleration.\n\nAren't all modern systems going to have hardware crypto acceleration,\ni.e., AES-NI CPU extensions. Does that mean there is no value of\npartial encryption on such systems? 
Looking at the overhead numbers I\nhave seen for AES-NI-enabled systems, I believe it.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 4 Oct 2019 16:58:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 04, 2019 at 03:57:32PM -0400, Bruce Momjian wrote:\n>On Fri, Oct 4, 2019 at 09:18:58AM -0400, Robert Haas wrote:\n>> I think everyone would agree that if you have no information about a\n>> database other than the contents of pg_clog, that's not a meaningful\n>> information leak. You would be able to tell which transactions\n>> committed and which transactions aborted, but since you know nothing\n>> about the data inside those transactions, it's of no use to you.\n>> However, in that situation, you probably wouldn't be attacking the\n>> database in the first place. Most likely you have some knowledge about\n>> what it contains. Maybe there's a stream of sensor data that flows\n>> into the database, and you can see that stream. By watching pg_clog,\n>> you can see when a particular bit of data is rejected. That could be\n>> valuable.\n>\n>It is certainly true that seeing activity in _any_ cluster file could\n>leak information. However, even if we encrypted all the cluster files,\n>bad actors could still get information by analyzing the file sizes and\n>size changes of relation files, and the speed of WAL creation, and even\n>monitor WAL for write activity (WAL file byte changes). I would think\n>that would leak more information than clog.\n>\n\nYes, those information leaks seem unavoidable. \n\n>I am not sure how you could secure against that information leak. 
While\n>file system encryption might do that at the storage layer, it doesn't do\n>anything at the mounted file system layer.\n>\n\nThat's because FDE is only meant to protect against a passive attacker,\nessentially stealing the device. It's useless when someone gains access\nto a mounted disk, so these information leaks are irrelevant.\n\n(I'm only talking about encryption at the block device level. I'm not\nsure about details e.g. for the encryption built into ext4, etc.)\n\n>The current approach is to encrypt anything that contains user data,\n>which includes heap, index, and WAL files. I think replication slots\n>and logical replication might also fall into that category, which is why\n>I started this thread.\n>\n\nYes, I think those bits have to be encrypted too.\n\nBTW I'm not sure why you list replication slots and logical replication\nindependently, those are mostly the same thing I think. For physical\nslots we probably don't need to encrypt anything, but for logical slots\nwe may spill decoded data to files (so those will contain user data).\n\n>I can see some saying that all cluster files should be encrypted, and I\n>can respect that argument. However, as outlined in the diagram linked\n>to from the blog entry:\n>\n>\thttps://momjian.us/main/blogs/pgblog/2019.html#September_27_2019\n>\n>I feel that TDE, since it has limited value, and can't really avoid all\n>information leakage, should strive to find the intersection of ease of\n>implementation, security, and compliance. If people don't think that\n>limited file encryption is secure, I get it. However, encrypting most\n>or all files I think would lead us into such a \"difficult to implement\"\n>scope that I would no longer be able to work on this feature. I think\n>the code complexity, fragility, potential unreliability, and even\n>overhead of trying to encrypt most/all files would lead TDE to be\n>greatly delayed or never implemented.
I just couldn't recommend it.\n>Now, I might be totally wrong, and encryption of everything might be\n>just fine, but I have to pick my projects, and such an undertaking seems\n>far too risky for me.\n>\n\nI agree some trade-offs will be needed, to make the implementation at\nall possible (irrespective of the exact design). But I think those\ntrade-offs need to be conscious, based on some technical arguments why\nit's OK to consider a particular information leak acceptable, etc. For\nexample it may be fine when assuming the attacker only gets a single\nstatic copy of the data directory, but not when having the ability to\nobserve changes made by a running instance.\n\nIn a way, my concern is somewhat the opposite of yours - that we'll end\nup with a feature (which necessarily adds complexity) that however does\nnot provide sufficient security for various use cases.\n\nAnd I don't know where exactly the middle ground is, TBH.\n\n>Just for some detail, we have solved the block-level encryption problem\n>by using CTR mode in most cases, but there is still a requirement for a\n>nonce for every encryption operation. You can use derived keys too, but\n>you need to set up those keys for every write to encrypt files. Maybe\n>it is possible to set up a write API that handles this transparently in\n>the code, but I don't know how to do that cleanly, and I doubt if the\n>value of encrypting everything is worth it.\n>\n>As far as encrypting the log file, I can see us adding documentation to\n>warn about that, and even issue a server log message if encryption is\n>enabled and syslog is not being used. (I don't know how to test if\n>syslog is being shipped to a remote server.)\n>\n\nNot sure. I wonder if it's possible to set up syslog so that it encrypts\nthe data on storage, and if that would be a suitable solution e.g. for\nPCI DSS purposes.
(It seems at least rsyslogd supports that.)\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 4 Oct 2019 23:31:00 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 04, 2019 at 04:58:14PM -0400, Bruce Momjian wrote:\n>On Fri, Oct 4, 2019 at 10:46:57PM +0200, Tomas Vondra wrote:\n>> Oracle also has a handy \"TDE best practices\" document [2], which says\n>> when to use column-level encryption - let me quote a couple of points:\n>>\n>> * Location of sensitive information is known\n>>\n>> * Less than 5% of all application columns are encryption candidates\n>>\n>> * Encryption candidates are not foreign-key columns\n>>\n>> * Indexes over encryption candidates are normal B-tree indexes (this\n>> also means no support for indexes on expressions, and likely partial\n>> indexes)\n>>\n>> * No support from hardware crypto acceleration.\n>\n>Aren't all modern systems going to have hardware crypto acceleration,\n>i.e., AES-NI CPU extensions. Does that mean there is no value of\n>partial encryption on such systems? Looking at the overhead numbers I\n>have seen for AES-NI-enabled systems, I believe it.\n>\n\n\nThat's a good question, I don't know the answer. You're right most\nsystems have CPUs with AES-NI these days, and I'm not sure why the\ncolumn encryption does not leverage that.\n\nMaybe it's because column encryption has to encrypt/decrypt much smaller\nchunks of data, and AES-NI is not efficient for that? 
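One way to see why very small chunks might benefit less from acceleration is fixed per-call overhead, which dominates for 16-byte column values but vanishes for 8kB blocks. A rough stdlib-only illustration, with SHA-256 standing in for the cipher primitive (this is speculation about the cause of Oracle's limitation, not an explanation of it):

```python
import hashlib, time

def per_byte_ns(chunk: int, total: int = 1 << 22) -> float:
    """Approximate cost per byte when processing `total` bytes in
    `chunk`-sized pieces (one primitive call per piece)."""
    buf = b"x" * chunk
    calls = total // chunk
    t0 = time.perf_counter()
    for _ in range(calls):
        hashlib.sha256(buf).digest()
    return (time.perf_counter() - t0) * 1e9 / total

small = per_byte_ns(16)     # typical encrypted column value
large = per_byte_ns(8192)   # one relation block
print(f"16 B chunks: {small:.1f} ns/byte, 8 kB chunks: {large:.1f} ns/byte")
```

On most machines the per-byte cost for 16-byte chunks is an order of magnitude or more above the 8kB case, purely from call overhead.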
I don't know.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 4 Oct 2019 23:48:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 4, 2019 at 11:31:00PM +0200, Tomas Vondra wrote:\n> On Fri, Oct 04, 2019 at 03:57:32PM -0400, Bruce Momjian wrote:\n> > The current approach is to encrypt anything that contains user data,\n> > which includes heap, index, and WAL files. I think replication slots\n> > and logical replication might also fall into that category, which is why\n> > I started this thread.\n> \n> Yes, I think those bits have to be encrypted too.\n> \n> BTW I'm not sure why you list replication slots and logical replication\n> independently, those are mostly the same thing I think. For physical\n> slots we probably don't need to encrypt anything, but for logical slots\n> we may spill decoded data to files (so those will contain user data).\n\nIn this thread, I am really looking for experts who can explain exactly\nwhere sensitive data is stored in PGDATA. Oh, pgsql_tmp must be\nencrypted too. I would say we know which things must be encrypted, but\nwe now need to go through the rest of PGDATA to determine which parts\nare safe to leave unencrypted, and which must be encrypted.\n\n> > I can see some saying that all cluster files should be encrypted, and I\n> > can respect that argument. However, as outlined in the diagram linked\n> > to from the blog entry:\n> > \n> > \thttps://momjian.us/main/blogs/pgblog/2019.html#September_27_2019\n> > \n> > I feel that TDE, since it has limited value, and can't really avoid all\n> > information leakage, should strive to find the intersection of ease of\n> > implementation, security, and compliance. 
If people don't think that\n> > limited file encryption is secure, I get it. However, encrypting most\n> > or all files I think would lead us into such a \"difficult to implement\"\n> > scope that I would no longer be able to work on this feature. I think\n> > the code complexity, fragility, potential unreliability, and even\n> > overhead of trying to encrypt most/all files would lead TDE to be\n> > greatly delayed or never implemented. I just couldn't recommend it.\n> > Now, I might be totally wrong, and encryption of everything might be\n> > just fine, but I have to pick my projects, and such an undertaking seems\n> > far too risky for me.\n> > \n> \n> I agree some trade-offs will be needed, to make the implementation at\n> all possible (irrespective of the exact design). But I think those\n> trade-offs need to be conscious, based on some technical arguments why\n> it's OK to consider a particular information leak acceptable, etc. For\n> example it may be fine when assuming the attacker only gets a single\n> static copy of the data directory, but not when having the ability to\n> observe changes made by a running instance.\n\nYes, we need to be explicit in what we don't encrypt --- that it is\nreasonably safe.\n\n> In a way, my concern is somewhat the opposite of yours - that we'll end\n> up with a feature (which necessarily adds complexity) that however does\n> not provide sufficient security for various use cases.\n\nYep, if we can't do it safely, there is no point in doing it.\n\n> And I don't know where exactly the middle ground is, TBH.\n\nWe spend a lot of time figuring out exactly how to safely encrypt WAL,\nheap, index, and pgsql_tmp files. The idea of doing this for another\n20 types of files --- to find a safe nonce, to be sure a file rewrite\ndoesn't reuse the nonce, figuring the API, crash recovery, forensics,\ntool interface --- is something I would like to avoid.
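The nonce-on-rewrite worry above is concrete: in CTR-style encryption, reusing a (key, nonce) pair for a rewritten file makes the XOR of the two ciphertexts equal the XOR of the two plaintexts. A toy demonstration, with a SHA-256-based keystream standing in for AES-CTR (the failure mode is identical):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy CTR keystream: block i = SHA-256(key || nonce || i); a stdlib
    # stand-in for AES-CTR -- the nonce-reuse failure mode is the same.
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()
        i += 1
    return out[:n]

def encrypt(key: bytes, nonce: bytes, pt: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(pt, keystream(key, nonce, len(pt))))

key, nonce = b"k" * 32, b"block-0-of-file-A"   # same nonce used twice
before = b"page image before the rewrite"
after_ = b"page image after the rewrite."
c1, c2 = encrypt(key, nonce, before), encrypt(key, nonce, after_)

# The keystream cancels out: c1 XOR c2 == before XOR after_, so known
# plaintext in one version reveals the other, with no key recovery.
xored = bytes(a ^ b for a, b in zip(c1, c2))
print(bytes(x ^ p for x, p in zip(xored, before)) == after_)  # True
```

This is why every file type that gets CTR encryption needs its own never-repeating nonce scheme, which is exactly the per-file-type design work being weighed here.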
I want to avoid\nit not because I don't like work, but because I am afraid the code\nimpact and fragility will doom the feature.\n\n> > Just for some detail, we have solved the block-level encryption problem\n> > by using CTR mode in most cases, but there is still a requirement for a\n> > nonce for every encryption operation. You can use derived keys too, but\n> > you need to set up those keys for every write to encrypt files. Maybe\n> > it is possible to set up a write API that handles this transparently in\n> > the code, but I don't know how to do that cleanly, and I doubt if the\n> > value of encrypting everything is worth it.\n> > \n> > As far as encrypting the log file, I can see us adding documentation to\n> > warn about that, and even issue a server log message if encryption is\n> > enabled and syslog is not being used. (I don't know how to test if\n> > syslog is being shipped to a remote server.)\n> > \n> \n> Not sure. I wonder if it's possible to setup syslog so that it encrypts\n> the data on storage, and if that would be a suitable solution e.g. for\n> PCI DSS purposes. (It seems at least rsyslogd supports that.)\n\nWell, users don't want the data visible in a mounted file system, which\nis why we were thinking a remote secure server would help.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 4 Oct 2019 17:49:07 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 4, 2019 at 11:48:19PM +0200, Tomas Vondra wrote:\n> On Fri, Oct 04, 2019 at 04:58:14PM -0400, Bruce Momjian wrote:\n> > On Fri, Oct 4, 2019 at 10:46:57PM +0200, Tomas Vondra wrote:\n> > > Oracle also has a handy \"TDE best practices\" document [2], which says\n> > > when to use column-level encryption - let me quote a couple of points:\n> > > \n> > > * Location of sensitive information is known\n> > > \n> > > * Less than 5% of all application columns are encryption candidates\n> > > \n> > > * Encryption candidates are not foreign-key columns\n> > > \n> > > * Indexes over encryption candidates are normal B-tree indexes (this\n> > > also means no support for indexes on expressions, and likely partial\n> > > indexes)\n> > > \n> > > * No support from hardware crypto acceleration.\n> > \n> > Aren't all modern systems going to have hardware crypto acceleration,\n> > i.e., AES-NI CPU extensions. Does that mean there is no value of\n> > partial encryption on such systems? Looking at the overhead numbers I\n> > have seen for AES-NI-enabled systems, I believe it.\n> > \n> \n> \n> That's a good question, I don't know the answer. You're right most\n> systems have CPUs with AES-NI these days, and I'm not sure why the\n> column encryption does not leverage that.\n> \n> Maybe it's because column encryption has to encrypt/decrypt much smaller\n> chunks of data, and AES-NI is not efficient for that? I don't know.\n\nFor full-cluster TDE with AES-NI-enabled, the performance impact is\nusually ~4%, so doing anything more granular doesn't seem useful. 
See\nthis PGCon presentation with charts:\n\n\thttps://www.youtube.com/watch?v=TXKoo2SNMzk#t=27m50s\n\nHaving anything more fine-grained than all-cluster didn't seem worth it. \nUsing per-user keys is useful, but also much harder to implement.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 4 Oct 2019 18:06:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 04, 2019 at 06:06:10PM -0400, Bruce Momjian wrote:\n>On Fri, Oct 4, 2019 at 11:48:19PM +0200, Tomas Vondra wrote:\n>> On Fri, Oct 04, 2019 at 04:58:14PM -0400, Bruce Momjian wrote:\n>> > On Fri, Oct 4, 2019 at 10:46:57PM +0200, Tomas Vondra wrote:\n>> > > Oracle also has a handy \"TDE best practices\" document [2], which says\n>> > > when to use column-level encryption - let me quote a couple of points:\n>> > >\n>> > > * Location of sensitive information is known\n>> > >\n>> > > * Less than 5% of all application columns are encryption candidates\n>> > >\n>> > > * Encryption candidates are not foreign-key columns\n>> > >\n>> > > * Indexes over encryption candidates are normal B-tree indexes (this\n>> > > also means no support for indexes on expressions, and likely partial\n>> > > indexes)\n>> > >\n>> > > * No support from hardware crypto acceleration.\n>> >\n>> > Aren't all modern systems going to have hardware crypto acceleration,\n>> > i.e., AES-NI CPU extensions. Does that mean there is no value of\n>> > partial encryption on such systems? Looking at the overhead numbers I\n>> > have seen for AES-NI-enabled systems, I believe it.\n>> >\n>>\n>>\n>> That's a good question, I don't know the answer.
You're right most\n>> systems have CPUs with AES-NI these days, and I'm not sure why the\n>> column encryption does not leverage that.\n>>\n>> Maybe it's because column encryption has to encrypt/decrypt much smaller\n>> chunks of data, and AES-NI is not efficient for that? I don't know.\n>\n>For full-cluster TDE with AES-NI-enabled, the performance impact is\n>usually ~4%, so doing anything more granular doesn't seem useful. See\n>this PGCon presentation with charts:\n>\n>\thttps://www.youtube.com/watch?v=TXKoo2SNMzk#t=27m50s\n>\n>Having anthing more fine-grained that all-cluster didn't seem worth it.\n>Using per-user keys is useful, but also much harder to implement.\n>\n\nNot sure I follow. I thought you are asking why Oracle apparently does\nnot leverage AES-NI for column-level encryption (at least according to\nthe document I linked)? And I don't know why that's the case.\n\nFWIW performance is just one (supposed) benefit of column encryption,\neven if all-cluster encryption is just as fast, there might be other\nreasons to support it.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 5 Oct 2019 00:54:35 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Sat, Oct 5, 2019 at 12:54:35AM +0200, Tomas Vondra wrote:\n> On Fri, Oct 04, 2019 at 06:06:10PM -0400, Bruce Momjian wrote:\n> > For full-cluster TDE with AES-NI-enabled, the performance impact is\n> > usually ~4%, so doing anything more granular doesn't seem useful. See\n> > this PGCon presentation with charts:\n> > \n> > \thttps://www.youtube.com/watch?v=TXKoo2SNMzk#t=27m50s\n> > \n> > Having anthing more fine-grained that all-cluster didn't seem worth it.\n> > Using per-user keys is useful, but also much harder to implement.\n> > \n> \n> Not sure I follow. 
I thought you are asking why Oracle apparently does\n> not leverage AES-NI for column-level encryption (at least according to\n> the document I linked)? And I don't know why that's the case.\n\nNo, I read it as Oracle saying that there isn't much value to per-column\nencryption if you have crypto hardware acceleration, because the\nall-cluster encryption overhead is so minor.\n\n> FWIW performance is just one (supposed) benefit of column encryption,\n> even if all-cluster encryption is just as fast, there might be other\n> reasons to support it.\n\nWell, there is per-user/db encryption, but I think that needs to be done\nat the SQL level.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 4 Oct 2019 20:14:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 04, 2019 at 08:14:44PM -0400, Bruce Momjian wrote:\n>On Sat, Oct 5, 2019 at 12:54:35AM +0200, Tomas Vondra wrote:\n>> On Fri, Oct 04, 2019 at 06:06:10PM -0400, Bruce Momjian wrote:\n>> > For full-cluster TDE with AES-NI-enabled, the performance impact is\n>> > usually ~4%, so doing anything more granular doesn't seem useful. See\n>> > this PGCon presentation with charts:\n>> >\n>> > \thttps://www.youtube.com/watch?v=TXKoo2SNMzk#t=27m50s\n>> >\n>> > Having anthing more fine-grained that all-cluster didn't seem worth it.\n>> > Using per-user keys is useful, but also much harder to implement.\n>> >\n>>\n>> Not sure I follow. I thought you are asking why Oracle apparently does\n>> not leverage AES-NI for column-level encryption (at least according to\n>> the document I linked)? 
And I don't know why that's the case.\n>\n>No, I read it as Oracle saying that there isn't much value to per-column\n>encryption if you have crypto hardware acceleration, because the\n>all-cluster encryption overhead is so minor.\n>\n\nSo essentially the argument is - if you have hw crypto acceleration (aka\nAES-NI), then the overhead of all-cluster encryption is so low it does\nnot make sense to bother with lowering it with column encryption.\n\nIMO that's a good argument against column encryption (at least when used\nto reduce overhead), although 10% still quite a bit.\n\nBut I'm not sure it's what the document is saying. I'm sure if they\ncould, they'd use AES-NI even for column encryption, to make it more\nefficient. Because why wouldn't you do that? But the doc explicitly\nsays:\n\n Hardware cryptographic acceleration for TDE column encryption is\n not supported.\n\nSo there has to be a reason why that's not supported. Either there's\nsomething that prevents this mode from using AES-NI at all, or it simply\ncan't be sped-up.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 5 Oct 2019 21:13:59 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Fri, Oct 4, 2019 at 5:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n> We spend a lot of time figuring out exactly how to safely encrypt WAL,\n> heap, index, and pgsql_tmp files. The idea of doing this for another\n> 20 types of files --- to find a safe nonce, to be sure a file rewrite\n> doesn't reuse the nonce, figuring the API, crash recovery, forensics,\n> tool interface --- is something I would like to avoid. 
I want to avoid\n> it not because I don't like work, but because I am afraid the code\n> impact and fragility will doom the feature.\n\nI'm concerned about that, too, but there's no getting around the fact\nthat there are a bunch of types of files and that they do all need to\nbe dealt with. If we have a good scheme for doing that, hopefully\nextending it to additional types of files is not that bad, which would\nthen spare us the trouble of arguing about each one individually, and\nalso be more secure.\n\nAs I also said to Stephen, the people who are discussing this here\nshould *really really really* be looking at the Cybertec patch instead\nof trying to invent everything from scratch - unless that patch has,\nlike, typhoid, or something, in which case please let me know so that\nI, too, can avoid looking at it. Even if you wanted to use 0% of the\ncode, you could look at the list of file types that they consider\nencrypting and think about whether you agree with the decisions they\nmade. I suspect that you would quickly find that you've left some\nthings out of your list. In fact, I can think of a couple pretty clear\nexamples, like the stats files, which clearly contain user data.\n\nAnother reason that you should go look at that patch is because it\nactually tries to grapple with the exact problem that you're worrying\nabout in the abstract: there are a LOT of different kinds of files and\nthey all need to be handled somehow. Even if you can convince yourself\nthat things like pg_clog don't need encryption, which I think is a\npretty tough sell, there are LOT of file types that directly contain\nuser data and do need to be handled. A lot of the code that writes\nthose various types of files is pretty ad-hoc. It doesn't necessarily\ndo nice things like build up a block of data and then write it out\ntogether; it may for example write a byte a time. That's not going to\nwork well for encryption, I think, so the Cybertec patch changes that\nstuff around. 
I personally don't think that the patch does that in a\nway that is sufficiently clean and carefully considered for it to be\nintegrated into core, and my plan had been to work on that with the\npatch authors.\n\nHowever, that plan has been somewhat derailed by the fact that we now\nhave hundreds of emails arguing about the design, because I don't want\nto be trying to push water up a hill if everyone else is going in a\ndifferent direction. It looks to me, though, like we haven't really\ngotten beyond the point where that patch already was. The issues of\nnonce and many file types have already been thought about carefully\nthere. I rather suspect that they did not get it all right. But, it\nseems to me that it would be a lot more useful to look at the code\nactually written and think about what it gets right and wrong than to\ndiscuss these points as a strictly theoretical matter.\n\nIn other words: maybe I'm wrong here, but it looks to me like we're\nlaboriously reinventing the wheel when we could be working on\nimproving the working prototype.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Oct 2019 09:44:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Sat, Oct 5, 2019 at 09:13:59PM +0200, Tomas Vondra wrote:\n> On Fri, Oct 04, 2019 at 08:14:44PM -0400, Bruce Momjian wrote:\n> > On Sat, Oct 5, 2019 at 12:54:35AM +0200, Tomas Vondra wrote:\n> > > On Fri, Oct 04, 2019 at 06:06:10PM -0400, Bruce Momjian wrote:\n> > > > For full-cluster TDE with AES-NI-enabled, the performance impact is\n> > > > usually ~4%, so doing anything more granular doesn't seem useful. 
See\n> > > > this PGCon presentation with charts:\n> > > >\n> > > > \thttps://www.youtube.com/watch?v=TXKoo2SNMzk#t=27m50s\n> > > >\n> > > > Having anthing more fine-grained that all-cluster didn't seem worth it.\n> > > > Using per-user keys is useful, but also much harder to implement.\n> > > >\n> > > \n> > > Not sure I follow. I thought you are asking why Oracle apparently does\n> > > not leverage AES-NI for column-level encryption (at least according to\n> > > the document I linked)? And I don't know why that's the case.\n> > \n> > No, I read it as Oracle saying that there isn't much value to per-column\n> > encryption if you have crypto hardware acceleration, because the\n> > all-cluster encryption overhead is so minor.\n> > \n> \n> So essentially the argument is - if you have hw crypto acceleration (aka\n> AES-NI), then the overhead of all-cluster encryption is so low it does\n> not make sense to bother with lowering it with column encryption.\n\nYes, I think that is true. Column-level encryption can be useful in\ngiving different people control of the keys, but I think that feature\nshould be developed at the SQL level so clients can unlock the key and\nbackups include the encryption keys.\n\n> IMO that's a good argument against column encryption (at least when used\n> to reduce overhead), although 10% still quite a bit.\n\nI think that test was a worst-case one and I think it needs to be\noptimized before we draw any conclusions.\n\n> But I'm not sure it's what the document is saying. I'm sure if they\n> could, they'd use AES-NI even for column encryption, to make it more\n> efficient. Because why wouldn't you do that? But the doc explicitly\n> says:\n> \n> Hardware cryptographic acceleration for TDE column encryption is\n> not supported.\n\nOh, wow, that is something!\n\n> So there has to be a reason why that's not supported. 
Either there's\n> something that prevents this mode from using AES-NI at all, or it simply\n> can't be sped-up.\n\nYeah, good question.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 7 Oct 2019 10:22:22 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 7, 2019 at 09:44:30AM -0400, Robert Haas wrote:\n> On Fri, Oct 4, 2019 at 5:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > We spend a lot of time figuring out exactly how to safely encrypt WAL,\n> > heap, index, and pgsql_tmp files. The idea of doing this for another\n> > 20 types of files --- to find a safe nonce, to be sure a file rewrite\n> > doesn't reuse the nonce, figuring the API, crash recovery, forensics,\n> > tool interface --- is something I would like to avoid. I want to avoid\n> > it not because I don't like work, but because I am afraid the code\n> > impact and fragility will doom the feature.\n> \n> I'm concerned about that, too, but there's no getting around the fact\n> that there are a bunch of types of files and that they do all need to\n> be dealt with. If we have a good scheme for doing that, hopefully\n> extending it to additional types of files is not that bad, which would\n> then spare us the trouble of arguing about each one individually, and\n> also be more secure.\n\nWell, do to encryption properly, there is the requirement of the nonce. \nIf you ever rewrite a bit, you technically have to have a new nonce. \nFor WAL, since it is append-only, you can use the WAL file name. 
For\nheap/index files, we change the LSN on every rewrite (with\nwal_log_hints=on), and we never use the same LSN for writing multiple\nrelations, so LSN+page-offset is a sufficient nonce.\n\nFor clog, it is not append-only, and bytes are rewritten (from zero to\nnon-zero), so there would have to be a new nonce for every clog file\nwrite to the file system. We can store the nonce in a separate file,\nbut the clog contents and nonce would have to be always synchronized or\nthe file could not be properly read. Basically every file we want to\nencrypt, needs this kind of study.\n\n> As I also said to Stephen, the people who are discussing this here\n> should *really really really* be looking at the Cybertec patch instead\n> of trying to invent everything from scratch - unless that patch has,\n\nSomeone from Cybertec is on the voice calls we have, and is actively\ninvolved.\n\n> like, typhoid, or something, in which case please let me know so that\n> I, too, can avoid looking at it. Even if you wanted to use 0% of the\n> code, you could look at the list of file types that they consider\n> encrypting and think about whether you agree with the decisions they\n> made. I suspect that you would quickly find that you've left some\n> things out of your list. In fact, I can think of a couple pretty clear\n> examples, like the stats files, which clearly contain user data.\n\nI am asking here because I don't think the Cybertec approach has gotten\nenough study compared to what this group can contribute.\n\n> Another reason that you should go look at that patch is because it\n> actually tries to grapple with the exact problem that you're worrying\n> about in the abstract: there are a LOT of different kinds of files and\n> they all need to be handled somehow. Even if you can convince yourself\n> that things like pg_clog don't need encryption, which I think is a\n> pretty tough sell, there are LOT of file types that directly contain\n> user data and do need to be handled. 
A lot of the code that writes\n> those various types of files is pretty ad-hoc. It doesn't necessarily\n> do nice things like build up a block of data and then write it out\n> together; it may for example write a byte a time. That's not going to\n> work well for encryption, I think, so the Cybertec patch changes that\n\nActually, byte-at-a-time works fine with CTR mode, though that mode is\nvery sensitive to the reuse of the nonce since the user data is not part\nof the input for future encryption blocks.\n\n> stuff around. I personally don't think that the patch does that in a\n> way that is sufficiently clean and carefully considered for it to be\n> integrated into core, and my plan had been to work on that with the\n> patch authors.\n> \n> However, that plan has been somewhat derailed by the fact that we now\n> have hundreds of emails arguing about the design, because I don't want\n> to be trying to push water up a hill if everyone else is going in a\n> different direction. It looks to me, though, like we haven't really\n> gotten beyond the point where that patch already was. The issues of\n> nonce and many file types have already been thought about carefully\n> there. I rather suspect that they did not get it all right. But, it\n> seems to me that it would be a lot more useful to look at the code\n> actually written and think about what it gets right and wrong than to\n> discuss these points as a strictly theoretical matter.\n> \n> In other words: maybe I'm wrong here, but it looks to me like we're\n> laboriously reinventing the wheel when we could be working on\n> improving the working prototype.\n\nThe work being done is building on that prototype.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 7 Oct 2019 11:02:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 7, 2019 at 11:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n> For clog, it is not append-only, and bytes are rewritten (from zero to\n> non-zero), so there would have to be a new nonce for every clog file\n> write to the file system. We can store the nonce in a separate file,\n> but the clog contents and nonce would have to be always synchronized or\n> the file could not be properly read. Basically every file we want to\n> encrypt, needs this kind of study.\n\nYeah. It's a big problem/project.\n\nAnother approach to this problem would be to adjust the block format\nto leave room for the nonce. If encryption is not in use, then those\nbytes would just be zeroed or something. That would make upgrading a\nbit tricky, but pg_upgrade could be taught to do the necessary\nconversions for SLRUs without too much pain, I think.\n\nIn my opinion, it is desirable to maintain as much consistency as\npossible between what we store on disk in the encrypted case and what\nwe store on disk in the not-encrypted case. 
If we have to add\nadditional forks in the encrypted case, or change the format of the\nfile and not just the contents, it seems likely to add complexity\nand bugs that we might be able to avoid via another approach.\n\n> > In other words: maybe I'm wrong here, but it looks to me like we're\n> > laboriously reinventing the wheel when we could be working on\n> > improving the working prototype.\n>\n> The work being done is building on that prototype.\n\nThat's good, but then I'm puzzled as to why your list of things to\nencrypt doesn't include all the things it already covers.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Oct 2019 11:26:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 7, 2019 at 11:26:24AM -0400, Robert Haas wrote:\n> On Mon, Oct 7, 2019 at 11:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > For clog, it is not append-only, and bytes are rewritten (from zero to\n> > non-zero), so there would have to be a new nonce for every clog file\n> > write to the file system. We can store the nonce in a separate file,\n> > but the clog contents and nonce would have to be always synchronized or\n> > the file could not be properly read. Basically every file we want to\n> > encrypt, needs this kind of study.\n> \n> Yeah. It's a big problem/project.\n> \n> Another approach to this problem would be to adjust the block format\n> to leave room for the nonce. If encryption is not in use, then those\n> bytes would just be zeroed or something. That would make upgrading a\n> bit tricky, but pg_upgrade could be taught to do the necessary\n> conversions for SLRUs without too much pain, I think.\n\nYes, that is exactly the complexity we have to deal with, both in terms of\ncode complexity, reliability, and future maintenance. 
Currently the\nfile format is unchanged, but as we add more encrypted files, we might\nneed to change it. Fortunately, I think heap/index files don't need to\nchange, so pg_upgrade will not require changes.\n\n> In my opinion, it is desirable to maintain as much consistency as\n> possible between what we store on disk in the encrypted case and what\n> we store on disk in the not-encrypted case. If we have to add\n> additional forks in the encrypted case, or change the file of the\n> format and not just the contents, it seems likely to add complexity\n> and bugs that we might be able to avoid via another approach.\n\nAgreed.\n\n> > > In other words: maybe I'm wrong here, but it looks to me like we're\n> > > laboriously reinventing the wheel when we could be working on\n> > > improving the working prototype.\n> >\n> > The work being done is building on that prototype.\n> \n> That's good, but then I'm puzzled as to why your list of things to\n> encrypt doesn't include all the things it already covers.\n\nWell, I am starting with the things I _know_ need encrypting, and am\nthen waiting for others to tell me what to add. Cybertec has not\nprovided a list and reasons yet, that I have seen. This is why I\nstarted this public thread, so we could get a list and agree on it.\n\nFYI, I realize this is all very complex, and requires cryptography and\nserver internals knowledge. I am happy to discuss it via voice with\nanyone.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 7 Oct 2019 11:48:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 7, 2019 at 11:48 AM Bruce Momjian <bruce@momjian.us> wrote:\n> Well, I am starting with the things I _know_ need encrypting, and am\n> then waiting for others to tell me what to add. Cybertec has not\n> provided a list and reasons yet, that I have seen. This is why I\n> started this public thread, so we could get a list and agree on it.\n\nWell that's fine, but you could also open up the patch and have a look\nat it. Even if you just looked at which files it modifies, it would\nenable you to add some important things to your list.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Oct 2019 12:30:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 7, 2019 at 12:30:37PM -0400, Robert Haas wrote:\n> On Mon, Oct 7, 2019 at 11:48 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > Well, I am starting with the things I _know_ need encrypting, and am\n> > then waiting for others to tell me what to add. Cybertec has not\n> > provided a list and reasons yet, that I have seen. This is why I\n> > started this public thread, so we could get a list and agree on it.\n> \n> Well that's fine, but you could also open up the patch and have a look\n> at it. Even if you just looked at which files it modifies, it would\n> enable you to add some important things to your list.\n\nUh, I am really then just importing what one group decided, which seems\nunsafe. 
I think it needs a fresh look at all files.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 7 Oct 2019 12:34:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 7, 2019 at 12:34 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Mon, Oct 7, 2019 at 12:30:37PM -0400, Robert Haas wrote:\n> > On Mon, Oct 7, 2019 at 11:48 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > Well, I am starting with the things I _know_ need encrypting, and am\n> > > then waiting for others to tell me what to add. Cybertec has not\n> > > provided a list and reasons yet, that I have seen. This is why I\n> > > started this public thread, so we could get a list and agree on it.\n> >\n> > Well that's fine, but you could also open up the patch and have a look\n> > at it. Even if you just looked at which files it modifies, it would\n> > enable you to add some important things do your list.\n>\n> Uh, I am really then just importing what one group decided, which seems\n> unsafe. 
I think it needs a fresh look at all files.\n\nA fresh look at all files is a good idea, but that doesn't making\nlooking at the work other people have already done a bad idea.\n\nI don't understand the theory that it's useful to have multiple\n100+-message email threads about what we ought to do, but that looking\nat the already-written code is not useful.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Oct 2019 12:48:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 7, 2019 at 5:48 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Oct 7, 2019 at 11:26:24AM -0400, Robert Haas wrote:\n> > On Mon, Oct 7, 2019 at 11:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > For clog, it is not append-only, and bytes are rewritten (from zero to\n> > > non-zero), so there would have to be a new nonce for every clog file\n> > > write to the file system. We can store the nonce in a separate file,\n> > > but the clog contents and nonce would have to be always synchronized or\n> > > the file could not be properly read. Basically every file we want to\n> > > encrypt, needs this kind of study.\n> >\n> > Yeah. It's a big problem/project.\n> >\n> > Another approach to this problem would be to adjust the block format\n> > to leave room for the nonce. If encryption is not in use, then those\n> > bytes would just be zeroed or something. That would make upgrading a\n> > bit tricky, but pg_upgrade could be taught to do the necessary\n> > conversions for SLRUs without too much pain, I think.\n>\n> Yes, that is exactly the complexity we have deal with, both in terms of\n> code complexity, reliability, and future maintenance. Currently the\n> file format is unchanged, but as we add more encrypted files, we might\n> need to change it. 
Fortunately, I think heap/index files don't need to\n> change, so pg_upgrade will not require changes.\n>\n\nIt does sound very similar to the problem of being able to add checksums to\nthe clog files (and other SLRUs). So if that can get done, it would help\nboth of those cases (if done right).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 7 Oct 2019 20:33:02 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Oct 4, 2019 at 5:49 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > We spend a lot of time figuring out exactly how to safely encrypt WAL,\n> > heap, index, and pgsql_tmp files. The idea of doing this for another\n> > 20 types of files --- to find a safe nonce, to be sure a file rewrite\n> > doesn't reuse the nonce, figuring the API, crash recovery, forensics,\n> > tool interface --- is something I would like to avoid. I want to avoid\n> > it not because I don't like work, but because I am afraid the code\n> > impact and fragility will doom the feature.\n> \n> I'm concerned about that, too, but there's no getting around the fact\n> that there are a bunch of types of files and that they do all need to\n> be dealt with. If we have a good scheme for doing that, hopefully\n> extending it to additional types of files is not that bad, which would\n> then spare us the trouble of arguing about each one individually, and\n> also be more secure.\n> \n> As I also said to Stephen, the people who are discussing this here\n> should *really really really* be looking at the Cybertec patch instead\n> of trying to invent everything from scratch -\n\nMaybe it's enough to check the README.encryption file that [1] contains. Or\nshould I publish this (in shorter form) on the wiki [2] ?\n\n> In fact, I can think of a couple pretty clear examples, like the stats\n> files, which clearly contain user data.\n\nSpecifically this part was removed because I expected that [3] will be\ncommitted earlier than the encryption. 
This expectation still seems to be\nvalid.\n\nThe thread on encryption was very alive when I was working on the last version\nof our patch, so it was hard to participate in the discussion. I tried to\ncatch up later, and I think I could understand most of the problems. It became\nclear that it's better to collaborate then to incorporate the new ideas into\n[1]. I proposed to Masahiko Sawada that we're ready to collaborate on coding\nand he agreed. However the design doesn't seem to be stable enough at the\nmoment for coding to make sense.\n\nAs for the design, I spent some time thinking about it, especially on the\nper-table/tablespace keys (recovery issues etc.), but haven't invented\nanything new. If there's anything useful I can do about the feature, I'll be\nglad to help.\n\n[1] https://commitfest.postgresql.org/25/2104/\n\n[2] https://wiki.postgresql.org/wiki/Transparent_Data_Encryption\n\n[3] https://commitfest.postgresql.org/25/1708/\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 07 Oct 2019 21:02:36 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 07, 2019 at 10:22:22AM -0400, Bruce Momjian wrote:\n>On Sat, Oct 5, 2019 at 09:13:59PM +0200, Tomas Vondra wrote:\n>> On Fri, Oct 04, 2019 at 08:14:44PM -0400, Bruce Momjian wrote:\n>> > On Sat, Oct 5, 2019 at 12:54:35AM +0200, Tomas Vondra wrote:\n>> > > On Fri, Oct 04, 2019 at 06:06:10PM -0400, Bruce Momjian wrote:\n>> > > > For full-cluster TDE with AES-NI-enabled, the performance impact is\n>> > > > usually ~4%, so doing anything more granular doesn't seem useful. 
See\n>> > > > this PGCon presentation with charts:\n>> > > >\n>> > > > \thttps://www.youtube.com/watch?v=TXKoo2SNMzk#t=27m50s\n>> > > >\n>> > > > Having anything more fine-grained than all-cluster didn't seem worth it.\n>> > > > Using per-user keys is useful, but also much harder to implement.\n>> > > >\n>> > >\n>> > > Not sure I follow. I thought you are asking why Oracle apparently does\n>> > > not leverage AES-NI for column-level encryption (at least according to\n>> > > the document I linked)? And I don't know why that's the case.\n>> >\n>> > No, I read it as Oracle saying that there isn't much value to per-column\n>> > encryption if you have crypto hardware acceleration, because the\n>> > all-cluster encryption overhead is so minor.\n>> >\n>>\n>> So essentially the argument is - if you have hw crypto acceleration (aka\n>> AES-NI), then the overhead of all-cluster encryption is so low it does\n>> not make sense to bother with lowering it with column encryption.\n>\n>Yes, I think that is true. Column-level encryption can be useful in\n>giving different people control of the keys, but I think that feature\n>should be developed at the SQL level so clients can unlock the key and\n>backups include the encryption keys.\n>\n\nFWIW that's not how the column encryption (at least in Oracle) works. It\nuses the same encryption keys (with 2-tier key architecture), and the\nkeys are stored in a wallet. The user only supplies a passphrase (well,\na DBA does that, because it happens only once after the instance starts).\n\nNot sure what exactly you mean by \"SQL level\" but I agree it's clearly\nmuch higher up the stack than encryption at the block level.\n\n>> IMO that's a good argument against column encryption (at least when used\n>> to reduce overhead), although 10% still quite a bit.\n>\n>I think that test was a worst-case one and I think it needs to be\n>optimized before we draw any conclusions.\n>\n\nWhat test? 
I was really referring to the PDF, which talks about 10%\nthreshold for the tablespace encryption. And in another section it says\n\n Internal benchmark tests and customers reported a performance impact of 4\n to 8% in end-user response time, and an increase of 1 to 5% in CPU usage.\n\nOf course, this is not on PostgreSQL, but I'd expect to have comparable\noverhead, despite architectural differences. Ultimately, even if it's 15\nor 20%, the general rule is likely to remain the same, i.e. column\nencryption has significantly higher overhead, and can only beat\ntablespace encryption when very small fraction of columns is encrypted.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 7 Oct 2019 21:40:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 7, 2019 at 3:01 PM Antonin Houska <ah@cybertec.at> wrote:\n> However the design doesn't seem to be stable enough at the\n> moment for coding to make sense.\n\nWell, I think the question is whether working further on your patch\ncould produce some things that everyone would agree are a step\nforward. If every iota of that patch is garbage dredged up from the\ndepths of the Mos Eisley sewers, then let's forget about it, but I\ndon't think that's the case. As I said on the thread about that patch,\nand have also said here, what I learned from looking at that patch is\nthat the system probably needs some significant restructuring before\nthere's any hope of incorporating encryption in a reasonably-sized,\nreasonably clean patch. For example, some files need to be written a\nblock at a time instead of a character at a time. The idea just\ndiscussed -- changing the CLOG page format to leave room for the\nencryption nonce and a checksum -- also fall into that category. 
I\nthink there are probably a number of others.\n\nNo matter what anybody thinks about whether we should have one key,\nmultiple keys, passwords inside the database, passwords outside the\ndatabase, whatever ... that kind of restructuring work has got to be\ndone first. And it seems like by having all this discussion about the\ndesign, we're basically getting to a situation where we're making no\nprogress on that stuff. So that's bad. There's nothing *wrong* with\ntalking about how many keys we had and how key management ought to\nwork and where passwords should be stored, and we need to make sure\nthat whatever we do initially doesn't close the door to doing more and\nbetter things later. But, if those discussions have the effect of\nblocking work on the basic infrastructure tasks that need to be done,\nthat's actually counterproductive at this stage.\n\nWe should all put our heads together and agree that however we think\nkey management ought to be handled, it'll be a lot easier to get our\npreferred form of key management into PostgreSQL if, while that\ndiscussion rages on, we knocked down some of the infrastructure\nproblems that *absolutely any patch* for this kind of feature is\ncertain to face.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Oct 2019 15:50:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 7, 2019 at 09:40:22PM +0200, Tomas Vondra wrote:\n> On Mon, Oct 07, 2019 at 10:22:22AM -0400, Bruce Momjian wrote:\n> > > So essentially the argument is - if you have hw crypto acceleration (aka\n> > > AES-NI), then the overhead of all-cluster encryption is so low it does\n> > > not make sense to bother with lowering it with column encryption.\n> > \n> > Yes, I think that is true. 
Column-level encryption can be useful in\n> > giving different people control of the keys, but I think that feature\n> > should be developed at the SQL level so clients can unlock the key and\n> > backups include the encryption keys.\n> > \n> \n> FWIW that's not how the column encryption (at least in Oracle works). It\n> uses the same encryption keys (with 2-tier key architecture), and the\n> keys are stored in a wallet. The user only supplies a passphrase (well,\n> a DBA does that, because it happens only once after the instance starts).\n> \n> Not sure what exactly you mean by \"SQL level\" but I agree it's clearly\n> much higher up the stack than encryption at the block level.\n\nRight, what I was saying is that column encryption where the keys are\nunlocked by the administrator is really only useful to reduce\nencryption overhead, and I think we will find it just isn't worth the\nAPI complexity to allow that.\n\nPer-user keys are useful for cases beyond performance, but require\nSQL-level control.\n\n> > > IMO that's a good argument against column encryption (at least when used\n> > > to reduce overhead), although 10% still quite a bit.\n> > \n> > I think that test was a worst-case one and I think it needs to be\n> > optimized before we draw any conclusions.\n> \n> What test? I was really referring to the PDF, which talks about 10%\n> threshold for the tablespace encryption. And in another section it says\n> \n> Internal benchmark tests and customers reported a performance impact of 4\n> to 8% in end-user response time, and an increase of 1 to 5% in CPU usage.\n> \n> Of course, this is not on PostgreSQL, but I'd expect to have comparable\n> overhead, despite architectural differences. Ultimately, even if it's 15\n> or 20%, the general rule is likely to remain the same, i.e. 
column\n> encryption has significantly higher overhead, and can only beat\n> tablespace encryption when very small fraction of columns is encrypted.\n\nRight, and I doubt it will be worth it, but I think we need to complete\nall-cluster encryption and then run some tests to see what the overhead\nis.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 7 Oct 2019 15:58:57 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, 7 Oct 2019 at 18:02, Bruce Momjian <bruce@momjian.us> wrote:\n\n> Well, do to encryption properly, there is the requirement of the nonce.\n> If you ever rewrite a bit, you technically have to have a new nonce.\n> For WAL, since it is append-only, you can use the WAL file name. For\n> heap/index files, we change the LSN on every rewrite (with\n> wal_log_hints=on), and we never use the same LSN for writing multiple\n> relations, so LSN+page-offset is a sufficient nonce.\n>\n> For clog, it is not append-only, and bytes are rewritten (from zero to\n> non-zero), so there would have to be a new nonce for every clog file\n> write to the file system. We can store the nonce in a separate file,\n> but the clog contents and nonce would have to be always synchronized or\n> the file could not be properly read. Basically every file we want to\n> encrypt, needs this kind of study.\n>\n\nYes. That is the reason why our current version doesn't encrypt SLRU's.\nThere is some security in encrypting without a nonce when considering an\nattack vector that only sees one version of the encrypted page. But I think\nto make headway on this we need to figure out if TDE feature is useful\nwithout SLRU encryption (I think yes), and how hard would it be to properly\nencrypt SLRU's? 
Would the solution be acceptable for inclusion?\n\nI can think of 3 options:\n\na) A separate nonce storage. Seems pretty bad complexity wise. New\ndata-structures would need to be created. SLRU writes would need to be WAL\nlogged with a full page image.\nb) Inline nonces, number of items per SLRU page is variable depending on if\nencryption is enabled or not.\nc) Inline nonces we reserve a header structure on all SLRU pages.\npg_upgrade needs to rewrite persistent SLRUs.\n\nNone of the options seem great, but c) has the benefit of also carving out\nthe space for SLRU checksums.\n\n> As I also said to Stephen, the people who are discussing this here\n> > should *really really really* be looking at the Cybertec patch instead\n> > of trying to invent everything from scratch - unless that patch has,\n>\n> Someone from Cybertec is on the voice calls we have, and is actively\n> involved.\n>\n\nAs far as I can tell no-one from us is on the call. I personally missed the\ninvitation when it was sent out. I would gladly share our learnings, a lot\nof what I see here is retreading what we already went through with our\npatch. However, I think that at the very least the conclusions, problems to\nwork on and WIP patch should be shared on list. It's hard for anybody\noutside to have any input if there are no concrete design proposals or code\nto review. Moreover, I think e-mail is a much better media for having a\nreasoned discussion about technical design decisions.\n\n\n> > In other words: maybe I'm wrong here, but it looks to me like we're\n>\n> laboriously reinventing the wheel when we could be working on\n> > improving the working prototype.\n>\n> The work being done is building on that prototype.\n>\n\nWe would like to help on that front.\n\nRegards,\nAnts Aasma\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Tue, 8 Oct 2019 12:38:43 +0300", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Ants Aasma <ants@cybertec.at> wrote:\n\n> On Mon, 7 Oct 2019 at 18:02, Bruce Momjian <bruce@momjian.us> wrote:\n> \n>> Well, do to encryption properly, there is the requirement of the nonce. 
For\n>> heap/index files, we change the LSN on every rewrite (with\n>> wal_log_hints=on), and we never use the same LSN for writing multiple\n>> relations, so LSN+page-offset is a sufficient nonce.\n>> \n>> For clog, it is not append-only, and bytes are rewritten (from zero to\n>> non-zero), so there would have to be a new nonce for every clog file\n>> write to the file system. We can store the nonce in a separate file,\n>> but the clog contents and nonce would have to be always synchronized or\n>> the file could not be properly read. Basically every file we want to\n>> encrypt, needs this kind of study.\n\n> Yes. That is the reason why our current version doesn't encrypt\n> SLRU's.\n\nActually there was one more problem: the AES-CBC cipher (or AES-XTS in the\nearlier patch version) process an encryption block of 16 bytes at a time. Thus\nif only a part of the block gets written (a torn page write), decryption of\nthe block results in garbage. Unlike relations, there's nothing like full-page\nwrite for SLRU pages, so there's no way to recover from this problem.\n\nHowever, if the current plan is to use the CTR mode, this problem should not\nhappen.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 08 Oct 2019 12:34:23 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Oct 7, 2019 at 3:01 PM Antonin Houska <ah@cybertec.at> wrote:\n> > However the design doesn't seem to be stable enough at the\n> > moment for coding to make sense.\n> \n> Well, I think the question is whether working further on your patch\n> could produce some things that everyone would agree are a step\n> forward.\n\nIt would have made a lot of sense several months ago (Masahiko Sawada actually\nused parts of our patch in the previous version of his patch (see [1]), but\nthe 
requirement to use a different IV for each execution of the encryption\nchanges things quite a bit.\n\nBesides the relation pages and SLRU (CLOG), which are already being discussed\nelsewhere in the thread, let's consider other two file types:\n\n* Temporary files (buffile.c): we derive the IV from PID of the process that\n created the file + segment number + block within the segment. This\n information does not change if you need to write the same block again. If\n new IV should be used for each encryption run, we can simply introduce an\n in-memory counter that generates the IV for each block. However it becomes\n trickier if the temporary file is shared by multiple backends. I think it\n might still be easier to expose the IV values to other backends via shared\n memory than to store them on disk ...\n\n* \"Buffered transient file\". This is to be used instead of OpenTransientFile()\n if user needs the option to encrypt the file. (Our patch adds this API to\n buffile.c. Currently we use it in reorderbuffer.c to encrypt the data\n changes produced by logical decoding, but there should be more use cases.)\n\n In this case we cannot keep the IVs in memory because user can close the\n file anytime and open it much later. So we derive the IV by hashing the file\n path. 
However if we should generate the IV again and again, we need to store\n it on disk in another way, probably one IV value per block (PGAlignedBlock).\n\n However since our implementation of both these file types shares some code,\n it might yet be easier if the shared temporary file also stored the IV on\n disk instead of exposing it via shared memory ...\n\nPerhaps this is what I can work on, but I definitely need some feedback.\n\n[1] https://www.postgresql.org/message-id/CAD21AoBjrbxvaMpTApX1cEsO=8N=nc2xVZPB0d9e-VjJ=YaRnw@mail.gmail.com\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 08 Oct 2019 13:52:57 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> Unless we are *absolutely* certain, I bet someone will be able to find a\n> side-channel that somehow leaks some data or data-about-data, if we don't\n> encrypt everything. If nothing else, you can get use patterns out of it,\n> and you can make a lot from that. (E.g. by whether transactions are using\n> multixacts or not you can potentially determine which transaction they are,\n> if you know what type of transactions are being issued by the application.\n> In the simplest case, there might be a single pattern where multixacts end\n> up actually being used, and in that case being able to see the multixact\n> data tells you a lot about the system).\n\nThanks for bringing up the concern but this still doesn't strike me, at\nleast, as being a huge gaping hole that people will have large issues\nwith. In other words, I don't agree that this is a high bandwidth side\nchannel and I don't think that it, alone, brings up a strong need to\nencrypt clog and multixact.\n\n> As for other things -- by default, we store the log files in text format in\n> the data directory. 
That contains *loads* of sensitive data in a lot of\n> cases. Will those also be encrypted?\n\nimv, this is a largely independent thing, as I said elsewhere, and has\nits own set of challenges and considerations to deal with.\n\nThanks,\n\nStephen", "msg_date": "Tue, 8 Oct 2019 13:55:56 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Tue, Oct 8, 2019 at 7:52 AM Antonin Houska <ah@cybertec.at> wrote:\n> * Temporary files (buffile.c): we derive the IV from PID of the process that\n> created the file + segment number + block within the segment. This\n> information does not change if you need to write the same block again. If\n> new IV should be used for each encryption run, we can simply introduce an\n> in-memory counter that generates the IV for each block. However it becomes\n> trickier if the temporary file is shared by multiple backends. I think it\n> might still be easier to expose the IV values to other backends via shared\n> memory than to store them on disk ...\n>\n> * \"Buffered transient file\". This is to be used instead of OpenTransientFile()\n> if user needs the option to encrypt the file. (Our patch adds this API to\n> buffile.c. Currently we use it in reorderbuffer.c to encrypt the data\n> changes produced by logical decoding, but there should be more use cases.)\n>\n> In this case we cannot keep the IVs in memory because user can close the\n> file anytime and open it much later. So we derive the IV by hashing the file\n> path. 
However if we should generate the IV again and again, we need to store\n> it on disk in another way, probably one IV value per block (PGAlignedBlock).\n>\n> However since our implementation of both these file types shares some code,\n> it might yet be easier if the shared temporary file also stored the IV on\n> disk instead of exposing it via shared memory ...\n>\n> Perhaps this is what I can work on, but I definitely need some feedback.\n\nI think this would be a valuable thing upon which to work. I'm not\nsure exactly what the right solution is, but it seems to me that it\nwould be a good thing if we tried to reuse the same solution in as\nmany places as possible. I don't know if it's realistic to use the\nsame method for storing IVs for temporary/transient files as we do for\nSLRUs, but it would be nice if it were.\n\nI think that one problem with trying to store the data in memory is\nthat these files get big enough that N bytes/block could still be\npretty big. For instance, if you're sorting 100GB of data with 8GB of\nwork_mem, you'll need to write 13 tapes and then merge them. Supposing\nan IV of 12 bytes/block, the IV vector for each 8GB tape will be 12MB,\nso once you've written all 12 types and are ready to merge them,\nyou're going to have 156MB of IV data floating around. If you keep it\nin memory, it ought to count against your work_mem budget, and while\nit's not a big fraction of your available memory, it's also not\nnegligible. Worse (but less realistic) cases can also be constructed.\nTo avoid this kind of problem, you could write the IV data to disk.\nBut notice that tuplesort.c goes to a lot of work to make I/O\nsequential, and that helps performance. If you have to intersperse\nreads of separate IV files with the reads of the main data files,\nyou're going to degrade the I/O pattern. It would really be best if\nthe IVs were in line with the data itself, I think. 
(The same probably\napplies, and for not unrelated reasons, to SLRU data, if we're going\nto try to encrypt that.)\n\nNow, if you could store some kind of an IV \"seed\" where we only need\none per buffile rather than one per block, then that'd probably be\nfine to story in memory. But I don't see how that would work given\nthat we can overwrite already-written blocks and need a new IV if we\ndo.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 8 Oct 2019 14:45:41 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Hello.\n\nOn Tue, Oct 8, 2019 at 8:52 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > On Mon, Oct 7, 2019 at 3:01 PM Antonin Houska <ah@cybertec.at> wrote:\n> > > However the design doesn't seem to be stable enough at the\n> > > moment for coding to make sense.\n> >\n> > Well, I think the question is whether working further on your patch\n> > could produce some things that everyone would agree are a step\n> > forward.\n>\n> It would have made a lot of sense several months ago (Masahiko Sawada actually\n> used parts of our patch in the previous version of his patch (see [1]), but\n> the requirement to use a different IV for each execution of the encryption\n> changes things quite a bit.\n>\n> Besides the relation pages and SLRU (CLOG), which are already being discussed\n> elsewhere in the thread, let's consider other two file types:\n>\n> * Temporary files (buffile.c): we derive the IV from PID of the process that\n> created the file + segment number + block within the segment. This\n> information does not change if you need to write the same block again. If\n> new IV should be used for each encryption run, we can simply introduce an\n> in-memory counter that generates the IV for each block. 
However it becomes\n> trickier if the temporary file is shared by multiple backends. I think it\n> might still be easier to expose the IV values to other backends via shared\n> memory than to store them on disk ...\n\nI think encrypt a temporary file in a slightly different way.\nPreviously, I had a lot of trouble with IV uniqueness, but I have\nproposed a unique encryption key for each file.\n\nFirst, in the case of the CTR mode to be used, 32 bits are used for\nthe counter in the 128-bit nonce value.\nHere, the counter increases every time 16 bytes are encrypted, and\ntheoretically, if nonce 96 bits are the same, a total of 64 GiB can be\nencrypted.\n\nTherefore, in the case of buffile.c that creates a temporary file due\nto lack of work_mem, it is possible to use up to 1GiB per file, so it\nis possible to encrypt to a simple IV value sufficiently safely.\nThe problem is that a vulnerability occurs when 96-bit nonce values\nexcluding Counter are the same values.\n\nI also tried to generate IV using PID (32bit) + tempCounter (64bit) at\nfirst, but in the worst-case PID and tempCounter are used in the same\nvalues.\nTherefore, the uniqueness of the encryption key was considered without\nconsidering the uniqueness of the IV value.\n\nThe encryption key uses a separate key for each file, as described earlier.\nFirst, it generates a hash value randomly for the file, and uses the\nhash value and KEK (or MDEK) to derive and use the key with\nHMAC-SHA256.\nIn this case, there is no need to store the encryption key separately\nif it is not necessary to keep it in a separate IV file or memory.\n(IV is a hash value of 64 bits and a counter of 32 bits.)\n\nAlso, currently, the temporary file name is specified by the current\nPID.tempFileCounter, but if this is set to\nPID.tempFileCounter.hashvalue, we can encrypt and decrypt in any\nprocess thinking about.\n\nReference 
URL\nhttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n\n\n>\n> * \"Buffered transient file\". This is to be used instead of OpenTransientFile()\n> if user needs the option to encrypt the file. (Our patch adds this API to\n> buffile.c. Currently we use it in reorderbuffer.c to encrypt the data\n> changes produced by logical decoding, but there should be more use cases.)\n\nAgreed.\n\nBest regards.\nMoon.\n\n>\n> In this case we cannot keep the IVs in memory because user can close the\n> file anytime and open it much later. So we derive the IV by hashing the file\n> path. However if we should generate the IV again and again, we need to store\n> it on disk in another way, probably one IV value per block (PGAlignedBlock).\n>\n> However since our implementation of both these file types shares some code,\n> it might yet be easier if the shared temporary file also stored the IV on\n> disk instead of exposing it via shared memory ...\n>\n> Perhaps this is what I can work on, but I definitely need some feedback.\n>\n> [1] https://www.postgresql.org/message-id/CAD21AoBjrbxvaMpTApX1cEsO=8N=nc2xVZPB0d9e-VjJ=YaRnw@mail.gmail.com\n>\n> --\n> Antonin Houska\n> Web: https://www.cybertec-postgresql.com\n>\n>\n\n\n", "msg_date": "Wed, 9 Oct 2019 09:28:13 +0900", "msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Dear hackers.\n\nFirst, I don't know which email thread should written a reply,\ntherefore using the first email thread.\nSorry about the inconvenience...\n\nSawada-san and I have previously researched the PostgreSQL database\ncluster file that contains user data.\nThe result has been updated to the WIKI page[1], so share it here.\n\nThis result is simply a list of files that contain user data, so we\ncan think of it as the first step in classifying which files are\nencrypted.\nAbout the SLUR file that 
we have talked about so far, I think that\ndiscussions are in progress on the necessity of encryption, and I hope\nthat this discussion will be useful.\n#In proceeding with the current development, we specified an encrypted\nfile using the list above.\n\nIf the survey results are different, it would be a help for this\nproject if correct to the WIKI page.\n\n[1]\nhttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_contains_of_user_data_for_PostgreSQL_files\n\nBest regards.\nMoon.\n\nOn Tue, Oct 1, 2019 at 6:26 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> For full-cluster Transparent Data Encryption (TDE), the current plan is\n> to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n> overflow). The plan is:\n>\n> https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n>\n> We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact, or\n> other files. Is that correct? Do any other PGDATA files contain user\n> data?\n>\n> --\n> Bruce Momjian <bruce@momjian.us> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n> + As you are, so once was I. As I am, so you will be. 
+\n> + Ancient Roman grave inscription +\n>\n>\n\n\n", "msg_date": "Wed, 9 Oct 2019 14:34:24 +0900", "msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n\n> Hello.\n> \n> On Tue, Oct 8, 2019 at 8:52 PM Antonin Houska <ah@cybertec.at> wrote:\n> >\n> > Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > > On Mon, Oct 7, 2019 at 3:01 PM Antonin Houska <ah@cybertec.at> wrote:\n> > > > However the design doesn't seem to be stable enough at the\n> > > > moment for coding to make sense.\n> > >\n> > > Well, I think the question is whether working further on your patch\n> > > could produce some things that everyone would agree are a step\n> > > forward.\n> >\n> > It would have made a lot of sense several months ago (Masahiko Sawada actually\n> > used parts of our patch in the previous version of his patch (see [1]), but\n> > the requirement to use a different IV for each execution of the encryption\n> > changes things quite a bit.\n> >\n> > Besides the relation pages and SLRU (CLOG), which are already being discussed\n> > elsewhere in the thread, let's consider other two file types:\n> >\n> > * Temporary files (buffile.c): we derive the IV from PID of the process that\n> > created the file + segment number + block within the segment. This\n> > information does not change if you need to write the same block again. If\n> > new IV should be used for each encryption run, we can simply introduce an\n> > in-memory counter that generates the IV for each block. However it becomes\n> > trickier if the temporary file is shared by multiple backends. 
I think it\n> > might still be easier to expose the IV values to other backends via shared\n> > memory than to store them on disk ...\n> \n> I think encrypt a temporary file in a slightly different way.\n> Previously, I had a lot of trouble with IV uniqueness, but I have\n> proposed a unique encryption key for each file.\n> \n> First, in the case of the CTR mode to be used, 32 bits are used for\n> the counter in the 128-bit nonce value.\n> Here, the counter increases every time 16 bytes are encrypted, and\n> theoretically, if nonce 96 bits are the same, a total of 64 GiB can be\n> encrypted.\n\n> Therefore, in the case of buffile.c that creates a temporary file due\n> to lack of work_mem, it is possible to use up to 1GiB per file, so it\n> is possible to encrypt to a simple IV value sufficiently safely.\n> The problem is that a vulnerability occurs when 96-bit nonce values\n> excluding Counter are the same values.\n\nI don't think the lower 32 bits impose any limitation, see\nCRYPTO_ctr128_encrypt_ctr32() in OpenSSL: if this lower part overflows, the\nupper part is simply incremented. So it's up to the user to decide what\nportion of the IV he wants to control and what portion should be controlled by\nOpenSSL internally. 
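The increment semantics being described can be mimicked in a few lines. This is only an illustration of the carry behaviour, not OpenSSL code:

```python
def ctr128_inc32(counter_block: bytes) -> bytes:
    """Increment the low 32 bits of a 16-byte CTR counter block; on
    wrap-around, carry into the upper 96 bits, as described for
    CRYPTO_ctr128_encrypt_ctr32()/ctr96_inc()."""
    hi = int.from_bytes(counter_block[:12], "big")
    lo = int.from_bytes(counter_block[12:], "big")
    lo = (lo + 1) & 0xFFFFFFFF
    if lo == 0:                          # low 32-bit word overflowed
        hi = (hi + 1) & ((1 << 96) - 1)
    return hi.to_bytes(12, "big") + lo.to_bytes(4, "big")

iv = bytes(12) + b"\xff\xff\xff\xff"     # low 32 bits at their maximum
nxt = ctr128_inc32(iv)
assert nxt == bytes(11) + b"\x01" + bytes(4)   # carry reached the upper part
```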
Of course the application design should be such that no\noverflows into the upper (user specific) part occur because those would result\nin duplicate IVs.\n\n> I also tried to generate IV using PID (32bit) + tempCounter (64bit) at\n> first, but in the worst-case PID and tempCounter are used in the same\n> values.\n> Therefore, the uniqueness of the encryption key was considered without\n> considering the uniqueness of the IV value.\n\nIf you consider 64bit counter insufficient (here it seems that tempCounter\ncounts the 1GB segments), then we can't even use LSN as the IV for relation\npages.\n\n> The encryption key uses a separate key for each file, as described earlier.\n\nDo you mean a separate key for the whole temporary file, or for a single (1GB)\nsegment?\n\n> First, it generates a hash value randomly for the file, and uses the\n> hash value and KEK (or MDEK) to derive and use the key with\n> HMAC-SHA256.\n> In this case, there is no need to store the encryption key separately\n> if it is not necessary to keep it in a separate IV file or memory.\n> (IV is a hash value of 64 bits and a counter of 32 bits.)\n\nYou seem to miss the fact that user of buffile.c can seek in the file and\nrewrite arbitrary part. 
Thus you'd have to generate a new key for the part\nbeing changed.\n\nI think it's easier to use the same key for the whole 1GB segment if not for\nthe whole temporary file, and generate an unique IV each time we write a chung\n(BLCKSZ bytes).\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 09 Oct 2019 07:42:56 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Dear Antonin Houska.\nThank you for your attention to thie matter.\n\nOn Wed, Oct 9, 2019 at 2:42 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n>\n> > Hello.\n> >\n> > On Tue, Oct 8, 2019 at 8:52 PM Antonin Houska <ah@cybertec.at> wrote:\n> > >\n> > > Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > > On Mon, Oct 7, 2019 at 3:01 PM Antonin Houska <ah@cybertec.at> wrote:\n> > > > > However the design doesn't seem to be stable enough at the\n> > > > > moment for coding to make sense.\n> > > >\n> > > > Well, I think the question is whether working further on your patch\n> > > > could produce some things that everyone would agree are a step\n> > > > forward.\n> > >\n> > > It would have made a lot of sense several months ago (Masahiko Sawada actually\n> > > used parts of our patch in the previous version of his patch (see [1]), but\n> > > the requirement to use a different IV for each execution of the encryption\n> > > changes things quite a bit.\n> > >\n> > > Besides the relation pages and SLRU (CLOG), which are already being discussed\n> > > elsewhere in the thread, let's consider other two file types:\n> > >\n> > > * Temporary files (buffile.c): we derive the IV from PID of the process that\n> > > created the file + segment number + block within the segment. This\n> > > information does not change if you need to write the same block again. 
If\n> > > new IV should be used for each encryption run, we can simply introduce an\n> > > in-memory counter that generates the IV for each block. However it becomes\n> > > trickier if the temporary file is shared by multiple backends. I think it\n> > > might still be easier to expose the IV values to other backends via shared\n> > > memory than to store them on disk ...\n> >\n> > I think encrypt a temporary file in a slightly different way.\n> > Previously, I had a lot of trouble with IV uniqueness, but I have\n> > proposed a unique encryption key for each file.\n> >\n> > First, in the case of the CTR mode to be used, 32 bits are used for\n> > the counter in the 128-bit nonce value.\n> > Here, the counter increases every time 16 bytes are encrypted, and\n> > theoretically, if nonce 96 bits are the same, a total of 64 GiB can be\n> > encrypted.\n>\n> > Therefore, in the case of buffile.c that creates a temporary file due\n> > to lack of work_mem, it is possible to use up to 1GiB per file, so it\n> > is possible to encrypt to a simple IV value sufficiently safely.\n> > The problem is that a vulnerability occurs when 96-bit nonce values\n> > excluding Counter are the same values.\n>\n> I don't think the lower 32 bits impose any limitation, see\n> CRYPTO_ctr128_encrypt_ctr32() in OpenSSL: if this lower part overflows, the\n> upper part is simply incremented. So it's up to the user to decide what\n> portion of the IV he wants to control and what portion should be controlled by\n> OpenSSL internally. Of course the application design should be such that no\n> overflows into the upper (user specific) part occur because those would result\n> in duplicate IVs.\n\nI'm sorry. 
I seem to have misunderstood.\nWhen I rechecked the source code of OpenSSL, as you said, it is assumed\nthat the upper 96bit value is changed using the ctr96_inc() function.\nSorry..\n\n>\n> > I also tried to generate IV using PID (32bit) + tempCounter (64bit) at\n> > first, but in the worst-case PID and tempCounter are used in the same\n> > values.\n> > Therefore, the uniqueness of the encryption key was considered without\n> > considering the uniqueness of the IV value.\n>\n> If you consider 64bit counter insufficient (here it seems that tempCounter\n> counts the 1GB segments), then we can't even use LSN as the IV for relation\n> pages.\n\nThe worst case here is not a lack of tempCounter, but a problem that\noccurs when the PID is reused after a certain period.\nOf course, it is very unlikely to be a problem because it is a\ntemporary file, but since the file name reveals the PID and\ntempFileCounter, if you accumulate some data, the same key and the\nsame IV will be used to encrypt other data. So I thought there could\nbe a problem.\n\n\n>\n> > The encryption key uses a separate key for each file, as described earlier.\n>\n> Do you mean a separate key for the whole temporary file, or for a single (1GB)\n> segment?\n\nYes, that's right. Use a separate key per file.\n\n>\n> > First, it generates a hash value randomly for the file, and uses the\n> > hash value and KEK (or MDEK) to derive and use the key with\n> > HMAC-SHA256.\n> > In this case, there is no need to store the encryption key separately\n> > if it is not necessary to keep it in a separate IV file or memory.\n> > (IV is a hash value of 64 bits and a counter of 32 bits.)\n>\n> You seem to miss the fact that user of buffile.c can seek in the file and\n> rewrite arbitrary part. Thus you'd have to generate a new key for the part\n> being changed.\n\nThat's right. 
I wanted to ask this too.\nIs it possible to overwrite the data already written in the actual buffile.c?\nThis seems to become a problem when the BufFileWrite function is\ncalled, then the BufFileSeek function is called, and then BufFileRead is called.\nIn other words, the file is not written in units of 8kB; instead, the file\nposition is changed and some data is read at another position.\nI also thought that this would be a problem with re-creating the\nencrypted file, i.e., IV and key change would be necessary.\nSo far, my research has found no case of overwriting data at a\nprevious position after it has already been written to file data (where\nFilWrite is called).\nCan you tell me of a case that overwrites the buffer file? Sorry..\n\n\n>\n> I think it's easier to use the same key for the whole 1GB segment if not for\n> the whole temporary file, and generate an unique IV each time we write a chung\n> (BLCKSZ bytes).\n\nYes. I think there will probably be a discussion about which\nenc-key and IV to use.\nI hope to find the safest way through various discussions.\n\nBest regards.\nMoon.\n\n>\n> --\n> Antonin Houska\n> Web: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 9 Oct 2019 15:20:35 +0900", "msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n\n> On Wed, Oct 9, 2019 at 2:42 PM Antonin Houska <ah@cybertec.at> wrote:\n> >\n> > Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> >\n> > > I also tried to generate IV using PID (32bit) + tempCounter (64bit) at\n> > > first, but in the worst-case PID and tempCounter are used in the same\n> > > values.\n> > > Therefore, the uniqueness of the encryption key was considered without\n> > > considering the uniqueness of the IV value.\n> >\n> > If you consider 64bit counter insufficient (here it seems that tempCounter\n> > counts the 1GB 
segments), then we can't even use LSN as the IV for relation\n> > pages.\n> \n> The worst-case here is not a lack of tempCounter, but a problem that\n> occurs when PID is reused after a certain period.\n> Of course, it is very unlikely to be a problem because it is a\n> temporary file, but since the file name can know the PID and\n> tempFileCounter, if you accumulate some data, the same key and the\n> same IV will be used to encrypt other data. So I thought there could\n> be a problem.\n\nok\n\n> > > First, it generates a hash value randomly for the file, and uses the\n> > > hash value and KEK (or MDEK) to derive and use the key with\n> > > HMAC-SHA256.\n> > > In this case, there is no need to store the encryption key separately\n> > > if it is not necessary to keep it in a separate IV file or memory.\n> > > (IV is a hash value of 64 bits and a counter of 32 bits.)\n> >\n> > You seem to miss the fact that user of buffile.c can seek in the file and\n> > rewrite arbitrary part. Thus you'd have to generate a new key for the part\n> > being changed.\n> \n> That's right. I wanted to ask this too.\n> Is it possible to overwrite the data already written in the actual buffile.c?\n> Such a problem seems to become a problem when BufFileWRite function is\n> called, and BufFileSeek function is called, and BufFileRead is called.\n> In other words, the file is not written in units of 8kb, but the file\n> is changed in the pos, and some data is read in another pos.\n\nv04-0011-Make-buffile.c-aware-of-encryption.patch in [1] changes buffile.c so\nthat data is read and written in 8kB blocks if encryption is enabled. In order\nto record the IV per block, the computation of the buffer position within the\nfile would have to be adjusted somehow. 
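To illustrate what the adjustment could look like: if each 8kB on-disk block reserves a small header for its IV, the logical byte offset that buffile.c callers see no longer equals the physical offset in the file. A stdlib-only sketch of that mapping (the 16-byte header size and the helper names are illustrative assumptions of mine, not code from the patch):

```python
BLCKSZ = 8192               # on-disk block size, as in PostgreSQL
IV_HDR = 16                 # hypothetical per-block IV header at block start
PAYLOAD = BLCKSZ - IV_HDR   # user-visible bytes per block

def physical_offset(logical_offset):
    """Map a logical (user-visible) byte offset to the physical file
    offset, skipping the IV header stored at the start of every block."""
    block, within = divmod(logical_offset, PAYLOAD)
    return block * BLCKSZ + IV_HDR + within

def logical_size(physical_size):
    """User-visible bytes contained in a file of the given physical size."""
    blocks, tail = divmod(physical_size, BLCKSZ)
    return blocks * PAYLOAD + max(0, tail - IV_HDR)
```

Seek arithmetic would then translate logical positions through such a mapping before calling the underlying file-positioning routine.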
I can check it soon but not in the\nnext few days.\n\n> I also thought that this would be a problem with re-creating the\n> encrypted file, i.e., IV and key change would be necessary,\n> So far, my research has found no case of overwriting data in the\n> previous pos after it has already been created in File data (where\n> FilWrite is called).\n> Can you tell me the case overwriting buffer file?\n\n(I suppose you mean BufFileWrite(), not FileWrite()). I don't remember if I\never checked particular use case in the PG core, but as long as buffer.c API\nallows such a thing to happen, the encryption code needs to handle it anyway.\n\nv04-0012-Add-tests-for-buffile.c.patch in [1] contains regression tests that\ndo involve temp file overwriting.\n\n[1] https://www.postgresql.org/message-id/7082.1562337694@localhost\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 09 Oct 2019 08:57:39 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Thu, Oct 3, 2019 at 4:40 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> > > On Mon, Sep 30, 2019 at 5:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > For full-cluster Transparent Data Encryption (TDE), the current plan is\n> > > > to encrypt all heap and index files, WAL, and all pgsql_tmp (work_mem\n> > > > overflow). The plan is:\n> > > >\n> > > >\n> > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n> > > >\n> > > > We don't see much value to encrypting vm, fsm, pg_xact, pg_multixact,\n> > or\n> > > > other files. Is that correct? Do any other PGDATA files contain user\n> > > > data?\n> > >\n> > > As others have said, that sounds wrong to me. 
I think you need to\n> > > encrypt everything.\n> >\n> > That isn't what other database systems do though and isn't what people\n> > actually asking for this feature are expecting to have or deal with.\n> \n> Do any of said other database even *have* the equivalence of say pg_clog or\n> pg_multixact *stored outside their tablespaces*? (Because as long as the\n> data is in the tablespace, it's encrypted when using tablespace\n> encryption..)\n\nThat's a fair question and while I'm not specifically sure about all of\nthem, I do believe you're right that for some, the tablespace/database\nincludes that information (and WAL) instead of having it external. I'm\nalso pretty sure that there's still enough information that isn't\nencrypted to at least *start* the database server. In many ways, we are\nunfortunately the oddball when it comes to having these cluster-level\nthings that we probably do want to encrypt (I'd be thinking more about\npg_authid here than clog, and potentially the WAL).\n\nI've been meaning to write up a wiki page or something on this but I\njust haven't found time, so I'm going to give up on that and just share\nmy thoughts here and folks can do with them what they wish-\n\nWhen it comes to use-cases and attack vectors, I feel like there's\nreally two \"big\" choices, and I'd like us to support both, ideally, but\nit boils down to this: do you trust the database maintenance, et al,\nprocesses, or no? The same question, put another way, is, do you trust\nhaving unencrypted/sensitive data in shared buffers?\n\nLet's talk through these for a minute:\n\nYes, shared_buffers is trusted implies:\n\n- More data (usefully or not) can be encrypted\n - WAL, clog, multixact, pg statistics, et al\n- Various PG processes need to know the decryption keys necessary\n (autovacuum, crash recovery, being big ones)\n ... 
ideally, we could still *start*, which is why I continue to argue\n that we shouldn't encrypt *everything* because not being able to even\n start the database system really sucks. What exactly it is that we\n need I don't know off-hand, maybe we don't need clog, but it seems\n likely we'll need pg_controldata, for example. My gut feeling on this\n is really that we need enough to start and open up the vault- which\n probably means that the vault needs to look more like what I describe\n below in the situation where you don't trust shared_buffers, to the\n point where we might have seperate WAL/clog/et al for the vault itself\n- Fewer limitations (indexes can work more-or-less as-is, for example)\n- Attack vectors:\n - Anything that can access shared buffers can get a ton of data\n - Bugs in PG that expose memory can be leveraged to get access to data\n and keys\n - root on the system can pretty trivially gain access to everything\n - If someone steals the disks/backups, they can't get access to much\n - Or, if your cloud/storage vendor decides to snoop around they can't\n see much\n\nNo, shared_buffers is NOT trusted implies:\n\n- we need enough unencrypted data to bring the system up and online and\n working (crash recovery, autovacuum, need to work)- this likely\n implies that things like WAL, clog, et al, have to be mostly\n unencrypted, to allow these processes to work\n- Limitations on indexes (we can't have the index have unencrypted data,\n but we also have to have autovacuum able to work... I actually wonder\n if this might be something we could solve by encrypting the internal\n pages, leaving the TIDs exposed so that they can be cleaned up but\n leaf pages have their own ordering so that's not great... 
I suspect\n something like this is the reason for the index limitation in other\n database systems that support column-level encryption)\n- Sensitive data in WAL is already encrypted\n- All decryption happens in a given backend when it's sending data to\n the client\n- Attack vectors:\n - root can watch network traffic or individual sessions, possibly gain\n access to keys (certainly with more difficulty though)\n - Bugs in PG shouldn't make it very easy for an external attacker to\n gain access to anything except what they already had access to\n (sure, they could see shared buffers and see what's in their\n backend, but everything in shared buffers that's sensitive should be\n encrypted, and for the most part what's in their backend should only\n be things they're allowed to access anyway)\n - If someone steals the disks/backups, they could potentially figure\n out more information about what was happening on the system\n - Or, if your cloud/storage vendor decides to snoop around, they could\n possibly figure things out\n\nAnd then, of course, you can get into the fun of, well, maybe we should\nhave both options be supported at the same time.\n\nLooking from an attack-vector standpoint, if the concern is primairly\nabout external attackers through SQL injection and database bugs, not\ntrusting shared buffers is pretty clearly the way to go. 
If the concern\nis about stealing hard drives or backups, well, FDE is a great solution\nthere, along with encrypted backups, but, sure, if we rule those out for\nsome reason then we can say that, yes, this will be helpful for that\nkind of an attack.\n\nIn either case, we do need a vaulting system, and I think we need to be\nable to start up PG and get the vault open and accept connections.\n\nThanks,\n\nStephen", "msg_date": "Wed, 9 Oct 2019 10:30:23 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 7, 2019 at 12:34:36PM -0400, Bruce Momjian wrote:\n> On Mon, Oct 7, 2019 at 12:30:37PM -0400, Robert Haas wrote:\n> > On Mon, Oct 7, 2019 at 11:48 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > Well, I am starting with the things I _know_ need encrypting, and am\n> > > then waiting for others to tell me what to add. Cybertec has not\n> > > provided a list and reasons yet, that I have seen. This is why I\n> > > started this public thread, so we could get a list and agree on it.\n> > \n> > Well that's fine, but you could also open up the patch and have a look\n> > at it. Even if you just looked at which files it modifies, it would\n> > enable you to add some important things do your list.\n> \n> Uh, I am really then just importing what one group decided, which seems\n> unsafe. I think it needs a fresh look at all files.\n\nSomeone has written a list of all PGDATA files so its TDE status can be\nrecorded:\n\n\thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_contains_of_user_data_for_PostgreSQL_files\n\nFeel free to update it.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 9 Oct 2019 11:07:30 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Wed, 9 Oct 2019 at 22:30, Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> - All decryption happens in a given backend when it's sending data to\n> the client\n>\n\nThat is not what I think of as TDE. But upon review, it looks like I'm\nwrong, and the usual usage of TDE is for server-side-only encryption\nat-rest.\n\nBut when I'm asked about TDE, people are generally actually asking for data\nthat's encrypted at rest and in transit, where the client driver is\nresponsible for data encryption/decryption transparently to the\napplication. The server is expected to be able to mark columns as\nencrypted, so it can report the column's true datatype while storing a\nbytea-like encrypted value for it instead. In this case the server does not\nknow the column encryption/decryption key at all, and it cannot perform any\noperations on the data except for input and output.\n\nSome people ask for indexable encrypted columns, but I tend to explain to\nthem how impractical and inefficient that is. You can support hash indexes\nif you don't salt the encrypted data, but that greatly weakens the\nencryption by allowing attackers to use dictionary attacks and other brute\nforce techniques efficiently. And you can't support b-tree > and < without\nvery complex encryption schemes (\nhttps://en.wikipedia.org/wiki/Homomorphic_encryption).\n\nI see quite a lot of demand for this column level driver-assisted\nencryption. I think it'd actually be quite simple for the PostgreSQL server\nto provide support for it too, since most of the work is done by the\ndriver. 
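To make the earlier hash-index point concrete, here is a stdlib-only sketch (not from any patch; the helpers are invented for illustration) of why an unsalted digest is indexable but open to dictionary attacks, while a salted digest is neither:

```python
import hashlib
import os

def unsalted(value):
    # Deterministic: equal plaintexts yield equal digests, so the digest
    # can serve as a hash-index key -- but an attacker can hash a
    # dictionary of likely plaintexts and compare.
    return hashlib.sha256(value).digest()

def salted(value, salt=None):
    # A fresh random salt makes equal plaintexts yield different digests,
    # defeating dictionary attacks -- and also making it impossible to
    # recompute the stored digest at lookup time without the salt.
    if salt is None:
        salt = os.urandom(16)
    return salt, hashlib.sha256(salt + value).digest()

# Equal plaintexts: unsalted digests collide (indexable, attackable)...
assert unsalted(b"4111-1111") == unsalted(b"4111-1111")
# ...while salted digests differ (safer, but not directly indexable).
assert salted(b"4111-1111")[1] != salted(b"4111-1111")[1]
```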
But I won't go into the design here since this thread appears to be\nabout encryption at rest only, fully server-side.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Thu, 10 Oct 2019 07:47:46 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Greetings,\n\n* Craig Ringer (craig@2ndquadrant.com) wrote:\n> On Wed, 9 Oct 2019 at 22:30, Stephen Frost <sfrost@snowman.net> wrote:\n> > - All decryption happens in a given backend when it's sending data to\n> > the client\n> \n> That is not what I think of as TDE. But upon review, it looks like I'm\n> wrong, and the usual usage of TDE is for server-side-only encryption\n> at-rest.\n\nYes, that's typically what TDE is, at least in the relational DBMS\nworld.\n\n> But when I'm asked about TDE, people are generally actually asking for data\n> that's encrypted at rest and in transit, where the client driver is\n> responsible for data encryption/decryption transparently to the\n> application. The server is expected to be able to mark columns as\n> encrypted, so it can report the column's true datatype while storing a\n> bytea-like encrypted value for it instead. In this case the server does not\n> know the column encryption/decryption key at all, and it cannot perform any\n> operations on the data except for input and output.\n\nThis is definitely also a thing though I'm not sure what it's called,\nexactly. Having everything happen on the client side is also,\ncertainly, a better solution as it removes the risk of root on the\ndatabase server being able to gain access to the data. 
This is also\nwhat I recommend in a lot of situations- have the client side\napplication handle the encryption/decryption, working with a vaulting\nsolution ideally, but it'd definitely be neat to add this as a\ncapability to PG.\n\n> Some people ask for indexable encrypted columns, but I tend to explain to\n> them how impractical and inefficient that is. You can support hash indexes\n> if you don't salt the encrypted data, but that greatly weakens the\n> encryption by allowing attackers to use dictionary attacks and other brute\n> force techniques efficiently. And you can't support b-tree > and < without\n> very complex encryption schemes (\n> https://en.wikipedia.org/wiki/Homomorphic_encryption).\n\nI'm not sure why you wouldn't salt the hash..? That's pretty important,\nimv, and, of course, you have to store the salt but that shouldn't be\nthat big of a deal, I wouldn't think. Agreed that you can't support\nb-tree (even with complex encryption schemes..., I've read some papers\nabout how just </> is enough to be able to glean a good bit of info\nfrom, not super relevant to the overall discussion here so I won't go\nhunt them down right now, but if there's interest, I can try to do so).\n\n> I see quite a lot of demand for this column level driver-assisted\n> encryption. I think it'd actually be quite simple for the PostgreSQL server\n> to provide support for it too, since most of the work is done by the\n> driver. 
But I won't go into the design here since this thread appears to be\n> about encryption at rest only, fully server-side.\n\nYes, that's what this thread is about, but I very much like the idea of\ndriver-assisted encryption on the client side and would love it if\nsomeone had time to work on it.\n\nThanks,\n\nStephen", "msg_date": "Thu, 10 Oct 2019 10:40:37 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Wed, Oct 9, 2019 at 3:57 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n>\n> > On Wed, Oct 9, 2019 at 2:42 PM Antonin Houska <ah@cybertec.at> wrote:\n> > >\n> > > Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> > >\n> > > > I also tried to generate IV using PID (32bit) + tempCounter (64bit) at\n> > > > first, but in the worst-case PID and tempCounter are used in the same\n> > > > values.\n> > > > Therefore, the uniqueness of the encryption key was considered without\n> > > > considering the uniqueness of the IV value.\n> > >\n> > > If you consider 64bit counter insufficient (here it seems that tempCounter\n> > > counts the 1GB segments), then we can't even use LSN as the IV for relation\n> > > pages.\n> >\n> > The worst-case here is not a lack of tempCounter, but a problem that\n> > occurs when PID is reused after a certain period.\n> > Of course, it is very unlikely to be a problem because it is a\n> > temporary file, but since the file name can know the PID and\n> > tempFileCounter, if you accumulate some data, the same key and the\n> > same IV will be used to encrypt other data. 
So I thought there could\n> > be a problem.\n>\n> ok\n>\n> > > > First, it generates a hash value randomly for the file, and uses the\n> > > > hash value and KEK (or MDEK) to derive and use the key with\n> > > > HMAC-SHA256.\n> > > > In this case, there is no need to store the encryption key separately\n> > > > if it is not necessary to keep it in a separate IV file or memory.\n> > > > (IV is a hash value of 64 bits and a counter of 32 bits.)\n> > >\n> > > You seem to miss the fact that user of buffile.c can seek in the file and\n> > > rewrite arbitrary part. Thus you'd have to generate a new key for the part\n> > > being changed.\n> >\n> > That's right. I wanted to ask this too.\n> > Is it possible to overwrite the data already written in the actual buffile.c?\n> > Such a problem seems to become a problem when BufFileWRite function is\n> > called, and BufFileSeek function is called, and BufFileRead is called.\n> > In other words, the file is not written in units of 8kb, but the file\n> > is changed in the pos, and some data is read in another pos.\n>\n> v04-0011-Make-buffile.c-aware-of-encryption.patch in [1] changes buffile.c so\n> that data is read and written in 8kB blocks if encryption is enabled. In order\n> to record the IV per block, the computation of the buffer position within the\n> file would have to be adjusted somehow. I can check it soon but not in the\n> next few days.\n\nAs far as I read the patch the nonce consists of pid, counter and\nblock number, where the counter is a number incremented each time a\nBufFile is created. Therefore it could happen that the buffer data is\nrewritten with the same nonce and key, which is bad.\n\nSo I think we can have the rewrite counter of the block in each 8kB\nblock header. And then the nonce consists of block number within a\nsegment file (4 bytes), temp file counter (8 bytes), rewrite counter\n(2 bytes) and CTR mode counter (2 bytes). 
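For illustration, such a 16-byte layout could be packed as follows (the field order and big-endian byte order are assumptions of mine, not a settled design):

```python
import struct

def make_nonce(block_no, temp_file_counter, rewrite_counter, ctr_counter=0):
    """Pack the proposed nonce: block number within a segment file
    (4 bytes), temp file counter (8 bytes), rewrite counter (2 bytes)
    and CTR mode counter (2 bytes) -- 16 bytes in total."""
    return struct.pack(">IQHH", block_no, temp_file_counter,
                       rewrite_counter, ctr_counter)

# Bumping the rewrite counter yields a different nonce for the same
# block, which is what would make overwriting a block safe under CTR.
assert len(make_nonce(7, 42, 0)) == 16
assert make_nonce(7, 42, 0) != make_nonce(7, 42, 1)
```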
And then if we have a\nsingle-use encryption key per backend process I guess we can\nguarantee the uniqueness of the combination of key and nonce.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Sat, 12 Oct 2019 19:19:05 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> On Wed, Oct 9, 2019 at 3:57 PM Antonin Houska <ah@cybertec.at> wrote:\n> >\n> > Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> >\n> > v04-0011-Make-buffile.c-aware-of-encryption.patch in [1] changes buffile.c so\n> > that data is read and written in 8kB blocks if encryption is enabled. In order\n> > to record the IV per block, the computation of the buffer position within the\n> > file would have to be adjusted somehow. I can check it soon but not in the\n> > next few days.\n> \n> As far as I read the patch the nonce consists of pid, counter and\n> block number where the counter is the number incremented each time of\n> creating a BufFile. Therefore it could happen to rewrite the buffer\n> data with the same nonce and key, which is bad.\n\nThis patch was written before the requirement on non-repeating IV was raised,\nand it does not use the AES-CTR mode. I mentioned it here because it reads /\nwrites data in 8kB blocks.\n\n> So I think we can have the rewrite counter of the block in the each 8k\n> block header. And then the nonce consists of block number within a\n> segment file (4 bytes), temp file counter (8 bytes), rewrite counter\n> (2 bytes) and CTR mode counter (2 bytes). 
And then if we have a\n> single-use encryption key per backend processes I guess we can\n> guarantee the uniqueness of the combination of key and nonce.\n\nSince the segment size is 1 GB, the segment consists of 2^17 blocks, so 4 bytes\nwill not be utilized.\n\nAs for the "CTR mode counter", consider that it gets incremented once per 16\nbytes of input. So even if BLCKSZ is 32 kB, we need no more than 11 bits for\nthis counter.\n\nIf these two parts become smaller, we can perhaps increase the size of the\n"rewrite counter".\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 14 Oct 2019 08:42:49 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Mon, Oct 14, 2019 at 3:42 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > On Wed, Oct 9, 2019 at 3:57 PM Antonin Houska <ah@cybertec.at> wrote:\n> > >\n> > > Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> > >\n> > > v04-0011-Make-buffile.c-aware-of-encryption.patch in [1] changes buffile.c so\n> > > that data is read and written in 8kB blocks if encryption is enabled. In order\n> > > to record the IV per block, the computation of the buffer position within the\n> > > file would have to be adjusted somehow. I can check it soon but not in the\n> > > next few days.\n> >\n> > As far as I read the patch the nonce consists of pid, counter and\n> > block number where the counter is the number incremented each time of\n> > creating a BufFile. Therefore it could happen to rewrite the buffer\n> > data with the same nonce and key, which is bad.\n>\n> This patch was written before the requirement on non-repeating IV was raiesed,\n> and it does not use the AES-CTR mode. 
I mentioned it here because it reads /\n> writes data in 8kB blocks.\n>\n> > So I think we can have the rewrite counter of the block in the each 8k\n> > block header. And then the nonce consists of block number within a\n> > segment file (4 bytes), temp file counter (8 bytes), rewrite counter\n> > (2 bytes) and CTR mode counter (2 bytes). And then if we have a\n> > single-use encryption key per backend processes I guess we can\n> > guarantee the uniqueness of the combination of key and nonce.\n>\n> Since the segment size is 1 GB, the segment cosists of 2^17 blocks, so 4 bytes\n> will not be utilized.\n>\n> As for the \"CTR mode counter\", consider that it gets incremented once per 16\n> bytes of input. So even if BLCKSZ is 32 kB, we need no more than 11 bits for\n> this counter.\n>\n> If these two parts become smaller, we can perhaps increase the size of the\n> \"rewrite counter\".\n\nYeah I designed it to make implementation easier but we can increase\nthe size of the rewrite counter to 3 bytes while the block number uses\n3 bytes.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Tue, 15 Oct 2019 14:08:20 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "On Thu, Oct 10, 2019 at 10:40:37AM -0400, Stephen Frost wrote:\n> > Some people ask for indexable encrypted columns, but I tend to explain to\n> > them how impractical and inefficient that is. You can support hash indexes\n> > if you don't salt the encrypted data, but that greatly weakens the\n> > encryption by allowing attackers to use dictionary attacks and other brute\n> > force techniques efficiently. And you can't support b-tree > and < without\n> > very complex encryption schemes (\n> > https://en.wikipedia.org/wiki/Homomorphic_encryption).\n> \n> I'm not sure why you wouldn't salt the hash..? 
That's pretty important,\n> imv, and, of course, you have to store the salt but that shouldn't be\n> that big of a deal, I wouldn't think. Agreed that you can't support\n> b-tree (even with complex encryption schemes..., I've read some papers\n> about how just </> is enough to be able to glean a good bit of info\n> from, not super relevant to the overall discussion here so I won't go\n> hunt them down right now, but if there's interest, I can try to do so).\n\nYes. you can add salt to the value you store in the hash index, but when\nyou are looking for a matching value, how do you know what salt to use\nto find it in the index?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 23 Oct 2019 17:44:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Thu, Oct 10, 2019 at 10:40:37AM -0400, Stephen Frost wrote:\n> > > Some people ask for indexable encrypted columns, but I tend to explain to\n> > > them how impractical and inefficient that is. You can support hash indexes\n> > > if you don't salt the encrypted data, but that greatly weakens the\n> > > encryption by allowing attackers to use dictionary attacks and other brute\n> > > force techniques efficiently. And you can't support b-tree > and < without\n> > > very complex encryption schemes (\n> > > https://en.wikipedia.org/wiki/Homomorphic_encryption).\n> > \n> > I'm not sure why you wouldn't salt the hash..? That's pretty important,\n> > imv, and, of course, you have to store the salt but that shouldn't be\n> > that big of a deal, I wouldn't think. 
Agreed that you can't support\n> > b-tree (even with complex encryption schemes..., I've read some papers\n> > about how just </> is enough to be able to glean a good bit of info\n> > from, not super relevant to the overall discussion here so I won't go\n> > hunt them down right now, but if there's interest, I can try to do so).\n> \n> Yes. you can add salt to the value you store in the hash index, but when\n> you are looking for a matching value, how do you know what salt to use\n> to find it in the index?\n\nYeah, if the only value you have to look up with is the unencrypted\nsensitive information itself then you'd have to have the data hashed\nwithout a salt.\n\nIf the application had some way of providing a salt and then sending it\nto the database as part of the query, then you could (we used to do\nexactly this with md5...). This probably gets to be pretty use-case\nspecific, but it seems like if we had a data type for \"hashed value,\noptionally including a salt\" which could then be used with a hash index,\nit'd be pretty helpful for users.\n\nThanks,\n\nStephen", "msg_date": "Fri, 25 Oct 2019 11:03:49 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Transparent Data Encryption (TDE) and encrypted files" } ]
[ { "msg_contents": "Our plan for full-cluster Transparent Data Encryption (TDE) is here:\n \n\thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n\nThe values it has, I think, are:\n\n* encrypts data for anyone with read-access to the file system (but not\n memory)\n\n * I think write access would allow access to the encryption keys\n by modifying postgresql.conf or other files\n\n * This is particularly useful if the storage is remote\n\n* encrypts non-logical/non-pg_dump-like backups\n\n* fulfills several security compliance requirements\n\n* encrypts storage\n\n* perhaps easier to implement than file system encryption\n\nIs that accurate?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 30 Sep 2019 17:40:52 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Value of Transparent Data Encryption (TDE)" }, { "msg_contents": "On Mon, Sep 30, 2019 at 05:40:52PM -0400, Bruce Momjian wrote:\n>Our plan for full-cluster Transparent Data Encryption (TDE) is here:\n>\n>\thttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption#TODO_for_Full-Cluster_Encryption\n>\n>The values it has, I think, are:\n>\n>* encrypts data for anyone with read-access to the file system (but not\n> memory)\n>\n> * I think write access would allow access to the encryption keys\n> by modifying postgresql.conf or other files\n>\n> * This is particularly useful if the storage is remote\n>\n>* encrypts non-logical/non-pg_dump-like backups\n>\n>* fulfills several security compliance requirements\n>\n>* encrypts storage\n>\n\nMaybe. I think this is approaching the problem from the wrong angle.\nEncryption is more a means of achieving something. OK, for compliance\npurposes it's useful to be able to tick \"encryption\" checkbox.
But other\nthan that, people really care about threat models and how encryption\nimproves them (or does not).\n\nSo I think it'd be valuable to improve the \"threat models\" section on\nthat wiki page, with more detailed cases. We need to explain what\ncapabilities the attacker has (can he read files? can he interact with\nthe database? can he read memory? ..) and then explain how that works\nwith an encrypted cluster.\n\n\n>* perhaps easier to implement than file system encryption\n>\n\nNot sure. IMO filesystem encryption is fairly simple to use, to the\nextent that it's hard to beat. The problem is usually people can't use\nit for various reasons - lack of support on their OS, no access to the\nblock device, problems with obtaining the privileges etc.\n\nHaving it built into the database means you can sidestep most of those\nissues (e.g. you can deploy it as a DBA, on arbitrary OS, ...).\n\nPlus it allows features you can't easily achieve with fs encryption,\nbecause the filesystem only sees opaque data files. So having keys per\ndatabase/user/... is easier from within the database.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 1 Oct 2019 15:43:05 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Value of Transparent Data Encryption (TDE)" }, { "msg_contents": "On Tue, Oct 1, 2019 at 03:43:05PM +0200, Tomas Vondra wrote:\n> On Mon, Sep 30, 2019 at 05:40:52PM -0400, Bruce Momjian wrote:\n> Maybe. I think this is approaching the problem from the wrong angle.\n> Encryption is more a means of achieving something. OK, for compliance\n> purposes it's useful to be able to tick \"encryption\" checkbox.
But other\n> than that, people really care about threat models and how encryption\n> improves them (or does not).\n\nYes, that is what I am trying to do with this email thread.\n\n> So I think it'd be valuable to improve the \"threat models\" section on\n> that wiki page, with more detailed cases. We need to explain what\n> capabilities the attacker has (can he read files? can he interact with\n> the database? can he read memory? ..) and then explain how that works\n> with an encrypted cluster.\n> \n> \n> > * perhaps easier to implement than file system encryption\n> > \n> \n> Not sure. IMO filesystem encryption is fairly simple to use, to the\n> extent that it's hard to beat. The problem is usually people can't use\n> it for various reasons - lack of support on their OS, no access to the\n> block device, problems with obtaining the privileges etc.\n\nRight, that's the \"perhaps easier\" part of my text above.\n\n> Having it built into the database means you can sidestep most of those\n> issues (e.g. you can deploy it as a DBA, on arbitrary OS, ...).\n> \n> Plus it allows features you can't easily achieve with fs encryption,\n> because the filesystem only sees opaque data files. So having keys per\n> database/user/... is easier from within the database.\n\nYes, but we will not be doing that for the first release because of the\ncomplexity related to handling this in WAL and requiring crash recovery\nto be able to unlock all the keys for replay. I personally think that\ndatabase/user/... keys are best done at the SQL level, with proper\nlocking. pgcryptokey (http://momjian.us/download/pgcryptokey/) is an\nexample of that.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be.
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 1 Oct 2019 11:54:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Value of Transparent Data Encryption (TDE)" }, { "msg_contents": "On Tue, Oct 1, 2019 at 11:54:26AM -0400, Bruce Momjian wrote:\n> On Tue, Oct 1, 2019 at 03:43:05PM +0200, Tomas Vondra wrote:\n> > Plus it allows features you can't easily achieve with fs encryption,\n> > because the filesystem only sees opaque data files. So having keys per\n> > database/user/... is easier from within the database.\n> \n> Yes, but we will not be doing that for the first release because of the\n> complexity related to handling this in WAL and requiring crash recovery\n> to be able to unlock all the keys for replay. I personally think that\n> database/user/... keys are best done at the SQL level, with proper\n> locking. pgcryptokey (http://momjian.us/download/pgcryptokey/) is an\n> example of that.\n\nJust to give more detail. Initially, there was a desire to store keys\nin only one place, either in the file system or in database tables. \nHowever, it became clear that the needs of booting the server and crash\nrecovery required file system keys, and per-user/db keys were best done\nat the SQL level, so that indexing can be used, and logical dumps\ncontain the locked keys. SQL-level storage allows databases to be\ncompletely independent of other databases in terms of key storage and\nusage.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 1 Oct 2019 12:19:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Value of Transparent Data Encryption (TDE)" }, { "msg_contents": "On Tue, Oct 1, 2019 at 12:19 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Just to give more detail. Initially, there was a desire to store keys\n> in only one place, either in the file system or in database tables.\n> However, it became clear that the needs of booting the server and crash\n> recovery required file system keys, and per-user/db keys were best done\n> at the SQL level, so that indexing can be used, and logical dumps\n> contain the locked keys. SQL-level storage allows databases to be\n> completely independent of other databases in terms of key storage and\n> usage.\n\nWait, we're going to store the encryption keys with the database? It\nseems like you're debating whether to store your front door keys under\nthe doormat or in a fake rock by the side of the path, when what you\nreally ought to be doing is keeping them physically separated from the\nhouse, like in your pocket or your purse.\n\nIt seems to me that the right design is that there's a configurable\nmechanism for PostgreSQL to request keys from someplace outside the\ndatabase, and that other place is responsible for storing the keys\nsecurely and not losing them. Probably, it's a key-server of some kind\nrunning on another machine, but if you really want you can do\nsomething insecure instead, like getting them from the local\nfilesystem.\n\nI admit I haven't been following the threads on this topic, but this\njust seems like a really strange idea.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 3 Oct 2019 10:26:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Value of Transparent Data Encryption (TDE)" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Oct 1, 2019 at 12:19 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Just to give more detail. Initially, there was a desire to store keys\n> > in only one place, either in the file system or in database tables.\n> > However, it became clear that the needs of booting the server and crash\n> > recovery required file system keys, and per-user/db keys were best done\n> > at the SQL level, so that indexing can be used, and logical dumps\n> > contain the locked keys. SQL-level storage allows databases to be\n> > completely independent of other databases in terms of key storage and\n> > usage.\n> \n> Wait, we're going to store the encryption keys with the database? It\n> seems like you're debating whether to store your front door keys under\n> the doormat or in a fake rock by the side of the path, when what you\n> really ought to be doing is keeping them physically separated from the\n> house, like in your pocket or your purse.\n\nThis isn't news and shouldn't be shocking- databases which support TDE\nall have a vaulting system for managing the keys and, yes, that's stored\nwith the database.\n\n> It seems to me that the right design is that there's a configurable\n> mechanism for PostgreSQL to request keys from someplace outside the\n> database, and that other place is responsible for storing the keys\n> securely and not losing them. Probably, it's a key-server of some kind\n> running on another machine, but if you really want you can do\n> something insecure instead, like getting them from the local\n> filesystem.\n\nI support the option to have an external vault that's used, but I don't\nbelieve that should be a requirement and I don't think that removes the\nneed to have a vaulting system of our own, so we can have a stand-alone\nTDE solution.\n\n> I admit I haven't been following the threads on this topic, but this\n> just seems like a really strange idea.\n\nIt's not new and it's how TDE works in all of the other database systems\nwhich support it.\n\nThanks,\n\nStephen", "msg_date": "Thu, 3 Oct 2019 10:43:21 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Value of Transparent Data Encryption (TDE)" }, { "msg_contents": "On Thu, Oct 03, 2019 at 10:43:21AM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Robert Haas (robertmhaas@gmail.com) wrote:\n>> On Tue, Oct 1, 2019 at 12:19 PM Bruce Momjian <bruce@momjian.us> wrote:\n>> > Just to give more detail. Initially, there was a desire to store keys\n>> > in only one place, either in the file system or in database tables.\n>> > However, it became clear that the needs of booting the server and crash\n>> > recovery required file system keys, and per-user/db keys were best done\n>> > at the SQL level, so that indexing can be used, and logical dumps\n>> > contain the locked keys. SQL-level storage allows databases to be\n>> > completely independent of other databases in terms of key storage and\n>> > usage.\n>>\n>> Wait, we're going to store the encryption keys with the database? It\n>> seems like you're debating whether to store your front door keys under\n>> the doormat or in a fake rock by the side of the path, when what you\n>> really ought to be doing is keeping them physically separated from the\n>> house, like in your pocket or your purse.\n>\n>This isn't news and shouldn't be shocking- databases which support TDE\n>all have a vaulting system for managing the keys and, yes, that's stored\n>with the database.\n>\n\nRight. The important bit here is that the vault is encrypted, and has to\nbe unlocked using a passphrase (or something like that) when starting\nthe database. So it's not really as silly as a key under the doormat.\n\n>> It seems to me that the right design is that there's a configurable\n>> mechanism for PostgreSQL to request keys from someplace outside the\n>> database, and that other place is responsible for storing the keys\n>> securely and not losing them. Probably, it's a key-server of some kind\n>> running on another machine, but if you really want you can do\n>> something insecure instead, like getting them from the local\n>> filesystem.\n>\n>I support the option to have an external vault that's used, but I don't\n>believe that should be a requirement and I don't think that removes the\n>need to have a vaulting system of our own, so we can have a stand-alone\n>TDE solution.\n>\n\nRight.
If anything, we need a local vault that we could use for testing.\nIn other cases it might be a simple wrapper for a vault/keyring provided\nby the operating system (if it's good enough for gpg keys ...).\n\n>> I admit I haven't been following the threads on this topic, but this\n>> just seems like a really strange idea.\n>\n>It's not new and it's how TDE works in all of the other database systems\n>which support it.\n>\n\nYep.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Oct 2019 17:08:46 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Value of Transparent Data Encryption (TDE)" }, { "msg_contents": "On Thu, Oct 03, 2019 at 10:26:15AM -0400, Robert Haas wrote:\n> On Tue, Oct 1, 2019 at 12:19 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Just to give more detail. Initially, there was a desire to store\n> > keys in only one place, either in the file system or in database\n> > tables. However, it became clear that the needs of booting the\n> > server and crash recovery required file system keys, and\n> > per-user/db keys were best done at the SQL level, so that indexing\n> > can be used, and logical dumps contain the locked keys. SQL-level\n> > storage allows databases to be completely independent of other\n> > databases in terms of key storage and usage.\n> \n> Wait, we're going to store the encryption keys with the database?\n\nEncryption keys are fine there so long as decryption keys are\nseparate.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Thu, 3 Oct 2019 22:55:18 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Value of Transparent Data Encryption (TDE)" } ]
[ { "msg_contents": "I work for YugaByte, Inc (www.yugabyte.com <http://www.yugabyte.com/>). YugabyteDB re-uses the source code that implements the “upper half” of PostgreSQL Version 11.2. See here:\n\nhttps://blog.yugabyte.com/distributed-postgresql-on-a-google-spanner-architecture-query-layer/\n\nThis means that the problem that the PostgreSQL issue I describe tracks affects users of YugabyteDB too.\n\nI wrote up the problem here:\n\nhttps://github.com/yugabyte/yugabyte-db/issues/2464 <https://github.com/yugabyte/yugabyte-db/issues/2464>\n\nPlease read my account and comment.\n\nPeter Eisentraut, I hope that you’ll read this. I’m told that you're the authority for txn control in PL/pgSQL.", "msg_date": "Mon, 30 Sep 2019 15:37:56 -0700", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?PL/pgSQL_=E2=80=94_=22commit=22_illegal_in_the_executab?=\n =?utf-8?Q?le_section_of_a_block_statement_that_has_an_exception_section?=" }, { "msg_contents": "\n\n> On Sep 30, 2019, at 15:37, Bryn Llewellyn <bryn@yugabyte.com> wrote:\n> I wrote up the problem here:\n> \n> https://github.com/yugabyte/yugabyte-db/issues/2464\n\nThis is documented; it's the very last line of the page you reference in the Github issue:\n\n\tA transaction cannot be ended inside a block with exception handlers.\n\nYour discussion doesn't answer the specific issue: BEGIN/EXCEPTION/END in pl/pgSQL is implemented by savepoints. What semantics should COMMIT / ROLLBACK have inside of that? Doing a COMMIT / ROLLBACK (at the database level) at that point would lose the savepoints, which would break the BEGIN/EXCEPTION/END semantics.\n\nIt's not clear to me what the alternative semantics would be. Can you propose specific database behavior for a COMMIT or ROLLBACK inside a BEGIN/EXCEPTION/END block which retains the savepoint behavior of BEGIN/EXCEPTION/END?\n--\n-- Christophe Pettus\n xof@thebuild.com\n\n\n\n", "msg_date": "Mon, 30 Sep 2019 18:40:58 -0700", "msg_from": "Christophe Pettus <xof@thebuild.com>", "msg_from_op": false, "msg_subject": "=?utf-8?Q?Re=3A_PL/pgSQL_=E2=80=94_=22commit=22_illegal_in_the_ex?=\n =?utf-8?Q?ecutable_section_of_a_block_statement_that_has_an_exception_sec?=\n =?utf-8?Q?tion?=" } ]
[ { "msg_contents": "Hello,\r\n\r\nget_relkind_objtype(...) was introduced as part of 8b9e9644dc, and it doesn't include\r\nRELKIND_TOASTVALUE. As a result, when a user who has usage rights on schema pg_toast\r\nattempts to reindex a table it does not own, it fails with the wrong error\r\nmessage.\r\n\r\ntestuser is a non-superuser role who has been granted all on pg_toast\r\n\r\npostgres=> \\c\r\nYou are now connected to database \"postgres\" as user \"testuser\".\r\npostgres=> REINDEX TABLE pg_toast.pg_toast_16388;\r\nERROR: unexpected relkind: 116\r\n\r\nIt seems get_relkind_objtype(...) is only used as part of aclcheck_error(...).\r\nI've attached a patch to include RELKIND_TOASTVALUE in get_relkind_objtype.\r\nNow it fails with the proper error message.\r\n\r\npostgres=> \\c\r\nYou are now connected to database \"postgres\" as user \"testuser\".\r\npostgres=> REINDEX TABLE pg_toast.pg_toast_16388;\r\nERROR: must be owner of table pg_toast_16388\r\n\r\nCheers,\r\n\r\nJohn H", "msg_date": "Tue, 1 Oct 2019 00:10:50 +0000", "msg_from": "\"Hsu, John\" <hsuchen@amazon.com>", "msg_from_op": true, "msg_subject": "Include RELKIND_TOASTVALUE in get_relkind_objtype" }, { "msg_contents": "On Tue, Oct 01, 2019 at 12:10:50AM +0000, Hsu, John wrote:\n> get_relkind_objtype(...) was introduced as part of 8b9e9644dc, and it doesn't include\n> RELKIND_TOASTVALUE. As a result, when a user who has usage rights on schema pg_toast\n> attempts to reindex a table it does not own, it fails with the wrong error\n> message.\n\n(Adding Peter E. in CC)\n\nSure. However this implies that the user doing the reindex not only\nhas ownership of the relation worked on, but is also able to work\ndirectly on the schema pg_toast. Should we really encourage people to\ndo that with non-superusers?\n\n> It seems get_relkind_objtype(...) is only used as part of aclcheck_error(...).\n> I've attached a patch to include RELKIND_TOASTVALUE in get_relkind_objtype.\n> Now it fails with the proper error message.\n> \n> postgres=> \\c\n> You are now connected to database \"postgres\" as user \"testuser\".\n> postgres=> REINDEX TABLE pg_toast.pg_toast_16388;\n> ERROR: must be owner of table pg_toast_16388\n\nHere is a set of commands to see the failure:\n=# CREATE ROLE testuser LOGIN;\n=# GRANT USAGE ON SCHEMA pg_toast TO testuser;\n\\c postgres testuser\n=> REINDEX TABLE pg_toast.pg_toast_2609;\nERROR: XX000: unexpected relkind: 116\n=> REINDEX INDEX pg_toast.pg_toast_2609_index;\nERROR: 42501: must be owner of index pg_toast_2609_index\nLOCATION: aclcheck_error, aclchk.c:3623\n\nAs you wrote, get_relkind_objtype() is primarily used for ACL errors,\nbut we have another set of code paths with get_object_type() which\ngets called for a subset of ALTER TABLE commands. So this error can\nbe triggered in more ways, though you had better not do the following\none:\n=# ALTER TABLE pg_toast.pg_toast_1260 rename to popo;\nERROR: XX000: unexpected relkind: 116\n\nThe comment about OBJECT_* in get_relkind_objtype() is here because\nthere is no need for toast objects to have object address support\n(there is a test in object_address.sql about that), and ObjectTypeMap\nhas no mapping OBJECT_* <-> toast table, so the change proposed is not\ncorrect from this perspective.\n--\nMichael", "msg_date": "Thu, 3 Oct 2019 15:37:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Include RELKIND_TOASTVALUE in get_relkind_objtype" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Oct 01, 2019 at 12:10:50AM +0000, Hsu, John wrote:\n>> get_relkind_objtype(...) was introduced as part of 8b9e9644dc, and it doesn't include\n>> RELKIND_TOASTVALUE. As a result, when a user who has usage rights on schema pg_toast\n>> attempts to reindex a table it does not own, it fails with the wrong error\n>> message.\n\n> (Adding Peter E. in CC)\n\n> Sure. However this implies that the user doing the reindex not only\n> has ownership of the relation worked on, but is also able to work\n> directly on the schema pg_toast. Should we really encourage people to\n> do that with non-superusers?\n\nFWIW, I really dislike this patch, mainly because it is based on the \nassumption (as John said) that get_relkind_objtype is used only\nin aclcheck_error calls. However it's not obvious why that should\nbe true, and there certainly is no documentation suggesting that\nit needs to be true. That's mainly because get_relkind_objtype has no\ndocumentation period, which if you ask me is flat out unacceptable\nfor a globally-exposed function. (Same comment about its wrapper\nget_object_type.)\n\nThe patch also falsifies the comment just a few lines away that\n\n /*\n * other relkinds are not supported here because they don't map to\n * OBJECT_* values\n */\n\nwithout doing anything about that.\n\nI'm inclined to think that we should redefine the charter of\nget_relkind_objtype/get_object_type to be that they'll produce\nsome OBJECT_* value for any relkind whatever, on the grounds\nthat throwing an error here isn't a particularly useful behavior;\nwe'd rather come out with a possibly-slightly-inaccurate generic\nmessage about a \"table\". And they need to be documented that way.\n\nAlternatively, instead of mapping other relkinds to OBJECT_TABLE,\nwe could invent a new enum entry OBJECT_RELATION. There's precedent\nfor that in OBJECT_ROUTINE ... but I don't know that we want to\nbuild out all the other infrastructure for a new ObjectType right now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Oct 2019 09:52:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Include RELKIND_TOASTVALUE in get_relkind_objtype" }, { "msg_contents": "On Thu, Oct 03, 2019 at 09:52:34AM -0400, Tom Lane wrote:\n> FWIW, I really dislike this patch, mainly because it is based on the \n> assumption (as John said) that get_relkind_objtype is used only\n> in aclcheck_error calls. However it's not obvious why that should\n> be true, and there certainly is no documentation suggesting that\n> it needs to be true. That's mainly because get_relkind_objtype has no\n> documentation period, which if you ask me is flat out unacceptable\n> for a globally-exposed function. (Same comment about its wrapper\n> get_object_type.)\n\nYes, I agree that the expectations that the caller of this function\ncan have are hard to guess. So we could tackle this occasion to add\nmore comments. I could try to come up with a better patch. Or\nperhaps you already have something in mind?\n\n> The patch also falsifies the comment just a few lines away that\n> \n> /*\n> * other relkinds are not supported here because they don't map to\n> * OBJECT_* values\n> */\n> \n> without doing anything about that.\n\nThat's actually what I was referring to in my previous email.\n\n> I'm inclined to think that we should redefine the charter of\n> get_relkind_objtype/get_object_type to be that they'll produce\n> some OBJECT_* value for any relkind whatever, on the grounds\n> that throwing an error here isn't a particularly useful behavior;\n> we'd rather come out with a possibly-slightly-inaccurate generic\n> message about a \"table\". And they need to be documented that way.\n\nThis is tempting.\n\n> Alternatively, instead of mapping other relkinds to OBJECT_TABLE,\n> we could invent a new enum entry OBJECT_RELATION. There's precedent\n> for that in OBJECT_ROUTINE ... but I don't know that we want to\n> build out all the other infrastructure for a new ObjectType right now.\n\nI am too lazy to check the thread that led to 8b9e964, but I recall\nthat Peter wanted to get rid of OBJECT_RELATION because that's\nconfusing as that's not a purely exclusive object type, and it mapped\nwith other object types.\n--\nMichael", "msg_date": "Fri, 4 Oct 2019 17:55:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Include RELKIND_TOASTVALUE in get_relkind_objtype" }, { "msg_contents": "On Fri, Oct 04, 2019 at 05:55:40PM +0900, Michael Paquier wrote:\n> On Thu, Oct 03, 2019 at 09:52:34AM -0400, Tom Lane wrote:\n>> FWIW, I really dislike this patch, mainly because it is based on the \n>> assumption (as John said) that get_relkind_objtype is used only\n>> in aclcheck_error calls. However it's not obvious why that should\n>> be true, and there certainly is no documentation suggesting that\n>> it needs to be true. That's mainly because get_relkind_objtype has no\n>> documentation period, which if you ask me is flat out unacceptable\n>> for a globally-exposed function. (Same comment about its wrapper\n>> get_object_type.)\n> \n> Yes, I agree that the expectations that the caller of this function\n> can have are hard to guess. So we could tackle this occasion to add\n> more comments. I could try to come up with a better patch. Or\n> perhaps you already have something in mind?\n\nOkay. Attached is what I was thinking about, with extra regression\ntests to cover the ground for toast tables and indexes that are able\nto reproduce the original failure, and more comments for the routines\nas they should be used only for ACL error messages.\n\nAny thoughts?\n--\nMichael", "msg_date": "Thu, 10 Oct 2019 14:07:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Include RELKIND_TOASTVALUE in get_relkind_objtype" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Okay. Attached is what I was thinking about, with extra regression\n> tests to cover the ground for toast tables and indexes that are able\n> to reproduce the original failure, and more comments for the routines\n> as they should be used only for ACL error messages.\n\nI'd rather do something like the attached, which makes it more of an\nexplicit goal that we won't fail on bad input. (As written, we'd only\nfail on bad classId, which is a case that really shouldn't happen.)\n\nTests are the same as yours, but I revised the commentary and got\nrid of the elog-for-bad-relkind. I also made some cosmetic changes\nin commands/alter.c, so as to (1) make it clear by inspection that\nthose calls are only used to feed aclcheck_error, and (2) avoid\nuselessly computing a value that we won't need in normal non-error\ncases.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 04 Nov 2019 15:31:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Include RELKIND_TOASTVALUE in get_relkind_objtype" }, { "msg_contents": "On Mon, Nov 04, 2019 at 03:31:27PM -0500, Tom Lane wrote:\n> I'd rather do something like the attached, which makes it more of an\n> explicit goal that we won't fail on bad input. (As written, we'd only\n> fail on bad classId, which is a case that really shouldn't happen.)\n\nOkay, that part looks fine.\n\n> Tests are the same as yours, but I revised the commentary and got\n> rid of the elog-for-bad-relkind.\n\nNo objections on that part either.\n\n> I also made some cosmetic changes in commands/alter.c, so as to (1)\n> make it clear by inspection that those calls are only used to feed\n> aclcheck_error, and (2) avoid uselessly computing a value that we\n> won't need in normal non-error cases.\n\nThat also makes sense. Thanks for looking at it!\n--\nMichael", "msg_date": "Tue, 5 Nov 2019 11:29:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Include RELKIND_TOASTVALUE in get_relkind_objtype" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Nov 04, 2019 at 03:31:27PM -0500, Tom Lane wrote:\n>> I'd rather do something like the attached, which makes it more of an\n>> explicit goal that we won't fail on bad input. (As written, we'd only\n>> fail on bad classId, which is a case that really shouldn't happen.)\n\n> Okay, that part looks fine.\n\nPushed like that, then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Nov 2019 13:41:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Include RELKIND_TOASTVALUE in get_relkind_objtype" }, { "msg_contents": "On Tue, Nov 05, 2019 at 01:41:28PM -0500, Tom Lane wrote:\n> Pushed like that, then.\n\nThanks for the commit.\n--\nMichael", "msg_date": "Wed, 6 Nov 2019 13:43:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Include RELKIND_TOASTVALUE in get_relkind_objtype" } ]
[ { "msg_contents": "Hi,\n\n\nWe spend a surprising amount of time during expression evaluation to reevaluate whether input to a strict function (or similar) is not null, even though the value either comes from a strict function, or a column declared not null.\n\nNow you can rightfully say that a strict function still can return NULL, even when called with non-NULL input. But practically that's quite rare. Most of the common byvalue type operators are strict, and approximately none of those return NULL when actually called.\n\nThat makes me wonder if it's worthwhile to invent a function property declaring strict strictness or such. It'd allow for some quite noticeable improvements for e.g. queries aggregating a lot of rows, we spend a fair time checking whether the transition value has \"turned\" not null. I'm about to submit a patch making that less expensive, but it's still expensive.\n\nI can also imagine that being able to propagate NOT NULL further up the parse-analysis tree could be beneficial for planning, but I've not looked at it in any detail.\n\n\nA related issue is that we, during executor initialization, currently \"lose\" information about a column's NOT NULLness just above the lower scan nodes. Efficiency wise that's a substantial loss for many realistic queries: For JITed deforming that basically turns a bunch of mov instructions with constant offsets into much slower attribute by attribute trawling through the tuple. The latter can approximately not take advantage of the superscalar nature of just about any relevant processor. And for non JITed execution an expression step that used a cheaper deforming routine for the cases where only leading not null columns are accessed would also yield significant speedups. This is made worse by the fact that we often do not actually deform at the scan nodes, due to the physical tlist optimization. This is especially bad for nodes storing tuples as minimal tuples (e.g. 
hashjoin, hashagg), where often a very significant fraction of time is spent re-deforming columns that already were deformed earlier.\n\nIt doesn't seem very hard to propagate attnotnull upwards in a good number of the cases. We don't need to do so everywhere for it to be beneficial.\n\nComments?\n\nAndres\n\n\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.", "msg_date": "Tue, 01 Oct 2019 00:38:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Declaring a strict function returns not null / eval speed" }, { "msg_contents": "On 10/1/19 9:38 AM, Andres Freund wrote:\n> We spend a surprising amount of time during expression evaluation to \n> reevaluate whether input to a strict function (or similar) is not null, \n> even though the value either comes from a strict function, or a column \n> declared not null.\n> \n> Now you can rightfully say that a strict function still can return NULL, \n> even when called with non-NULL input. But practically that's quite rare. \n> Most of the common byvalue type operators are strict, and approximately \n> none of those return NULL when actually called.\n> \n> That makes me wonder if it's worthwhile to invent a function property \n> declaring strict strictness or such. It'd allow for some quite noticable \n> improvements for e.g. queries aggregating a lot of rows, we spend a fair \n> time checking whether the transition value has \"turned\" not null. 
I'm \n> about to submit a patch making that less expensive, but it's still \n> expensive.\n> \n> I can also imagine that being able to propagate NOT NULL further up the \n> parse-analysis tree could be beneficial for planning, but I've not \n> looked at it in any detail.\n\nAgreed, this sounds like something useful to do since virtually all \nstrict functions cannot return NULL, especially the ones which are used \nin tight loops. The main design issue seems to be to think up a name for \nthis new level of strictness which is not too confusing for end users.\n\nWe also have a handful of non-strict functions (e.g. concat() and range \nconstructors like tstzrange()) which are guaranteed to never return \nNULL, but I do not think they are many enough or performance critical \nenough to be worth adding this optimization to.\n\nAndreas\n\n\n", "msg_date": "Sun, 20 Oct 2019 13:30:33 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Declaring a strict function returns not null / eval speed" }, { "msg_contents": "Moin,\n\nOn 2019-10-20 13:30, Andreas Karlsson wrote:\n> On 10/1/19 9:38 AM, Andres Freund wrote:\n>> We spend a surprising amount of time during expression evaluation to \n>> reevaluate whether input to a strict function (or similar) is not \n>> null, even though the value either comes from a strict function, or a \n>> column declared not null.\n>> \n>> Now you can rightfully say that a strict function still can return \n>> NULL, even when called with non-NULL input. But practically that's \n>> quite rare. Most of the common byvalue type operators are strict, and \n>> approximately none of those return NULL when actually called.\n>> \n>> That makes me wonder if it's worthwhile to invent a function property \n>> declaring strict strictness or such. It'd allow for some quite \n>> noticable improvements for e.g. 
queries aggregating a lot of rows, we \n>> spend a fair time checking whether the transition value has \"turned\" \n>> not null. I'm about to submit a patch making that less expensive, but \n>> it's still expensive.\n>> \n>> I can also imagine that being able to propagate NOT NULL further up \n>> the parse-analysis tree could be beneficial for planning, but I've not \n>> looked at it in any detail.\n> \n> Agreed, this sounds like something useful to do since virtually all\n> strict functions cannot return NULL, especially the ones which are\n> used in tight loops. The main design issue seems to be to think up a\n> name for this new level of strictness which is not too confusing for\n> end users.\n\nSTRICT NONULL? That way you could do\n\n CREATE FUNCTION f1 ... STRICT;\n CREATE FUNCTION f2 ... STRICT NONULL;\n CREATE FUNCTION f3 ... NONULL;\n\nand the last would throw \"not implemented yet\"? \"NEVER RETURNS NULL\" \nwould also rhyme with the existing \"RETURNS NULL ON NULL INPUT\", but I \nfind the verbosity too high.\n\nBest regards,\n\nTels\n\n-- \nBest regards,\n\nTels\n\n\n", "msg_date": "Sun, 20 Oct 2019 13:48:13 +0200", "msg_from": "Tels <nospam-pg-abuse@bloodgate.com>", "msg_from_op": false, "msg_subject": "Re: Declaring a strict function returns not null / eval speed" }, { "msg_contents": "Tels <nospam-pg-abuse@bloodgate.com> writes:\n> On 2019-10-20 13:30, Andreas Karlsson wrote:\n>> Agreed, this sounds like something useful to do since virtually all\n>> strict functions cannot return NULL, especially the ones which are\n>> used in tight loops. The main design issue seems to be to think up a\n>> name for this new level of strictness which is not too confusing for\n>> end users.\n\n> STRICT NONULL? That way you could do\n\n> CREATE FUNCTION f1 ... STRICT;\n> CREATE FUNCTION f2 ... STRICT NONULL;\n> CREATE FUNCTION f3 ... NONULL;\n\n> and the last wold throw \"not implementet yet\"? 
\"NEVER RETURNS NULL\" \n> would also ryme with the existing \"RETURNS NULL ON NULL INPUT\", but I \n> find the verbosity too high.\n\n\"RETURNS NOT NULL\", perhaps? That'd have the advantage of not requiring\nany new keyword.\n\nI'm a little bit skeptical of the actual value of adding this additional\nlevel of complexity, but I suppose we can't determine that reliably\nwithout doing most of the work :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Oct 2019 10:27:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Declaring a strict function returns not null / eval speed" }, { "msg_contents": "Moin,\n\nOn 2019-10-20 16:27, Tom Lane wrote:\n> Tels <nospam-pg-abuse@bloodgate.com> writes:\n>> On 2019-10-20 13:30, Andreas Karlsson wrote:\n>>> Agreed, this sounds like something useful to do since virtually all\n>>> strict functions cannot return NULL, especially the ones which are\n>>> used in tight loops. The main design issue seems to be to think up a\n>>> name for this new level of strictness which is not too confusing for\n>>> end users.\n> \n>> STRICT NONULL? That way you could do\n> \n>> CREATE FUNCTION f1 ... STRICT;\n>> CREATE FUNCTION f2 ... STRICT NONULL;\n>> CREATE FUNCTION f3 ... NONULL;\n> \n>> and the last wold throw \"not implementet yet\"? \"NEVER RETURNS NULL\"\n>> would also ryme with the existing \"RETURNS NULL ON NULL INPUT\", but I\n>> find the verbosity too high.\n> \n> \"RETURNS NOT NULL\", perhaps? That'd have the advantage of not \n> requiring\n> any new keyword.\n\nHm, yes, that would be a good compromise on verbosity and align even \nbetter the other \"RETURNS ...\" variants.\n\n> I'm a little bit skeptical of the actual value of adding this \n> additional\n> level of complexity, but I suppose we can't determine that reliably\n> without doing most of the work :-(\n\nMaybe it would be possible to simulate the effect somehow? 
Or at least \nwe could try to find practical queries where the additional information \nresults in a much better plan if RETURNS NOT NULL was set.\n\nBest regards,\n\nTels\n\n\n", "msg_date": "Sun, 20 Oct 2019 18:02:02 +0200", "msg_from": "Tels <nospam-pg-abuse@bloodgate.com>", "msg_from_op": false, "msg_subject": "Re: Declaring a strict function returns not null / eval speed" }, { "msg_contents": "Hi,\n\nOn 2019-10-20 10:27:19 -0400, Tom Lane wrote:\n> \"RETURNS NOT NULL\", perhaps? That'd have the advantage of not requiring\n> any new keyword.\n\nThat could work.\n\n\n> I'm a little bit skeptical of the actual value of adding this additional\n> level of complexity, but I suppose we can't determine that reliably\n> without doing most of the work :-(\n\nDepends a bit on what case we're concerned about improving. What brought\nme onto this was the concern that actually a good bit of the overhead of\ncomputing aggregate transition functions is often the checks whether the\ntransition value has become NULL. And that for a lot of the more common\naggregates that's unnecessary, as they'll never do so. That case is\npretty easy to test, we can just stop generating the relevant expression\nstep and do a few micro benchmarks.\n\nObviously for the planner taking advantage of that fact, it's more work...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Oct 2019 08:20:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Declaring a strict function returns not null / eval speed" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-10-20 10:27:19 -0400, Tom Lane wrote:\n>> \"RETURNS NOT NULL\", perhaps? 
That'd have the advantage of not requiring\n>> any new keyword.\n\n> That could work.\n\nActually, I think we probably don't need any SQL representation of this\nat all, because if what you're going to do with it is omit logically\nnecessary null-value checks, then a wrong setting would trivially crash\nthe server. Therefore, we can never give the ability to set this flag\nto users; we could only set it on built-in functions.\n\n(But that saves a lot of work, eg dump/restore support isn't needed\neither.)\n\nThis doesn't seem too awful to me, because non-builtin functions are\nmost likely slow enough that it doesn't matter.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Oct 2019 15:06:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Declaring a strict function returns not null / eval speed" }, { "msg_contents": "Re: Tom Lane 2019-10-22 <821.1571771210@sss.pgh.pa.us>\n> Actually, I think we probably don't need any SQL representation of this\n> at all, because if what you're going to do with it is omit logically\n> necessary null-value checks, then a wrong setting would trivially crash\n> the server. Therefore, we can never give the ability to set this flag\n> to users; we could only set it on built-in functions.\n\nOr require superuser.\n\n> This doesn't seem too awful to me, because non-builtin functions are\n> most likely slow enough that it doesn't matter.\n\nSome years ago, Kohsuke Kawaguchi, the Jenkins author, was giving a\nkeynote at FOSDEM about extensibility of software. The gist I took\naway from it was the tagline \"if core can do something that extensions\ncan't, that's a bug\". 
I think that's something that PostgreSQL should\ntry to live up to as well.\n\nChristoph\n\n\n", "msg_date": "Tue, 22 Oct 2019 21:18:45 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Declaring a strict function returns not null / eval speed" }, { "msg_contents": "Hi,\n\nOn 2019-10-22 15:06:50 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-10-20 10:27:19 -0400, Tom Lane wrote:\n> >> \"RETURNS NOT NULL\", perhaps? That'd have the advantage of not requiring\n> >> any new keyword.\n> \n> > That could work.\n> \n> Actually, I think we probably don't need any SQL representation of this\n> at all, because if what you're going to do with it is omit logically\n> necessary null-value checks, then a wrong setting would trivially crash\n> the server. Therefore, we can never give the ability to set this flag\n> to users; we could only set it on built-in functions.\n\nI assumed we'd allow it plainly for C functions, as there's already\nmyriad ways to break the server. And for anything but C, we probably\nshould check it in the language handler (or some generic code invoking\nthat).\n\nI think it's interesting to have this function property not just for\nperformance, but also semantic reasons. But it's fine to include the\ncheck in the function handler (or some wrapper around those, if we think\nthat's worthwhile), rather than relying on the function to get this\nright.\n\n\n> This doesn't seem too awful to me, because non-builtin functions are\n> most likely slow enough that it doesn't matter.\n\nWith builtin, do you mean just internal functions, or also \"C\"? I think\nit's worthwhile to allow \"C\" directly if benchmarks proves this is\nworthwhile.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Oct 2019 12:43:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Declaring a strict function returns not null / eval speed" } ]
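To make the aggregate-transition cost discussed in this thread concrete, here is a self-contained toy model of a transition loop. None of this is PostgreSQL code — ToyFunc, toy_int8pl and friends are invented stand-ins — but it shows how a hypothetical "returns not null" property would let the caller drop the per-row recheck of whether the transition value has "turned" NULL:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A callable with fmgr-like properties (invented for this sketch). */
typedef struct ToyFunc
{
    int64_t (*fn) (int64_t a, int64_t b, bool *isnull);
    bool        strict;             /* never called with a NULL input */
    bool        returns_not_null;   /* the proposed new property */
} ToyFunc;

/* Counts how often the caller has to re-test the result's nullness. */
static long toy_null_checks = 0;

/* A strict "+" that, like most byvalue operators, never returns NULL. */
static int64_t
toy_int8pl(int64_t a, int64_t b, bool *isnull)
{
    *isnull = false;
    return a + b;
}

/* One transition step of a toy sum() aggregate. */
static int64_t
toy_transition(const ToyFunc *f, int64_t trans, bool *trans_null, int64_t newval)
{
    bool        isnull = false;
    int64_t     result;

    if (f->strict && *trans_null)
    {
        /* strict transition: the first non-NULL input becomes the state */
        *trans_null = false;
        return newval;
    }
    result = f->fn(trans, newval, &isnull);
    if (f->returns_not_null)
        *trans_null = false;        /* no per-row recheck needed */
    else
    {
        toy_null_checks++;          /* the recheck the thread wants to avoid */
        *trans_null = isnull;
    }
    return result;
}

/* Aggregate an array of known-not-null values. */
static int64_t
toy_sum(const ToyFunc *f, const int64_t *vals, int nvals)
{
    int64_t     state = 0;
    bool        state_null = true;
    int         i;

    for (i = 0; i < nvals; i++)
        state = toy_transition(f, state, &state_null, vals[i]);
    return state;
}

static const ToyFunc toy_sum_plain = {toy_int8pl, true, false};
static const ToyFunc toy_sum_nonull = {toy_int8pl, true, true};
static const int64_t toy_vals[3] = {1, 2, 3};
```

With returns_not_null set, toy_sum() takes no per-row nullness branch after the first input; without it, every transition call pays one — a rough analogue of the expression step Andres suggests no longer generating when the flag is known.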
[ { "msg_contents": "Dear Hackers,\n\nI have identified some OSS code which may be able to make use of C99 designated initialisers for nulls/values arrays.\n\n~\n\nBackground:\nThere are lots of tuple operations where arrays of values and flags are being passed.\nTypically these arrays are being previously initialised 0/false by memset.\nBy modifying code to use C99 designated initialiser syntax [1], most of these memsets can become redundant.\nActually, this mechanism is already being used in some of the existing OSS code. This patch/proposal just propagates the same idea to all other similar places I could find.\n\n~\n\nResult:\nLess code. Removes ~200 unnecessary memsets.\nMore consistent initialisation.\n\n~\n\nTypical Example:\nBefore:\n\tDatum\t\tvalues[Natts_pg_attribute];\n\tbool\t\tnulls[Natts_pg_attribute];\n\t...\n\tmemset(values, 0, sizeof(values));\n\tmemset(nulls, false, sizeof(nulls));\nAfter:\n\tDatum\t\tvalues[Natts_pg_attribute] = {0};\n\tbool\t\tnulls[Natts_pg_attribute] = {0};\n\n\n---\n[1] REF C99 [§6.7.8/21] If there are fewer initializers in a brace-enclosed list than there are elements or members of an aggregate, \nor fewer characters in a string literal used to initialize an array of known size than there are elements in the array, \nthe remainder of the aggregate shall be initialized implicitly the same as objects that have static storage duration\n\n~\n\nPlease refer to the attached patch.\n\nKind Regards,\n\n---\nPeter Smith\nFujitsu Australia", "msg_date": "Tue, 1 Oct 2019 07:55:26 +0000", "msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>", "msg_from_op": true, "msg_subject": "Proposal: Make use of C99 designated initialisers for nulls/values\n arrays" }, { "msg_contents": "On Tue, Oct 1, 2019 at 1:25 PM Smith, Peter <peters@fast.au.fujitsu.com> wrote:\n>\n> Dear Hackers,\n>\n> I have identified some OSS code which maybe can make use of C99 designated initialisers for nulls/values arrays.\n>\n> ~\n>\n> Background:\n> There are lots 
of tuple operations where arrays of values and flags are being passed.\n> Typically these arrays are being previously initialised 0/false by memset.\n> By modifying code to use C99 designated initialiser syntax [1], most of these memsets can become redundant.\n> Actually, this mechanism is already being used in some of the existing OSS code. This patch/proposal just propagates the same idea to all other similar places I could find.\n>\n> ~\n>\n> Result:\n> Less code. Removes ~200 unnecessary memsets.\n> More consistent initialisation.\n>\n\n+1. This seems like an improvement. I can review and take this\nforward unless there are objections from others.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Oct 2019 15:42:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "\nOn 10/1/19 6:12 AM, Amit Kapila wrote:\n> On Tue, Oct 1, 2019 at 1:25 PM Smith, Peter <peters@fast.au.fujitsu.com> wrote:\n>> Dear Hackers,\n>>\n>> I have identified some OSS code which maybe can make use of C99 designated initialisers for nulls/values arrays.\n>>\n>> ~\n>>\n>> Background:\n>> There are lots of tuple operations where arrays of values and flags are being passed.\n>> Typically these arrays are being previously initialised 0/false by memset.\n>> By modifying code to use C99 designated initialiser syntax [1], most of these memsets can become redundant.\n>> Actually, this mechanism is already being used in some of the existing OSS code. This patch/proposal just propagates the same idea to all other similar places I could find.\n>>\n>> ~\n>>\n>> Result:\n>> Less code. Removes ~200 unnecessary memsets.\n>> More consistent initialisation.\n>>\n> +1. This seems like an improvement. 
I can review and take this\n> forward unless there are objections from others.\n>\n>\n\n+1.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 1 Oct 2019 08:40:26 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Tue, 1 Oct 2019 at 03:55, Smith, Peter <peters@fast.au.fujitsu.com>\nwrote:\n\n\n> Typical Example:\n> Before:\n>         Datum           values[Natts_pg_attribute];\n>         bool            nulls[Natts_pg_attribute];\n>         ...\n>         memset(values, 0, sizeof(values));\n>         memset(nulls, false, sizeof(nulls));\n> After:\n>         Datum           values[Natts_pg_attribute] = {0};\n>         bool            nulls[Natts_pg_attribute] = {0};\n>\n\nI hope you'll forgive a noob question. Why does the \"After\" initialization\nfor the boolean array have {0} rather than {false}?\n", "msg_date": "Tue, 1 Oct 2019 09:32:17 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Tue, Oct 1, 2019 at 08:40:26AM -0400, Andrew Dunstan wrote:\n> \n> On 10/1/19 6:12 AM, Amit Kapila wrote:\n> > On Tue, Oct 1, 2019 at 1:25 PM Smith, Peter <peters@fast.au.fujitsu.com> wrote:\n> >> Dear Hackers,\n> >>\n> >> I have identified some OSS code which maybe can make use of C99 designated initialisers for nulls/values arrays.\n> >>\n> >> ~\n> >>\n> >> Background:\n> >> There are lots of tuple operations where arrays of values and flags are being passed.\n> >> Typically these arrays are being previously initialised 0/false by memset.\n> >> By modifying code to use C99 designated initialiser syntax [1], most of these memsets can become redundant.\n> >> Actually, this mechanism is already being used in some of the existing OSS code. This patch/proposal just propagates the same idea to all other similar places I could find.\n> >>\n> >> ~\n> >>\n> >> Result:\n> >> Less code. Removes ~200 unnecessary memsets.\n> >> More consistent initialisation.\n> >>\n> > +1. This seems like an improvement. I can review and take this\n> > forward unless there are objections from others.\n> >\n> >\n> \n> +1.\n\nI like it!\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        http://momjian.us\n  EnterpriseDB                             http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 1 Oct 2019 11:57:20 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n>>> On Tue, Oct 1, 2019 at 1:25 PM Smith, Peter <peters@fast.au.fujitsu.com> wrote:\n>>>> There are lots of tuple operations where arrays of values and flags are being passed.\n>>>> Typically these arrays are being previously initialised 0/false by memset.\n>>>> By modifying code to use C99 designated initialiser syntax [1], most of these memsets can become redundant.\n\n> I like it!\n\nFYI, I checked into whether this would result in worse generated code.\nIn the one place I checked (InsertPgAttributeTuple, which hopefully\nis representative), I got *exactly the same* assembly code before\nand after, on both a somewhat-aging gcc and fairly modern clang.\nHadn't quite expected that, but it removes any worries about whether\nwe might be losing anything.\n\nNote though that InsertPgAttributeTuple uses memset(), while some of\nthese other places use MemSet(). The code I see being generated for\nMemSet() is also the same(!) on clang, but it is different and\nprobably worse on gcc. I wonder if it isn't time to kick MemSet to\nthe curb. 
We have not re-evaluated that macro in more than a dozen\nyears, and compilers have surely changed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Oct 2019 12:17:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Hi,\n\nOn 2019-10-01 12:17:08 -0400, Tom Lane wrote:\n> FYI, I checked into whether this would result in worse generated code.\n> In the one place I checked (InsertPgAttributeTuple, which hopefully\n> is representative), I got *exactly the same* assembly code before\n> and after, on both a somewhat-aging gcc and fairly modern clang.\n> Hadn't quite expected that, but it removes any worries about whether\n> we might be losing anything.\n\nI think the only case where it's plausible to be really worse is where\nwe intentionally leave part of such allocations uninitialized - which we\ncan't easily do in these cases because the rest of the struct will also\nget zeroed out. The compiler will probably figure it out in some cases,\nbut there's plenty where it can't. But I don't think there's many\nplaces like that in our code though.\n\n\n> Note though that InsertPgAttributeTuple uses memset(), while some of\n> these other places use MemSet(). The code I see being generated for\n> MemSet() is also the same(!) on clang, but it is different and\n> probably worse on gcc. I wonder if it isn't time to kick MemSet to\n> the curb. 
We have not re-evaluated that macro in more than a dozen\n> years, and compilers have surely changed.\n\nYes, we really should!\n\n- Andres\n\n\n", "msg_date": "Tue, 1 Oct 2019 09:49:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Wed, Oct 2, 2019 at 5:49 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-10-01 12:17:08 -0400, Tom Lane wrote:\n> > Note though that InsertPgAttributeTuple uses memset(), while some of\n> > these other places use MemSet(). The code I see being generated for\n> > MemSet() is also the same(!) on clang, but it is different and\n> > probably worse on gcc. I wonder if it isn't time to kick MemSet to\n> > the curb. We have not re-evaluated that macro in more than a dozen\n> > years, and compilers have surely changed.\n>\n> Yes, we really should!\n\n+1\n\nFWIW I experimented with that over here:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGLfa6ANa0vs7Lf0op0XBH05HE8SyX8NFhDyT7k2CHYLXw%40mail.gmail.com\n\n\n", "msg_date": "Wed, 2 Oct 2019 09:36:56 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "From: Isaac Morland <isaac.morland@gmail.com> Sent: Tuesday, 1 October 2019 11:32 PM\r\n\r\n>Typical Example:\r\n>Before:\r\n>        Datum           values[Natts_pg_attribute];\r\n>        bool            nulls[Natts_pg_attribute];\r\n>        ...\r\n>        memset(values, 0, sizeof(values));\r\n>        memset(nulls, false, sizeof(nulls));\r\n>After:\r\n>        Datum           values[Natts_pg_attribute] = {0};\r\n>        bool            nulls[Natts_pg_attribute] = {0};\r\n>\r\n>I hope you'll forgive a noob question. Why does the \"After\" initialization for the boolean array have {0} rather than {false}? 
\r\n\r\nIt is a valid question. \r\n\r\nI found that the original memsets that this patch replaces were already using 0 and false interchangeably. So I just picked one. \r\nReasons I chose {0} over {false} are: (a) laziness, and (b) consistency with the values[] initialiser.\r\n\r\nBut it is no problem to change the bool initialisers to {false} if that becomes a committer review issue.\r\n\r\nKind Regards\r\n--\r\nPeter Smith\r\nFujitsu Australia\r\n", "msg_date": "Tue, 1 Oct 2019 23:23:00 +0000", "msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Wed, Oct 2, 2019 at 4:53 AM Smith, Peter <peters@fast.au.fujitsu.com>\nwrote:\n\n> From: Isaac Morland <isaac.morland@gmail.com> Sent: Tuesday, 1 October\n> 2019 11:32 PM\n>\n> >Typical Example:\n> >Before:\n> > Datum values[Natts_pg_attribute];\n> > bool nulls[Natts_pg_attribute];\n> > ...\n> > memset(values, 0, sizeof(values));\n> > memset(nulls, false, sizeof(nulls));\n> >After:\n> > Datum values[Natts_pg_attribute] = {0};\n> > bool nulls[Natts_pg_attribute] = {0};\n> >\n> >I hope you'll forgive a noob question. Why does the \"After\"\n> initialization for the boolean array have {0} rather than {false}?\n>\n> It is a valid question.\n>\n> I found that the original memsets that this patch replaces were already\n> using 0 and false interchangeably. So I just picked one.\n> Reasons I chose {0} over {false} are: (a) laziness, and (b) consistency\n> with the values[] initialiser.\n>\n\nIn this case, I think it is better to be consistent in all the places. As\nof now (without patch), we are using 'false' or '0' to initialize the\nboolean array. 
See below two instances from the patch:\n1.\n@@ -607,9 +601,9 @@ UpdateStatisticsForTypeChange(Oid statsOid, Oid\nrelationOid, int attnum,\n\n Relation rel;\n\n- Datum values[Natts_pg_statistic_ext_data];\n- bool nulls[Natts_pg_statistic_ext_data];\n- bool replaces[Natts_pg_statistic_ext_data];\n+ Datum values[Natts_pg_statistic_ext_data] = {0};\n+ bool nulls[Natts_pg_statistic_ext_data] = {0};\n+ bool replaces[Natts_pg_statistic_ext_data] = {0};\n\n oldtup = SearchSysCache1(STATEXTDATASTXOID, ObjectIdGetDatum(statsOid));\n if (!HeapTupleIsValid(oldtup))\n@@ -630,10 +624,6 @@ UpdateStatisticsForTypeChange(Oid statsOid, Oid\nrelationOid, int attnum,\n * OK, we need to reset some statistics. So let's build the new tuple,\n * replacing the affected statistics types with NULL.\n */\n- memset(nulls, 0, Natts_pg_statistic_ext_data * sizeof(bool));\n- memset(replaces, 0, Natts_pg_statistic_ext_data * sizeof(bool));\n- memset(values, 0, Natts_pg_statistic_ext_data * sizeof(Datum));\n\n2.\n@@ -69,10 +69,10 @@ CreateStatistics(CreateStatsStmt *stmt)\n Oid namespaceId;\n Oid stxowner = GetUserId();\n HeapTuple htup;\n- Datum values[Natts_pg_statistic_ext];\n- bool nulls[Natts_pg_statistic_ext];\n- Datum datavalues[Natts_pg_statistic_ext_data];\n- bool datanulls[Natts_pg_statistic_ext_data];\n+ Datum values[Natts_pg_statistic_ext] = {0};\n+ bool nulls[Natts_pg_statistic_ext] = {0};\n+ Datum datavalues[Natts_pg_statistic_ext_data] = {0};\n+ bool datanulls[Natts_pg_statistic_ext_data] = {0};\n int2vector *stxkeys;\n Relation statrel;\n Relation datarel;\n@@ -330,9 +330,6 @@ CreateStatistics(CreateStatsStmt *stmt)\n /*\n * Everything seems fine, so let's build the pg_statistic_ext tuple.\n */\n- memset(values, 0, sizeof(values));\n- memset(nulls, false, sizeof(nulls));\n-\n statoid = GetNewOidWithIndex(statrel, StatisticExtOidIndexId,\n Anum_pg_statistic_ext_oid);\n values[Anum_pg_statistic_ext_oid - 1] = ObjectIdGetDatum(statoid);\n@@ -357,9 +354,6 @@ 
CreateStatistics(CreateStatsStmt *stmt)\n */\n datarel = table_open(StatisticExtDataRelationId, RowExclusiveLock);\n\n- memset(datavalues, 0, sizeof(datavalues));\n- memset(datanulls, false, sizeof(datanulls));\n\nIn the first usage, we are initializing the boolean array with 0 and in the\nsecond case, we are using false. The patch changes it to use 0 at all the\nplaces which I think is better.\n\nI don't have any strong opinion on this, but I would mildly prefer to\ninitialize boolean array with false just for the sake of readability (we\ngenerally initializing booleans with false).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 2 Oct 2019 05:11:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com> Sent: Tuesday, 1 October 2019 8:12 PM\r\n\r\n> +1. This seems like an improvement. 
I can review and take this forward unless there are objections from others.\r\n\r\nFYI - I created a Commitfest entry for this here: https://commitfest.postgresql.org/25/2290/\r\n\r\nKind Regards\r\n--\r\nPeter Smith\r\nFujitsu Australia\r\n\r\n", "msg_date": "Tue, 1 Oct 2019 23:55:22 +0000", "msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com> Sent: Wednesday, 2 October 2019 9:42 AM\r\n\r\n> I don't have any strong opinion on this, but I would mildly prefer to initialize boolean array with false just for the sake of readability (we generally initializing booleans with false).\r\n\r\nDone. Please see attached updated patch.\r\n\r\nKind Regards\r\n--\r\nPeter Smith\r\nFujitsu Australia", "msg_date": "Wed, 2 Oct 2019 07:31:48 +0000", "msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Isaac Morland wrote:\n> I hope you'll forgive a noob question. Why does the \"After\"\n> initialization for the boolean array have {0} rather than {false}?\n\nI think using a value other than {0} potentially gives the incorrect\nimpression that the value is used for *all* elements of the\narray/structure, whereas it is only used for the first element. \"The\nremainder of the aggregate shall be initialized implicitly the same as\nobjects that have static storage duration.\"\n\nThe rest of the elements are being initialized to zero as interpreted by\ntheir types (so NULL for pointers, 0.0 for floats, even though neither\nof them need be bitwise zero). 
Setting the first item to 0 matches that\nexactly.\n\nUsing {false} may encourage the unwary to try\n\n\tbool foo[2] = {true};\n\nwhich will not set all elements to true.\n\n\n", "msg_date": "Wed, 2 Oct 2019 10:34:20 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Joe Nelson <joe@begriffs.com> writes:\n> Isaac Morland wrote:\n>> I hope you'll forgive a noob question. Why does the \"After\"\n>> initialization for the boolean array have {0} rather than {false}?\n\n> I think using a value other than {0} potentially gives the incorrect\n> impression that the value is used for *all* elements of the\n> array/structure, whereas it is only used for the first element.\n\nThere's been something vaguely bothering me about this proposal,\nand I think you just crystallized it.\n\n> Using {false} may encourage the unwary to try\n> \tbool foo[2] = {true};\n> which will not set all elements to true.\n\nRight. I think that in general it's bad practice for an initializer\nto not specify all fields/elements of the target. It is okay in the\nspecific case that we're substituting for a memset(..., 0, ...).\nPerhaps we could make this explicit by using a coding style like\n\n/* in c.h or some such place: */\n#define INIT_ALL_ZEROES {0}\n\n/* in code: */\n\tDatum values[N] = INIT_ALL_ZEROES;\n\nand then decreeing that it's not project style to use a partial\ninitializer other than in this way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Oct 2019 11:46:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Wed, 2 Oct 2019 at 11:34, Joe Nelson <joe@begriffs.com> wrote:\n\n> Isaac Morland wrote:\n> > I hope you'll forgive a noob question. 
Why does the \"After\"\n> > initialization for the boolean array have {0} rather than {false}?\n>\n> I think using a value other than {0} potentially gives the incorrect\n> impression that the value is used for *all* elements of the\n> array/structure, whereas it is only used for the first element. \"The\n> remainder of the aggregate shall be initialized implicitly the same as\n> objects that have static storage duration.\"\n>\n> The rest of the elements are being initialized to zero as interpreted by\n> their types (so NULL for pointers, 0.0 for floats, even though neither\n> of them need be bitwise zero). Setting the first item to 0 matches that\n> exactly.\n>\n> Using {false} may encourage the unwary to try\n>\n> bool foo[2] = {true};\n>\n> which will not set all elements to true.\n>\n\nThanks for the explanation. So the first however many elements are in curly\nbraces get initialized to those values, then the rest get initialized to\nblank/0/0.0/false/...?\n\nIf so, I don't suppose it's possible to give empty braces:\n\nbool nulls[Natts_pg_attribute] = {};\n", "msg_date": "Wed, 2 Oct 2019 12:02:54 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "> If so, I don't suppose it's possible to give empty braces:\n> \n> bool nulls[Natts_pg_attribute] = {};\n\nGNU does add this capability as a nonstandard language extension, but\naccording to the C99 standard, no.\n\n\n", "msg_date": "Wed, 2 Oct 2019 11:22:33 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "\n\nOn 10/2/19 8:46 AM, Tom Lane wrote:\n> Joe Nelson <joe@begriffs.com> writes:\n>> Isaac Morland wrote:\n>>> I hope you'll forgive a noob question. Why does the \"After\"\n>>> initialization for the boolean array have {0} rather than {false}?\n> \n>> I think using a value other than {0} potentially gives the incorrect\n>> impression that the value is used for *all* elements of the\n>> array/structure, whereas it is only used for the first element.\n> \n> There's been something vaguely bothering me about this proposal,\n> and I think you just crystallized it.\n> \n>> Using {false} may encourage the unwary to try\n>> \tbool foo[2] = {true};\n>> which will not set all elements to true.\n> \n> Right. I think that in general it's bad practice for an initializer\n> to not specify all fields/elements of the target. 
It is okay in the\n> specific case that we're substituting for a memset(..., 0, ...).\n> Perhaps we could make this explicit by using a coding style like\n> \n> /* in c.h or some such place: */\n> #define INIT_ALL_ZEROES {0}\n> \n> /* in code: */\n> \tDatum values[N] = INIT_ALL_ZEROES;\n> \n> and then decreeing that it's not project style to use a partial\n> initializer other than in this way.\n\nThere are numerous locations in the code that raise warnings when\n-Wmissing-field-initializers is handed to gcc. See, for example, \nsrc/backend/utils/adt/formatting.c where\n\n static const KeyWord NUM_keywords[]\n\nis initialized, and the code comment above that disclaims the need to \ninitialize is_digit and date_mode. Are you proposing cleaning up all \nsuch incomplete initializations within the project?\n\nI understand that your INIT_ALL_ZEROS macro does nothing to change\nwhether -Wmissing-field-initializers would raise a warning. I'm\njust asking about the decree you propose, and I used that warning flag \nto get the compiler to spit out relevant examples.\n\nmark\n\n\n\n", "msg_date": "Wed, 2 Oct 2019 10:36:14 -0700", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On 10/2/19 8:46 AM, Tom Lane wrote:\n>> Right. I think that in general it's bad practice for an initializer\n>> to not specify all fields/elements of the target.\n\n> There are numerous locations in the code that raise warnings when\n> -Wmissing-field-initializers is handed to gcc. See, for example, \n> src/backend/utils/adt/formatting.c where\n> static const KeyWord NUM_keywords[]\n> is initialized, and the code comment above that disclaims the need to \n> initialize is_digit and date_mode. Are you proposing cleaning up all \n> such incomplete initializations within the project?\n\nHmm. 
Maybe it's worth doing as a code beautification effort, but\nI'm not volunteering. At the same time, I wouldn't like to make a\nchange like this, if it introduces dozens/hundreds of new cases.\n\n> I understand that your INIT_ALL_ZEROS macro does nothing to change\n> whether -Wmissing-field-initializers would raise a warning.\n\nNot sure --- the name of that option suggests that maybe it only\ncomplains about omitted *struct fields* not omitted *array elements*.\n\nIf it does complain, is there any way that we could extend the macro\nto annotate usages of it to suppress the warning?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Oct 2019 14:02:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "\n\nOn 10/2/19 11:02 AM, Tom Lane wrote:\n> Mark Dilger <hornschnorter@gmail.com> writes:\n>> On 10/2/19 8:46 AM, Tom Lane wrote:\n>>> Right. I think that in general it's bad practice for an initializer\n>>> to not specify all fields/elements of the target.\n> \n>> There are numerous locations in the code that raise warnings when\n>> -Wmissing-field-initializers is handed to gcc. See, for example,\n>> src/backend/utils/adt/formatting.c where\n>> static const KeyWord NUM_keywords[]\n>> is initialized, and the code comment above that disclaims the need to\n>> initialize is_digit and date_mode. Are you proposing cleaning up all\n>> such incomplete initializations within the project?\n> \n> Hmm. Maybe it's worth doing as a code beautification effort, but\n> I'm not volunteering. 
At the same time, I wouldn't like to make a\n> change like this, if it introduces dozens/hundreds of new cases.\n> \n>> I understand that your INIT_ALL_ZEROS macro does nothing to change\n>> whether -Wmissing-field-initializers would raise a warning.\n> \n> Not sure --- the name of that option suggests that maybe it only\n> complains about omitted *struct fields* not omitted *array elements*.\n\n\nWith gcc (Debian 8.3.0-6) 8.3.0\n\n int foo[6] = {0, 1, 2};\n\ndoes not draw a warning when compiled with this flag.\n\n> If it does complain, is there any way that we could extend the macro\n> to annotate usages of it to suppress the warning?\n\nNeither initializing a struct with {0} nor with INIT_ALL_ZEROS draws a \nwarning either, with my gcc. There are reports online that older \nversions of the compiler did, see\n\n https://gcc.gnu.org/bugzilla/show_bug.cgi?id=36750\n\nbut I don't have an older version to test with just now.\n\nNote that initializing a multi-element struct with {1} does still draw a \nwarning, and reading the thread above suggests that gcc made a specific \neffort to allow initialization to {0} to work without warning as a \nspecial case.\n\nSo your proposal for using INIT_ALL_ZEROS is probably good with \nsufficiently new compilers, and I'm generally in favor of the proposal, \nbut I don't think the decree you propose can work unless somebody cleans \nup all these other cases that I indicated in my prior email.\n\n(I'm sitting on a few patches until v12 goes out the door from some \nconversations with you several months ago, and perhaps I'll include a \npatch for this cleanup, too, when time comes for v13 patch sets to be \nsubmitted. 
My past experience submitting patches shortly before a\nrelease was that they get ignored.)\n\nmark\n\n\n\n\n\n\n", "msg_date": "Wed, 2 Oct 2019 11:39:22 -0700", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> (I'm sitting on a few patches until v12 goes out the door from some \n> conversations with you several months ago, and perhaps I'll include a \n> patch for this cleanup, too, when time comes for v13 patch sets to be \n> submitted.\n\nThat would be now. We already ran one CF for v13.\n\n> My past experience submitting patches shortly before a\n> release was that they get ignored.)\n\nWhat you need to do is add 'em to the commitfest app. They might\nstill get ignored for awhile, but we won't forget about them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Oct 2019 14:55:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Wed, Oct 02, 2019 at 02:55:39PM -0400, Tom Lane wrote:\n> Mark Dilger <hornschnorter@gmail.com> writes:\n>> (I'm sitting on a few patches until v12 goes out the door from some \n>> conversations with you several months ago, and perhaps I'll include a \n>> patch for this cleanup, too, when time comes for v13 patch sets to be \n>> submitted.\n> \n> That would be now. We already ran one CF for v13.\n\n+1.\n\n>> My past experience submitting patches shortly before a\n>> release was that they get ignored.)\n> \n> What you need to do is add 'em to the commitfest app. 
They might\n> still get ignored for awhile, but we won't forget about them.\n\nThe last commit fest of v13 will begin in March, and the next one is\nplanned for the beginning of November:\nhttps://commitfest.postgresql.org/25/\nSo you still have plenty of time to get something into 13.\n\nHere are also some guidelines:\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\nBut you are already aware of them anyway, right? :p\n--\nMichael", "msg_date": "Thu, 3 Oct 2019 11:08:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> Sent: Thursday, 3 October 2019 1:46 AM\n\n> Right. I think that in general it's bad practice for an initializer to not specify all fields/elements of the target.\n> It is okay in the specific case that we're substituting for a memset(..., 0, ...).\n> Perhaps we could make this explicit by using a coding style like\n>\n>/* in c.h or some such place: */\n>#define INIT_ALL_ZEROES {0}\n>\n>/* in code: */\n>\tDatum values[N] = INIT_ALL_ZEROES;\n\nThe patch has been updated per your suggestion. Now using macros for these partial initialisers.\n\nPlease see attachment.\n\nKind Regards\n---\nPeter Smith\nFujitsu Australia", "msg_date": "Thu, 3 Oct 2019 06:16:44 +0000", "msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Wed, Oct 2, 2019 at 9:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Joe Nelson <joe@begriffs.com> writes:\n> > Isaac Morland wrote:\n> > > I hope you'll forgive a noob question. 
Why does the \"After\"\n> >> initialization for the boolean array have {0} rather than {false}?\n>\n> > I think using a value other than {0} potentially gives the incorrect\n> > impression that the value is used for *all* elements of the\n> > array/structure, whereas it is only used for the first element.\n>\n> There's been something vaguely bothering me about this proposal,\n> and I think you just crystallized it.\n>\n> > Using {false} may encourage the unwary to try\n> > bool foo[2] = {true};\n> > which will not set all elements to true.\n>\n> Right. I think that in general it's bad practice for an initializer\n> to not specify all fields/elements of the target. It is okay in the\n> specific case that we're substituting for a memset(..., 0, ...).\n> Perhaps we could make this explicit by using a coding style like\n>\n> /* in c.h or some such place: */\n> #define INIT_ALL_ZEROES {0}\n>\n> /* in code: */\n> Datum values[N] = INIT_ALL_ZEROES;\n>\n\nThis is a good idea, but by reading the thread it is not completely clear\nif we want to pursue this or want to explore something else or leave the\ncurrent code as it is. Also, if we want to pursue, do we want to\nuse INIT_ALL_ZEROES for bool arrays as well?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 4 Oct 2019 09:01:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com> Sent: Friday, 4 October 2019 1:32 PM\r\n\r\n> Also, if we want to pursue, do we want to use INIT_ALL_ZEROES for bool arrays as well?\r\n\r\nFYI - In case it went unnoticed - my last patch addresses this by defining 2 macros:\r\n\r\n#define INIT_ALL_ELEMS_ZERO\t{0}\r\n#define INIT_ALL_ELEMS_FALSE\t{false}\r\n\r\nKind Regards\r\n--\r\nPeter Smith\r\nFujitsu Australia\r\n", "msg_date": "Fri, 4 Oct 2019 03:51:48 +0000", "msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "\"Smith, Peter\" <peters@fast.au.fujitsu.com> writes:\n> From: Amit Kapila <amit.kapila16@gmail.com> Sent: Friday, 4 October 2019 1:32 PM\n>> Also, if we want to pursue, do we want to use INIT_ALL_ZEROES for bool arrays as well?\n\n> FYI - In case it went unnoticed - my last patch addresses this by defining 2 macros:\n\n> #define INIT_ALL_ELEMS_ZERO\t{0}\n> #define INIT_ALL_ELEMS_FALSE\t{false}\n\nI would say that's 100% wrong. The entire point here is that it's\nmemset-equivalent, and therefore writes zeroes, regardless of what\nthe datatype is. As a counterexample, the coding you have above\nlooks a lot like it would work to add\n\n#define INIT_ALL_ELEMS_TRUE\t{true}\n\nwhich as previously noted will *not* work. 
So I think the\none-size-fits-all approach is what to use.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Oct 2019 00:08:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us> Sent: Friday, 4 October 2019 2:08 PM\r\n\r\n>> #define INIT_ALL_ELEMS_ZERO\t{0}\r\n>> #define INIT_ALL_ELEMS_FALSE\t{false}\r\n\r\n>I would say that's 100% wrong. The entire point here is that it's memset-equivalent, and therefore writes zeroes, regardless of what the datatype is. \r\n\r\nI agree it is memset-equivalent.\r\n\r\nAll examples of the memset code that INIT_ALL_ELEMS_ZERO replaces looked like this:\r\nmemset(values, 0, sizeof(values));\r\n\r\nMost examples of the memset code that INIT_ALL_ELEMS_FALSE replaces looked like this:\r\nmemset(nulls, false, sizeof(nulls));\r\n\r\n~\r\n\r\nI made the 2nd macro because I anticipate the same folk that don't like setting 0 to a bool will also not like setting something called INIT_ALL_ELEMS_ZERO to a bool array.\r\n\r\nHow about I just define them both the same?\r\n#define INIT_ALL_ELEMS_ZERO\t{0}\r\n#define INIT_ALL_ELEMS_FALSE\t{0}\r\n\r\n>As a counterexample, the coding you have above looks a lot like it would work to add\r\n>\r\n>#define INIT_ALL_ELEMS_TRUE\t{true}\r\n> which as previously noted will *not* work. 
So I think the one-size-fits-all approach is what to use.\r\n\r\nI agree it looks that way; in my previous email I should have provided more context to the code.\r\nBelow is the full fragment of the last shared patch, which included a note to prevent anybody from doing such a thing.\r\n\r\n~~\r\n\r\n/*\r\n * Macros for C99 designated-initialiser syntax to set all array elements to 0/false.\r\n *\r\n * Use these macros in preference to explicit {0} syntax to avoid giving a misleading\r\n * impression that the same value is always used for all elements.\r\n * e.g.\r\n * bool foo[2] = {false}; // sets both elements false\r\n * bool foo[2] = {true}; // does NOT set both elements true\r\n *\r\n * Reference: C99 [$6.7.8/21] If there are fewer initializers in a brace-enclosed list than there\r\n * are elements or members of an aggregate, or fewer characters in a string literal used to\r\n * initialize an array of known size than there are elements in the array, the remainder of the\r\n * aggregate shall be initialized implicitly the same as objects that have static storage duration\r\n */\r\n#define INIT_ALL_ELEMS_ZERO\t{0}\r\n#define INIT_ALL_ELEMS_FALSE\t{false} \r\n\r\n~~\r\n\r\n\r\nKind Regards,\r\n--\r\nPeter Smith\r\nFujitsu Australia\r\n", "msg_date": "Fri, 4 Oct 2019 06:39:57 +0000", "msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Fri, Oct 4, 2019 at 12:10 PM Smith, Peter <peters@fast.au.fujitsu.com>\nwrote:\n\n> From: Tom Lane <tgl@sss.pgh.pa.us> Sent: Friday, 4 October 2019 2:08 PM\n>\n> >> #define INIT_ALL_ELEMS_ZERO {0}\n> >> #define INIT_ALL_ELEMS_FALSE {false}\n>\n> >I would say that's 100% wrong. 
The entire point here is that it's\n> memset-equivalent, and therefore writes zeroes, regardless of what the\n> datatype is.\n>\n> I agree it is memset-equivalent.\n>\n> All examples of the memset code that INIT_ALL_ELEMS_ZERO replaces looked\n> like this:\n> memset(values, 0, sizeof(values));\n>\n> Most examples of the memset code that INIT_ALL_ELEMS_FALSE replaces looked\n> like this:\n> memset(nulls, false, sizeof(nulls));\n>\n> ~\n>\n> I made the 2nd macro because I anticipate the same folk that don't like\n> setting 0 to a bool will also not like setting something called\n> INIT_ALL_ELEMS_ZERO to a bool array.\n>\n> How about I just define them both the same?\n> #define INIT_ALL_ELEMS_ZERO {0}\n> #define INIT_ALL_ELEMS_FALSE {0}\n>\n>\nI think using one define would be preferred, but you can wait and see if\nothers prefer defining different macros for the same thing.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 4 Oct 2019 12:20:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Amit Kapila wrote:\n> > How about I just define them both the same?\n> > #define INIT_ALL_ELEMS_ZERO {0}\n> > #define INIT_ALL_ELEMS_FALSE {0}\n>\n> I think using one define would be preferred, but you can wait and see\n> if others prefer defining different macros for the same thing.\n\n+1 on using INIT_ALL_ELEMS_ZERO everywhere, even for bool[]. You may\nworry, \"Should we really be assigning 0 to something of type bool? Is\nthat the wrong type or a detail that varies by implementation?\" It's\nsafe though, the behavior is guaranteed to be correct by section 7.16 of\nthe C99 spec, which says that bool, true, and false are always macros\nfor _Bool, 1, and 0 respectively.\n\nOne might argue that INIT_ALL_ELEMS_FALSE as a synonym for\nINIT_ALL_ELEMS_ZERO is good for readability in the same way that \"false\"\nis for 0. 
However I want to avoid creating the impression that there is,\nor can be, a collection of INIT_ALL_ELEMS_xxx macros invoking different\ninitializer behavior.\n\n\n", "msg_date": "Fri, 4 Oct 2019 09:28:37 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Joe Nelson <joe@begriffs.com> writes:\n> One might argue that INIT_ALL_ELEMS_FALSE as a synonym for\n> INIT_ALL_ELEMS_ZERO is good for readability in the same way that \"false\"\n> is for 0. However I want to avoid creating the impression that there is,\n> or can be, a collection of INIT_ALL_ELEMS_xxx macros invoking different\n> initializer behavior.\n\nI concur with Joe here. The reason why some of the existing\nmemset's use \"false\" is for symmetry with other places where we use\n\"memset(p, true, n)\" to set an array of bools to all-true. That\ncoding is unfortunately a bit dubious --- it would sort-of fail if\nbool weren't of width 1, in that the bools would still test as true\nbut they wouldn't contain the standard bit pattern for true.\nI don't want to change those places, but we shouldn't make the\nmechanism proposed by this patch look like it can do anything but\ninitialize to zeroes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Oct 2019 10:51:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Fri, Oct 4, 2019 at 7:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I concur with Joe here. The reason why some of the existing\n> memset's use \"false\" is for symmetry with other places where we use\n> \"memset(p, true, n)\" to set an array of bools to all-true. 
That\n> coding is unfortunately a bit dubious --- it would sort-of fail if\n> bool weren't of width 1, in that the bools would still test as true\n> but they wouldn't contain the standard bit pattern for true.\n> I don't want to change those places, but we shouldn't make the\n> mechanism proposed by this patch look like it can do anything but\n> initialize to zeroes.\n>\n> regards, tom lane\n\nWhy introduce a macro at all for the universal zero initializer, if it\nseems to encourage the construction of other (incorrect) macros? IMO\nthe use of {0} as an initializer is well understood in the C developer\ncommunity, and I'm used to it showing up verbatim in code. Similar to\n{}'s role as the C++ universal initializer.\n\n--Jacob\n\n\n", "msg_date": "Fri, 4 Oct 2019 08:30:21 -0700", "msg_from": "Jacob Champion <pchampion@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Jacob Champion <pchampion@pivotal.io> writes:\n> On Fri, Oct 4, 2019 at 7:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I concur with Joe here. The reason why some of the existing\n>> memset's use \"false\" is for symmetry with other places where we use\n>> \"memset(p, true, n)\" to set an array of bools to all-true.\n\n> Why introduce a macro at all for the universal zero initializer, if it\n> seems to encourage the construction of other (incorrect) macros?\n\nWell, the argument is that some people might think that if {0} is enough\nto set all array elements to 0, then maybe {1} sets them all to ones\n(as, indeed, one could argue would be a far better specification than\nwhat the C committee actually wrote). 
Using a separate macro and then\ndiscouraging direct use of the incomplete-initializer syntax should help\nto avoid that error.\n\n> IMO\n> the use of {0} as an initializer is well understood in the C developer\n> community, and I'm used to it showing up verbatim in code.\n\nYeah, if we were all 100% familiar with every sentence in the C standard,\nwe could argue like that. But we get lots of submissions from people\nfor whom C is not their main language. The fewer gotchas there are in\nour agreed-on subset of C, the better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Oct 2019 11:49:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Fri, Oct 4, 2019 at 8:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jacob Champion <pchampion@pivotal.io> writes:\n> > On Fri, Oct 4, 2019 at 7:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I concur with Joe here. The reason why some of the existing\n> >> memset's use \"false\" is for symmetry with other places where we use\n> >> \"memset(p, true, n)\" to set an array of bools to all-true.\n>\n> > Why introduce a macro at all for the universal zero initializer, if it\n> > seems to encourage the construction of other (incorrect) macros?\n>\n> Well, the argument is that some people might think that if {0} is enough\n> to set all array elements to 0, then maybe {1} sets them all to ones\n> (as, indeed, one could argue would be a far better specification than\n> what the C committee actually wrote). Using a separate macro and then\n> discouraging direct use of the incomplete-initializer syntax should help\n> to avoid that error.\n>\n\nSeems avoidable overhead to remind folks on macro existence. Plus, for such\na thing macro exist in first place will be hard to remember. 
So,\nirrespective in long run, {0} might get used in code and hence seems better\nto just use {0} from start itself instead of macro/wrapper on top.\n\nPlus, even if someone starts out with thought {1} sets them all to ones, I\nfeel will soon realize by exercising the code isn't the reality. If such\ncode is written and nothing fails, that itself seems bigger issue.", "msg_date": "Fri, 4 Oct 2019 10:44:13 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On 10/4/19 1:44 PM, Ashwin Agrawal wrote:\n\n> macro exist in first place will be hard to remember. So, irrespective\n> in long run, {0} might get used in code and hence seems better\n> to just use {0} from start itself instead of macro/wrapper on top.\n> \n> Plus, even if someone starts out with thought {1} sets them all to ones,\n> I feel will soon realize by exercising the code isn't the reality.\n\nI wish ISO C had gone the same place gcc (and C++ ?) went, and allowed\nthe initializer {}, which would eliminate any chance of it misleading\na casual reader.\n\nIf that were the case, I would be +1 on just using the {} syntax.\n\nBut given that the standard is stuck on requiring a first element,\nI am +1 on using the macro, just to avoid giving any wrong impressions,\neven fleeting ones.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 4 Oct 2019 14:05:41 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Fri, Oct 4, 2019 at 02:05:41PM -0400, Chapman Flack wrote:\n> On 10/4/19 1:44 PM, Ashwin Agrawal wrote:\n> \n> > macro exist in first place will be hard to remember. So, irrespective\n> > in long run, {0} might get used in code and hence seems better\n> > to just use {0} from start itself instead of macro/wrapper on top.\n> > \n> > Plus, even if someone starts out with thought {1} sets them all to ones,\n> > I feel will soon realize by exercising the code isn't the reality.\n> \n> I wish ISO C had gone the same place gcc (and C++ ?) 
went, and allowed\n> the initializer {}, which would eliminate any chance of it misleading\n> a casual reader.\n> \n> If that were the case, I would be +1 on just using the {} syntax.\n> \n> But given that the standard is stuck on requiring a first element,\n> I am +1 on using the macro, just to avoid giving any wrong impressions,\n> even fleeting ones.\n\nYeah, it is certainly weird that you have to assign the first array\nelement to get the rest to be zeros. By using a macro, we can document\nthis behavior in one place.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 4 Oct 2019 16:31:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Hi,\n\nOn 2019-10-04 16:31:29 -0400, Bruce Momjian wrote:\n> On Fri, Oct 4, 2019 at 02:05:41PM -0400, Chapman Flack wrote:\n> > On 10/4/19 1:44 PM, Ashwin Agrawal wrote:\n> > \n> > > macro exist in first place will be hard to remember. So, irrespective\n> > > in long run, {0} might get used in code and hence seems better\n> > > to just use {0} from start itself instead of macro/wrapper on top.\n\nIt already is in somewhat frequent use, fwiw.\n\n\n> > > Plus, even if someone starts out with thought {1} sets them all to ones,\n> > > I feel will soon realize by exercising the code isn't the reality.\n> > \n> > I wish ISO C had gone the same place gcc (and C++ ?) 
went, and allowed\n> > the initializer {}, which would eliminate any chance of it misleading\n> > a casual reader.\n> > \n> > If that were the case, I would be +1 on just using the {} syntax.\n> > \n> > But given that the standard is stuck on requiring a first element,\n> > I am +1 on using the macro, just to avoid giving any wrong impressions,\n> > even fleeting ones.\n> \n> Yeah, it is certainly weird that you have to assign the first array\n> element to get the rest to be zeros. By using a macro, we can document\n> this behavior in one place.\n\nIDK, to me this seems like something one just has to learn about C, with\nthe macro just obfuscating that already required knowledge. It's not\nlike this only applies to stack variables initializes with {0}. It's\nalso true of global variables, or function-local static ones, for\nexample.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 4 Oct 2019 13:43:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-10-04 16:31:29 -0400, Bruce Momjian wrote:\n>> Yeah, it is certainly weird that you have to assign the first array\n>> element to get the rest to be zeros. By using a macro, we can document\n>> this behavior in one place.\n\n> IDK, to me this seems like something one just has to learn about C, with\n> the macro just obfuscating that already required knowledge. It's not\n> like this only applies to stack variables initializes with {0}. It's\n> also true of global variables, or function-local static ones, for\n> example.\n\nHuh? For both those cases, the *default* behavior, going back to K&R C,\nis that the variable initializes to all-bits-zero. There's no need to\nwrite anything extra. 
If some people are writing {0} there, I think\nwe should discourage that on the grounds that it results in inconsistent\ncoding style.\n\nNote that I'm not proposing a rule against, say,\n\nstatic MyNodeType *my_variable = NULL;\n\nThat's perfectly sensible and adds no cognitive load that I can see.\nBut in cases where you have to indulge in type punning or reliance on\nobscure language features to get the result that would happen if you'd\njust not written anything, I think you should just not write anything.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Oct 2019 17:08:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Hi,\n\nOn 2019-10-04 17:08:29 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-10-04 16:31:29 -0400, Bruce Momjian wrote:\n> >> Yeah, it is certainly weird that you have to assign the first array\n> >> element to get the rest to be zeros. By using a macro, we can document\n> >> this behavior in one place.\n> \n> > IDK, to me this seems like something one just has to learn about C, with\n> > the macro just obfuscating that already required knowledge. It's not\n> > like this only applies to stack variables initializes with {0}. It's\n> > also true of global variables, or function-local static ones, for\n> > example.\n> \n> Huh? For both those cases, the *default* behavior, going back to K&R C,\n> is that the variable initializes to all-bits-zero. There's no need to\n> write anything extra.\n\nWhat I mean is that if there's any initialization, it's to all zeroes,\nexcept for the parts explicitly initialized explicitly. And all that the\n{0} does, is that the rest of the fields are initialized the way other\nsuch initialization happens.\n\nThere's plenty places where we don't initialize every part, e.g. 
a\nstruct member, of global variables (and for stack allocated data as\nwell, increasingly so). To be able to make sense of things like\nsomevar = {.foo = bar /* field blub is not initialized */};\nor things like guc.c - where we rely on zero initialize most fields of\nconfig_generic.\n\n\n> If some people are writing {0} there, I think\n> we should discourage that on the grounds that it results in inconsistent\n> coding style.\n\nYea, I'm not advocating that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 4 Oct 2019 14:40:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com> Sent: Friday, 4 October 2019 4:50 PM\n\n>>How about I just define them both the same?\n>>#define INIT_ALL_ELEMS_ZERO     {0}\n>>#define INIT_ALL_ELEMS_FALSE    {0}\n>\n>I think using one define would be preferred, but you can wait and see if others prefer defining different macros for the same thing.\n\nWhile nowhere near unanimous, it seems majority favour using a macro (if only to protect the unwary and document the behaviour).\nAnd of those in favour of macros, using INIT_ALL_ELEMS_ZERO even for bool array is a clear preference.\n\nSo, please find attached the updated patch, which now has just 1 macro.\n\nKind Regards\n--\nPeter Smith\nFujitsu Australia", "msg_date": "Mon, 7 Oct 2019 23:13:28 +0000", "msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Sat, Oct 5, 2019 at 3:10 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-10-04 17:08:29 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2019-10-04 16:31:29 -0400, Bruce Momjian wrote:\n> > >> Yeah, it is certainly weird 
that you have to assign the first array\n> > >> element to get the rest to be zeros. By using a macro, we can document\n> > >> this behavior in one place.\n> >\n> > > IDK, to me this seems like something one just has to learn about C, with\n> > > the macro just obfuscating that already required knowledge. It's not\n> > > like this only applies to stack variables initializes with {0}. It's\n> > > also true of global variables, or function-local static ones, for\n> > > example.\n> >\n> > Huh? For both those cases, the *default* behavior, going back to K&R C,\n> > is that the variable initializes to all-bits-zero. There's no need to\n> > write anything extra.\n>\n> What I mean is that if there's any initialization, it's to all zeroes,\n> except for the parts explicitly initialized explicitly. And all that the\n> {0} does, is that the rest of the fields are initialized the way other\n> such initialization happens.\n>\n\nYou have a point and I think over time everyone will know this.\nHowever, so many people advocating for having a macro with a comment\nto be more explicit about this behavior shows that this is not equally\nobvious to everyone or at least they think that it will help future\npatch authors.\n\nNow, I think as the usage ({0}) already exists in the code, so I think\nif we decide to use a macro, then ideally those places should also be\nchanged. I am not telling that it must be done in the same patch, we\ncan even do it as a separate patch.\n\nI am personally still in the camp of people advocating the use of\nmacro for this purpose. 
It is quite possible after reading your\npoints, some people might change their opinion or some others also\nshare their opinion against using a macro in which case we can drop\nthe idea of using a macro.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Oct 2019 15:39:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Tue, Oct 8, 2019 at 4:43 AM Smith, Peter <peters@fast.au.fujitsu.com> wrote:\n>\n> From: Amit Kapila <amit.kapila16@gmail.com> Sent: Friday, 4 October 2019 4:50 PM\n>\n> >>How about I just define them both the same?\n> >>#define INIT_ALL_ELEMS_ZERO {0}\n> >>#define INIT_ALL_ELEMS_FALSE {0}\n> >\n> >I think using one define would be preferred, but you can wait and see if others prefer defining different macros for the same thing.\n>\n> While nowhere near unanimous, it seems majority favour using a macro (if only to protect the unwary and document the behaviour).\n> And of those in favour of macros, using INIT_ALL_ELEMS_ZERO even for bool array is a clear preference.\n>\n> So, please find attached the updated patch, which now has just 1 macro.\nFew thoughts on the patch:\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -770,8 +770,8 @@ pg_prepared_xact(PG_FUNCTION_ARGS)\n GlobalTransaction gxact = &status->array[status->currIdx++];\n PGPROC *proc = &ProcGlobal->allProcs[gxact->pgprocno];\n PGXACT *pgxact = &ProcGlobal->allPgXact[gxact->pgprocno];\n- Datum values[5];\n- bool nulls[5];\n+ Datum values[5] = INIT_ALL_ELEMS_ZERO;\n+ bool nulls[5] = INIT_ALL_ELEMS_ZERO;\n HeapTuple tuple;\n Datum result;\nInitialisation may not be required here as all the members are getting\npopulated immediately\n\n@@ -1314,9 +1314,6 @@ SetDefaultACL(InternalDefaultACL *iacls)\n Oid defAclOid;\n\n /* Prepare to insert 
or update pg_default_acl entry */\n- MemSet(values, 0, sizeof(values));\n- MemSet(nulls, false, sizeof(nulls));\n- MemSet(replaces, false, sizeof(replaces));\n\n if (isNew)\n We can place the comment just before the next block of code for\nbetter readability like you have done in other places.\n\n\n@@ -2024,9 +2018,6 @@ ExecGrant_Relation(InternalGrant *istmt)\n nnewmembers = aclmembers(new_acl, &newmembers);\n\n /* finished building new ACL value, now insert it */\n- MemSet(values, 0, sizeof(values));\n- MemSet(nulls, false, sizeof(nulls));\n- MemSet(replaces, false, sizeof(replaces));\n\n replaces[Anum_pg_class_relacl - 1] = true;\n We can place the comment just before the next block of code for\nbetter readability like you have done in other places.\n There are few more instances like this in the same file, we can\nhandle that too.\n\n-- a/src/backend/replication/slotfuncs.c\n+++ b/src/backend/replication/slotfuncs.c\n@@ -77,7 +77,7 @@ pg_create_physical_replication_slot(PG_FUNCTION_ARGS)\n bool immediately_reserve = PG_GETARG_BOOL(1);\n bool temporary = PG_GETARG_BOOL(2);\n Datum values[2];\n- bool nulls[2];\n+ bool nulls[2] = INIT_ALL_ELEMS_ZERO;\n TupleDesc tupdesc;\n HeapTuple tuple;\n Datum result;\n@@ -95,12 +95,10 @@ pg_create_physical_replication_slot(PG_FUNCTION_ARGS)\n InvalidXLogRecPtr);\n\n values[0] = NameGetDatum(&MyReplicationSlot->data.name);\n- nulls[0] = false;\n\n if (immediately_reserve)\n {\n values[1] = LSNGetDatum(MyReplicationSlot->data.restart_lsn);\n- nulls[1] = false;\n }\n else\n nulls[1] = true;\n We might not gain much here, may be this change is not required.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Oct 2019 09:51:06 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Tue, Oct 8, 2019 at 11:09 PM Amit Kapila <amit.kapila16@gmail.com> 
wrote:\n> On Sat, Oct 5, 2019 at 3:10 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-10-04 17:08:29 -0400, Tom Lane wrote:\n> > > Andres Freund <andres@anarazel.de> writes:\n> > > > On 2019-10-04 16:31:29 -0400, Bruce Momjian wrote:\n> > > >> Yeah, it is certainly weird that you have to assign the first array\n> > > >> element to get the rest to be zeros. By using a macro, we can document\n> > > >> this behavior in one place.\n> > >\n> > > > IDK, to me this seems like something one just has to learn about C, with\n> > > > the macro just obfuscating that already required knowledge. It's not\n> > > > like this only applies to stack variables initializes with {0}. It's\n> > > > also true of global variables, or function-local static ones, for\n> > > > example.\n> > >\n> > > Huh? For both those cases, the *default* behavior, going back to K&R C,\n> > > is that the variable initializes to all-bits-zero. There's no need to\n> > > write anything extra.\n> >\n> > What I mean is that if there's any initialization, it's to all zeroes,\n> > except for the parts explicitly initialized explicitly. And all that the\n> > {0} does, is that the rest of the fields are initialized the way other\n> > such initialization happens.\n> >\n>\n> You have a point and I think over time everyone will know this.\n> However, so many people advocating for having a macro with a comment\n> to be more explicit about this behavior shows that this is not equally\n> obvious to everyone or at least they think that it will help future\n> patch authors.\n>\n> Now, I think as the usage ({0}) already exists in the code, so I think\n> if we decide to use a macro, then ideally those places should also be\n> changed. I am not telling that it must be done in the same patch, we\n> can even do it as a separate patch.\n>\n> I am personally still in the camp of people advocating the use of\n> macro for this purpose. 
It is quite possible after reading your\n> points, some people might change their opinion or some others also\n> share their opinion against using a macro in which case we can drop\n> the idea of using a macro.\n\n-1 for these macros.\n\nThese are basic facts about the C language. I hope C eventually\nsupports {} like C++, so that you don't have to think hard about\nwhether the first member is another struct, and recursively so … but\nsince the macros can't help with that problem, what is the point?\n\nI am reminded of an (apocryphal?) complaint from an old C FAQ about\npeople using #define BEGIN {.\n\n\n", "msg_date": "Thu, 17 Oct 2019 18:37:11 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Thu, Oct 17, 2019 at 06:37:11PM +1300, Thomas Munro wrote:\n> -1 for these macros.\n> \n> These are basic facts about the C language. I hope C eventually\n> supports {} like C++, so that you don't have to think hard about\n> whether the first member is another struct, and recursively so … but\n> since the macros can't help with that problem, what is the point?\n\nFWIW, I am not convinced that those macros are an improvement either.\n\n> I am reminded of an (apocryphal?) complaint from an old C FAQ about\n> people using #define BEGIN {.\n\nThis one? Wow.\nhttp://c-faq.com/cpp/slm.html\n--\nMichael", "msg_date": "Thu, 17 Oct 2019 16:30:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "At Thu, 17 Oct 2019 16:30:02 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Oct 17, 2019 at 06:37:11PM +1300, Thomas Munro wrote:\n> > -1 for these macros.\n> > \n> > These are basic facts about the C language. 
I hope C eventually\n> > supports {} like C++, so that you don't have to think hard about\n> > whether the first member is another struct, and recursively so … but\n> > since the macros can't help with that problem, what is the point?\n> \n> FWIW, I am not convinced that those macros are an improvement either.\n\nFWIW agreed. I might have put +1 if it had multiple definitions\naccording to platforms, though.\n\n> > I am reminded of an (apocryphal?) complaint from an old C FAQ about\n> > people using #define BEGIN {.\n> \n> This one? Wow.\n> http://c-faq.com/cpp/slm.html\n\nI remember this.\n\nThough the new macro proposed here doesn't completely seem to be a\nso-called nonsyntactic macro, the syntax using the macro looks\nsomewhat broken since it lacks {}, which should be there.\n\nbool nulls[Natts_pg_collection] = INIT_ALL_ELEMS_ZERO;\n\nWe could abuse the macro for structs.\n\npgstattuple_type stat = INIT_ALL_ELEMS_ZERO;\n\nThis is correct in syntax, but seems completely broken.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n", "msg_date": "Thu, 17 Oct 2019 20:28:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Thu, Oct 17, 2019 at 4:58 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 17 Oct 2019 16:30:02 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n> > On Thu, Oct 17, 2019 at 06:37:11PM +1300, Thomas Munro wrote:\n> > > -1 for these macros.\n> > >\n> > > These are basic facts about the C language. 
I hope C eventually\n> > > supports {} like C++, so that you don't have to think hard about\n> > > whether the first member is another struct, and recursively so … but\n> > > since the macros can't help with that problem, what is the point?\n> >\n> > FWIW, I am not convinced that those macros are an improvement either.\n>\n> FWIW agreed. I might have put +1 if it had multpile definitions\n> according to platforms, though.\n>\n\nThanks, Thomas, Michael, and Horiguchi-San. I think there are enough\nvotes on not using a macro that we can proceed with that approach.\nThis takes us back to what Smith, Peter has initially proposed [1].\nI shall wait for a couple of days to see if someone would like to\nargue otherwise and then review the proposed patch.\n\n[1] - https://www.postgresql.org/message-id/201DD0641B056142AC8C6645EC1B5F62014B919631%40SYD1217\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Oct 2019 08:21:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> On Tue, Oct 8, 2019 at 11:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I am personally still in the camp of people advocating the use of\n> > macro for this purpose. It is quite possible after reading your\n> > points, some people might change their opinion or some others also\n> > share their opinion against using a macro in which case we can drop\n> > the idea of using a macro.\n> \n> -1 for these macros.\n\nAgreed.\n\n> These are basic facts about the C language. 
I hope C eventually\n> supports {} like C++, so that you don't have to think hard about\n> whether the first member is another struct, and recursively so … but\n> since the macros can't help with that problem, what is the point?\n\nI realize that I need to don some fireproof gear for suggesting this,\nbut I really wonder how much fallout we'd have from just allowing {} to\nbe used.. It's about a billion[1] times cleaner and more sensible than\nusing {0} and doesn't create a dependency on what the first element of\nthe struct is..\n\nThanks,\n\nStephen\n\n1: Detailed justification not included intentionally and is left as an\nexercise to the reader.", "msg_date": "Fri, 18 Oct 2019 08:18:20 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On 10/18/19 08:18, Stephen Frost wrote:\n> I realize that I need to don some fireproof gear for suggesting this,\n> but I really wonder how much fallout we'd have from just allowing {} to\n> be used.. It's about a billion[1] times cleaner and more sensible than\n> using {0} and doesn't create a dependency on what the first element of\n> the struct is..\n\nI guess the non-flamey empirical question would be, if it's not ISO C,\nare we supporting any compiler that doesn't understand it?\n\n-Chap\n\n\n", "msg_date": "Fri, 18 Oct 2019 08:59:52 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Greetings,\n\n* Chapman Flack (chap@anastigmatix.net) wrote:\n> On 10/18/19 08:18, Stephen Frost wrote:\n> > I realize that I need to don some fireproof gear for suggesting this,\n> > but I really wonder how much fallout we'd have from just allowing {} to\n> > be used.. 
It's about a billion[1] times cleaner and more sensible than\n> > using {0} and doesn't create a dependency on what the first element of\n> > the struct is..\n> \n> I guess the non-flamey empirical question would be, if it's not ISO C,\n> are we supporting any compiler that doesn't understand it?\n\nRight, that's basically what I was trying to ask. :)\n\nThanks,\n\nStephen", "msg_date": "Fri, 18 Oct 2019 09:03:31 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Hi,\n\nOn 2019-10-18 09:03:31 -0400, Stephen Frost wrote:\n> * Chapman Flack (chap@anastigmatix.net) wrote:\n> > On 10/18/19 08:18, Stephen Frost wrote:\n> > > I realize that I need to don some fireproof gear for suggesting this,\n> > > but I really wonder how much fallout we'd have from just allowing {} to\n> > > be used.. It's about a billion[1] times cleaner and more sensible than\n> > > using {0} and doesn't create a dependency on what the first element of\n> > > the struct is..\n> > \n> > I guess the non-flamey empirical question would be, if it's not ISO C,\n> > are we supporting any compiler that doesn't understand it?\n> \n> Right, that's basically what I was trying to ask. :)\n\nI don't understand why this is an issue worth deviating from the\nstandard for. 
Especially not when the person suggesting to do so isn't\neven doing the leg work to estimate the portability issues.\n\nI feel we've spent more than enough time on this topic.\n\n- Andres\n\n\n", "msg_date": "Sat, 19 Oct 2019 03:26:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-10-18 09:03:31 -0400, Stephen Frost wrote:\n> > * Chapman Flack (chap@anastigmatix.net) wrote:\n> > > On 10/18/19 08:18, Stephen Frost wrote:\n> > > > I realize that I need to don some fireproof gear for suggesting this,\n> > > > but I really wonder how much fallout we'd have from just allowing {} to\n> > > > be used.. It's about a billion[1] times cleaner and more sensible than\n> > > > using {0} and doesn't create a dependency on what the first element of\n> > > > the struct is..\n> > > \n> > > I guess the non-flamey empirical question would be, if it's not ISO C,\n> > > are we supporting any compiler that doesn't understand it?\n> > \n> > Right, that's basically what I was trying to ask. :)\n> \n> I don't understand why this is an issue worth deviating from the\n> standard for.\n\nBecause this use and the way the standard is defined in this case is\nconfusing and could lead later hackers to misunderstand what's going on\nand end up creating bugs- which is what a good chunk of this discussion\nwas about. 
The {} construct is much clearer in this regard and while\nit's not in the C standard it's in C++ and it's accepted by the commonly\nused compilers (clang and and pretty far back it seems for gcc), without\nwarning unless you enable -pedantic or similar.\n\n> Especially not when the person suggesting to do so isn't\n> even doing the leg work to estimate the portability issues.\n\nI figured it was common knowledge that gcc/clang supported it just fine,\nwhich covers something like 90% of the buildfarm. I haven't got easy\naccess to check others.\n\nThanks,\n\nStephen", "msg_date": "Sat, 19 Oct 2019 11:43:59 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Sat, Oct 19, 2019 at 9:14 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> * Andres Freund (andres@anarazel.de) wrote:\n>\n> > Especially not when the person suggesting to do so isn't\n> > even doing the leg work to estimate the portability issues.\n>\n> I figured it was common knowledge that gcc/clang supported it just fine,\n> which covers something like 90% of the buildfarm.
I haven't got easy\n> access to check others.\n>\n\nI have tried {} on Windows (MSVC-2017) and it is giving compilation error:\n\n>\\src\\backend\\access\\transam\\commit_ts.c(425): error C2059: syntax error: '}'\n1>\\src\\backend\\access\\transam\\commit_ts.c(426): error C2059: syntax error: '}'\n\nThe changed code looks like below:\nDatum\npg_last_committed_xact(PG_FUNCTION_ARGS)\n{\n..\nDatum values[2] = {};\nbool nulls[2] = {};\n..\n}\n\nDoes this put an end to the option of using {} or do we want to\ninvestigate something more?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 21 Oct 2019 17:07:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "> > I don't understand why this is an issue worth deviating from the\n> > standard for.\n> \n> Because this use and the way the standard is defined in this case is\n> confusing and could lead later hackers to misunderstand what's going on\n> and end up creating bugs-\n\nThe two possible misunderstandings seem to be:\n\n1. how 0 is interpreted in various contexts such as bool\n2. that the x in {x} applies to only the first element\n\nIMHO we should expect people to be familiar with (1), and we have the\nINIT_ALL_ELEMS_ZERO macro to avoid (2). However the more I look at the\ncode using that macro the less I like it. The {0} initializer is more\nidiomatic.\n\nMy vote would be to use {0} everywhere and avoid constructions like\n{false} which might exacerbate misunderstanding (2).\n\n> I figured it was common knowledge that gcc/clang supported it just fine,\n> which covers something like 90% of the buildfarm.
I haven't got easy\n> access to check others.\n\nAs Amit pointed out, {} doesn't work with MSVC-2017, nor is there any\nreason it should, given that it isn't part of the C standard.", "msg_date": "Mon, 21 Oct 2019 10:25:57 -0500", "msg_from": "\"Joe Nelson\" <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Proposal:_Make_use_of_C99_designated_initialisers_for_null?=\n =?UTF-8?Q?s/values_arrays?=" }, { "msg_contents": "On 10/21/19 11:25 AM, Joe Nelson wrote:\n> we have the\n> INIT_ALL_ELEMS_ZERO macro to avoid (2). However the more I look at the\n> code using that macro the less I like it. The {0} initializer is more\n> idiomatic.\n\nIf faced with the two questions:\n\n1. which of a or b is more \"clear\" ?\n2. which of a or b is more \"idiomatic\" ?\n\nI think I would feel on more solid ground opining on (1),\nwhere wrt (2) I would feel a little muzzier trying to say\nwhat the question means.\n\nIt seems to me that idioms are common bits of usage that take off\nbecause they're widely recognized as saying a specific thing\nefficiently and clearly.\n\nOn that score, I'm not sure {0} really makes a good idiom ...
indeed,\nit seems this conversation is largely about whether it /looks/ too\nmuch like an idiom, and to some readers could appear to be saying\nsomething efficiently and clearly but that isn't quite what it means.\n\nI would favor {} in a heartbeat if it were standard, because that\nsucker is an idiom.\n\nFailing that, though, I think I still favor the macro, because\nquestion (1) seems less fuzzy than question (2), and on \"clear\",\nthe macro wins.\n\nRegards,\n-Chap", "msg_date": "Mon, 21 Oct 2019 11:46:43 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Mon, 21 Oct 2019 at 11:46, Chapman Flack <chap@anastigmatix.net> wrote:\n\n>\n> I would favor {} in a heartbeat if it were standard, because that\n> sucker is an idiom.\n>\n> Failing that, though, I think I still favor the macro, because\n> question (1) seems less fuzzy than question (2), and on \"clear\",\n> the macro wins.\n>\n\nIs it possible to define the macro to be {} where supported and {0} where\nneeded? Something like:\n\n#if ...\n#define INIT_ALL_ELEMS_ZERO {}\n#else\n#define INIT_ALL_ELEMS_ZERO {0}\n#endif\n\nThen it's clear the 0 is just there to make certain compilers happy and\ndoesn't have any actual meaning.\n\nOn Mon, 21 Oct 2019 at 11:46, Chapman Flack <chap@anastigmatix.net> wrote:\nI would favor {} in a heartbeat if it were standard, because that\nsucker is an idiom.\n\nFailing that, though, I think I still favor the macro, because\nquestion (1) seems less fuzzy than question (2), and on \"clear\",\nthe macro wins.\nIs it possible to define the macro to be {} where supported and {0} where needed?
Something like:#if ...#define INIT_ALL_ELEMS_ZERO {}#else#define INIT_ALL_ELEMS_ZERO {0}#endifThen it's clear the 0 is just there to make certain compilers happy and doesn't have any actual meaning.", "msg_date": "Mon, 21 Oct 2019 13:35:49 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "> Is it possible to define the macro to be {} where supported and {0} \n> where needed? Something like:\n\nIf it's being put behind a macro then *stylistically* it shouldn't\nmatter whether {} or {0} is chosen, right? In which case {0} would\nbe a better choice because it's supported everywhere.", "msg_date": "Mon, 21 Oct 2019 13:03:07 -0500", "msg_from": "\"Joe Nelson\" <joe@begriffs.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?Q?Re:_Proposal:_Make_use_of_C99_designated_initialisers_for_null?=\n =?UTF-8?Q?s/values_arrays?=" }, { "msg_contents": "Greetings,\n\n* Joe Nelson (joe@begriffs.com) wrote:\n> > Is it possible to define the macro to be {} where supported and {0} \n> > where needed? Something like:\n> \n> If it's being put behind a macro then *stylistically* it shouldn't\n> matter whether {} or {0} is chosen, right? In which case {0} would\n> be a better choice because it's supported everywhere.\n\nThe problem with {0} in the first place is that it doesn't actually work\nin all cases... Simple cases, yes, but not more complex ones.
It's\nunfortunate that there isn't a general solution here that works across\nplatforms (even if it involved macros..), but that seems to be the case.\n\nThanks,\n\nStephen", "msg_date": "Mon, 21 Oct 2019 14:13:19 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Joe Nelson (joe@begriffs.com) wrote:\n>> If it's being put behind a macro then *stylistically* it shouldn't\n>> matter whether {} or {0} is chosen, right? In which case {0} would\n>> be a better choice because it's supported everywhere.\n\n> The problem with {0} in the first place is that it doesn't actually work\n> in all cases... Simple cases, yes, but not more complex ones. It's\n> unfortunate that there isn't a general solution here that works across\n> platforms (even if it involved macros..), but that seems to be the case.\n\nThere is a general solution that works across platforms; it's called\nmemset() and it's what we're using today. I'm beginning to think that\nwe should just reject this patch. It's certainly not enough of an\nimprovement to justify the amount of discussion that's gone into it.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 21 Oct 2019 15:04:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On 2019-10-21 15:04:36 -0400, Tom Lane wrote:\n> There is a general solution that works across platforms; it's called\n> memset() and it's what we're using today. I'm beginning to think that\n> we should just reject this patch.
It's certainly not enough of an\n> improvement to justify the amount of discussion that's gone into it.\n\nbikeshedding vs reality of programming & efficiency: 1 : 0.", "msg_date": "Mon, 21 Oct 2019 12:36:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Tue, Oct 22, 2019 at 12:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Joe Nelson (joe@begriffs.com) wrote:\n> >> If it's being put behind a macro then *stylistically* it shouldn't\n> >> matter whether {} or {0} is chosen, right? In which case {0} would\n> >> be a better choice because it's supported everywhere.\n>\n> > The problem with {0} in the first place is that it doesn't actually work\n> > in all cases... Simple cases, yes, but not more complex ones. It's\n> > unfortunate that there isn't a general solution here that works across\n> > platforms (even if it involved macros..), but that seems to be the case.\n>\n> There is a general solution that works across platforms; it's called\n> memset() and it's what we're using today. I'm beginning to think that\n> we should just reject this patch.\n>\n\nHmm, but then what is your suggestion for existing code that uses {0}.\nIf we reject this patch and leave the current code as it is, there is\nalways a risk of some people using {0} and others using memset which\nwill lead to further deviation in the code.
Now, maybe if we change\nthe existing code to always use memset where we use {0}, then we can\nkind of enforce such a rule for future patch authors.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Oct 2019 16:13:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Tue, Oct 22, 2019 at 04:13:03PM +0530, Amit Kapila wrote:\n> Hmm, but then what is your suggestion for existing code that uses {0}.\n> If we reject this patch and leave the current code as it is, there is\n> always a risk of some people using {0} and others using memset which\n> will lead to further deviation in the code. Now, maybe if we change\n> the existing code to always use memset where we use {0}, then we can\n> kind of enforce such a rule for future patch authors.\n\nWell, we could have a shot at reducing the footprint of {0} then where\nwe can. I am seeing less than a dozen in contrib/, and a bit more\nthan thirty in src/backend/. Or we could just do as we do with such\nbusiness: let's update them when we see that's adapted and when\nmodifying the surrounding area.\n\nAt least I see one conclusion coming out of this thread: the patch is\nin the direction of getting rejected.
My recommendation would be to\ndo that, and focus on other patches which could get merged: we have a\ntotal of 220 entries in this CF.\n--\nMichael", "msg_date": "Tue, 12 Nov 2019 14:17:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Hi,\n\nOn 2019-11-12 14:17:42 +0900, Michael Paquier wrote:\n> On Tue, Oct 22, 2019 at 04:13:03PM +0530, Amit Kapila wrote:\n> > Hmm, but then what is your suggestion for existing code that uses {0}.\n> > If we reject this patch and leave the current code as it is, there is\n> > always a risk of some people using {0} and others using memset which\n> > will lead to further deviation in the code. Now, maybe if we change\n> > the existing code to always use memset where we use {0}, then we can\n> > kind of enforce such a rule for future patch authors.\n> \n> Well, we could have a shot at reducing the footprint of {0} then where\n> we can. I am seeing less than a dozen in contrib/, and a bit more\n> than thirty in src/backend/.\n\n-many. I think this serves zero positive purpose, except to make it\nharder to analyze code-flow.\n\nI think it's not worth going around to convert code to use {0} style\ninitializers in most cases, but when authors write it, we shouldn't\nremove it either.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 12 Nov 2019 11:31:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "Hi Hackers.\n\nThis submission seems to have stalled.
\n\nPlease forgive this post - I am unsure if the submission process expects me to come to defence of my own patch for one last gasp, or if I am supposed to just sit back and watch it die a slow death of a thousand cuts.\n\nI thought this submission actually started out very popular, but then support slowly eroded, and currently seems headed towards a likely rejection.\n\n~\n\nAnyway, here are my arguments:\n\n(a) I recognise that on first glance, the {0} syntax might evoke a momentary \"double-take\" by the someone reading the code. IMO this would only be experienced by somebody encountering {0} syntax for the very first time. This is not an really uncommon \"pattern\" (it's already elsewhere in PostreSQL code), and once you've seen it two or three times there is no confusion what it is doing.\n\n(b) Because of (a) I don't really agree with the notion that it should be replaced by a macro to hide the C syntax. I did try adding various macros as suggested, but all that achieved was to was spin off another 20 emails debating the macro format. I thought any code committer/reviewer should have no trouble at all to understand standard C syntax.\n\n(c) It was never a goal of this submission that *all* memsets should be replaced by {0}. Sometimes {0} is more concise and better IMO, but sometimes memset is a way more appropriate choice. This patch only replaces simple examples of primitive types like the values[] and nulls[] arrays (which was a repeated pattern for many tuple operations). I think any concern that {0} may not work for all other complex cases is a red-herring. When memset is better, then use memset.\n\n(d) Wishing for C99 syntax to be same as the simpler {} style of C++ is another red-herring. I can only use what is officially supported. It is what it is.\n\n(e) The PostgreSQL miscellaneous coding conventions - https://www.postgresql.org/docs/current/source-conventions.html - says to avoid \" intermingled declarations and code\".
This leads to some code where the variable declaration and the initialization (e.g. memset 0 or memset false) code can be widely separated. It can be an easy source of mistakes to assume a variable was already initialized when maybe it wasn't. This patch puts the initialization at the point of declaration, and so eliminates this risk. Isn't that best practice?\n \n(f) I'm still a bit perplexed how can it be that removing 200 lines of unnecessary function calls is not considered a good thing to do? Are patches that only tidy up code generally not accepted? I don't know.\n\n~\n\nThat's all I have to say in support of my patch; it will live or it will die according to the community wish.\n \nIf nothing else, at least I've learned a new term - \"bike shedding\" :-)\n\nKind Regards.\n---\nPeter Smith\nFujitsu Australia\n\n\n", "msg_date": "Thu, 21 Nov 2019 04:50:22 +0000", "msg_from": "\"Smith, Peter\" <peters@fast.au.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" }, { "msg_contents": "On Thu, Nov 21, 2019 at 04:50:22AM +0000, Smith, Peter wrote:\n> I thought this submission actually started out very popular, but\n> then support slowly eroded, and currently seems headed towards a\n> likely rejection.\n\nYeah, it seems to me that this tends to be a rejection, and the thread\nhas actually died. As we are close to the end of the CF, I am just\nupdating the patch as such.\n--\nMichael", "msg_date": "Wed, 27 Nov 2019 17:23:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Proposal: Make use of C99 designated initialisers for\n nulls/values arrays" } ]
[ { "msg_contents": "Greetings,\nI am Sakshi Munjal and I would like to know how I can apply to google\ncode-in as mentor with your organization.\n\nI have great interest in coding and I like to spend my free time learning\nnew things. I have developed my skills in full stack web development, I\nhave knowledge about machine learning and I am currently pursuing my\ninterest in cyber security. I like to spend my free time playing piano and\nserving in church. I have been an all time merit holder in school and I\nmanaged to score 9.5 CGPA in college semester as well. I have contributed\nto open source organizations earlier and I would like to enhance my\nthinking horizon.\n\nThank you for taking out your valuable time.\nI would definitely wait for a response from you.\nRegards\n\nGreetings,I am Sakshi Munjal and I would like to know how I can apply to google code-in as mentor with your organization.\nI have great interest in coding and I like to spend my free time \nlearning new things. I have developed my skills in full stack web \ndevelopment, I have knowledge about machine learning and I am currently \npursuing my interest in cyber security. I like to spend my free time \nplaying piano and serving in church. I have been an all time merit \nholder in school and I managed to score 9.5 CGPA in college semester as \nwell. I have contributed to open source organizations earlier and I \nwould like to enhance my thinking horizon. Thank you for taking out your valuable time.I would definitely wait for a response from you.Regards", "msg_date": "Tue, 1 Oct 2019 14:14:12 +0530", "msg_from": "Sakshi Munjal <sakshi1499@gmail.com>", "msg_from_op": true, "msg_subject": "About Google Code-in" } ]
[ { "msg_contents": "Collegues,\n\nI've encountered following problem on some old Sparc64 machine running\nsolaris 10:\n\nWhen I compile postgresql 12 with --enable-tap-tests and run make check\nin src/bin, test src/bin/pg_basebackup/t/010_pg_basebackup.pl\nhangs and hangs infinitely. \n\nI've tried to attach gdb to the hanging process, but it attempt to \ndo backtrace in it, gdb reports that stack is corrupt\n\nAttaching to program `/home/vitus/postgrespro/src/bin/pg_basebackup/pg_basebackup', process 1467\n[New process 1467]\nRetry #1:\nRetry #2:\nRetry #3:\nRetry #4:\nReading symbols from /usr/lib/sparcv9/ld.so.1...(no debugging symbols found)...done.\nLoaded symbols for /usr/lib/sparcv9/ld.so.1\n---Type <return> to continue, or q <return> to quit---\n0x00000000ff2cca38 in ?? ()\n(gdb) bt\n#0 0x00000000ff2cca38 in ?? ()\nBacktrace stopped: previous frame identical to this frame (corrupt stack?)\n\n\nWhen afterword I kill hanged process with\nkill -SEGV to get core, I get following stack trace from core file:\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0xffffffff7d5d94ac in ___sigtimedwait () from /lib/64/libc.so.1\n(gdb) bt\n#0 0xffffffff7d5d94ac in ___sigtimedwait () from /lib/64/libc.so.1\n#1 0xffffffff7d5c8c8c in __sigtimedwait () from /lib/64/libc.so.1\n#2 0xffffffff7d5c0628 in __posix_sigwait () from /lib/64/libc.so.1\n#3 0xffffffff7f3362b4 in pq_reset_sigpipe (osigset=0xffffffff7fffeb1c,\nsigpipe_pending=false, got_epipe=true) at fe-secure.c:529 #4\n0xffffffff7f336084 in pqsecure_raw_write (conn=0x100135e90,\nptr=0x10013a370, len=5) at fe-secure.c:399 #5 0xffffffff7f335e28 in\npqsecure_write (conn=0x100135e90, ptr=0x10013a370, len=5) at\nfe-secure.c:316 #6 0xffffffff7f326c54 in pqSendSome (conn=0x100135e90,\nlen=5) at fe-misc.c:876 #7 0xffffffff7f326e84 in pqFlush\n(conn=0x100135e90) at fe-misc.c:1004 #8 0xffffffff7f316584 in\nsendTerminateConn (conn=0x100135e90) at fe-connect.c:4031 #9\n0xffffffff7f3165a4 in closePGconn
(conn=0x100135e90) at\nfe-connect.c:4049 #10 0xffffffff7f31663c in PQfinish (conn=0x100135e90)\nat fe-connect.c:4083 #11 0x000000010000bc64 in BaseBackup () at\npg_basebackup.c:2136 #12 0x000000010000d7ec in main (argc=4,\nargv=0xffffffff7ffff808) at pg_basebackup.c:2547\n\nThis happens on random tests in this test file with probablity about\n1/10, but because there is more than 100 tests, hanging has 100%\nprobablity. But other two test files in src/bin/pg_basebackup directory\ndon't hang.\n\nAs far as I can notice, there is only two machines with Solaris in\npgbuildfarm now, and neither of them has any records of running\nREL_12_STABLE branch. (not to mention that both don't run tap tests).\n\n-- \n\n\n", "msg_date": "Tue, 1 Oct 2019 17:04:03 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "pg_basebackup from REL_12_STABLE hands on solaris/sparch" } ]
[ { "msg_contents": "My Buildfarm animal (peripatus) has been failing check since yesterday. \nCan someone look at it?\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106", "msg_date": "Tue, 01 Oct 2019 10:32:26 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Peripatus: Can someone look?" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> My Buildfarm animal (peripatus) has been failing check since yesterday. \n> Can someone look at it?\n\nIt's been doing this in parallel queries, in v11 and up:\n\n2019-09-29 19:00:15.534 CDT [49513:1] ERROR: could not open shared memory segment \"/PostgreSQL.1225945786\": Permission denied\n2019-09-29 19:00:15.535 CDT [48596:491] pg_regress/partition_prune ERROR: parallel worker failed to initialize\n2019-09-29 19:00:15.535 CDT [48596:492] pg_regress/partition_prune HINT: More details may be available in the server log.\n2019-09-29 19:00:15.535 CDT [48596:493] pg_regress/partition_prune CONTEXT: PL/pgSQL function explain_parallel_append(text) line 5 at FOR over EXECUTE statement\n2019-09-29 19:00:15.535 CDT [48596:494] pg_regress/partition_prune STATEMENT: select explain_parallel_append('select avg(ab.a) from ab inner join lprt_a a on ab.a = a.a where a.a in(1, 0, 0)');\n2019-09-29 19:00:15.535 CDT [48596:495] pg_regress/partition_prune WARNING: could not remove shared memory segment \"/PostgreSQL.1225945786\": Permission denied\n\nwhich looks like an external problem to me --- certainly, nothing\nwe changed yesterday would explain it. Did you change anything\nabout the system configuration, or do a software update?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 01 Oct 2019 11:46:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Peripatus: Can someone look?"
}, { "msg_contents": "On 10/01/2019 10:46 am, Tom Lane wrote:\n> Larry Rosenman <ler@lerctr.org> writes:\n>> My Buildfarm animal (peripatus) has been failing check since \n>> yesterday.\n>> Can someone look at it?\n> \n> It's been doing this in parallel queries, in v11 and up:\n> \n> 2019-09-29 19:00:15.534 CDT [49513:1] ERROR: could not open shared\n> memory segment \"/PostgreSQL.1225945786\": Permission denied\n> 2019-09-29 19:00:15.535 CDT [48596:491] pg_regress/partition_prune\n> ERROR: parallel worker failed to initialize\n> 2019-09-29 19:00:15.535 CDT [48596:492] pg_regress/partition_prune\n> HINT: More details may be available in the server log.\n> 2019-09-29 19:00:15.535 CDT [48596:493] pg_regress/partition_prune\n> CONTEXT: PL/pgSQL function explain_parallel_append(text) line 5 at\n> FOR over EXECUTE statement\n> 2019-09-29 19:00:15.535 CDT [48596:494] pg_regress/partition_prune\n> STATEMENT: select explain_parallel_append('select avg(ab.a) from ab\n> inner join lprt_a a on ab.a = a.a where a.a in(1, 0, 0)');\n> 2019-09-29 19:00:15.535 CDT [48596:495] pg_regress/partition_prune\n> WARNING: could not remove shared memory segment\n> \"/PostgreSQL.1225945786\": Permission denied\n> \n> which looks like an external problem to me --- certainly, nothing\n> we changed yesterday would explain it. Did you change anything\n> about the system configuration, or do a software update?\n> \n> \t\t\tregards, tom lane\n\nI did do an upgrade to a later SVN rev.\n\nLet me reboot and see if that fixes anything.\n\n(this is -CURRENT on FreeBSD, so it's always a moving target).\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106", "msg_date": "Tue, 01 Oct 2019 10:49:33 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: Peripatus: Can someone look?"
}, { "msg_contents": "On Wed, Oct 2, 2019 at 4:49 AM Larry Rosenman <ler@lerctr.org> wrote:\n> On 10/01/2019 10:46 am, Tom Lane wrote:\n> > Larry Rosenman <ler@lerctr.org> writes:\n> >> My Buildfarm animal (peripatus) has been failing check since\n> >> yesterday.\n> >> Can someone look at it?\n> >\n> > It's been doing this in parallel queries, in v11 and up:\n> >\n> > 2019-09-29 19:00:15.534 CDT [49513:1] ERROR: could not open shared\n> > memory segment \"/PostgreSQL.1225945786\": Permission denied\n> > 2019-09-29 19:00:15.535 CDT [48596:491] pg_regress/partition_prune\n> > ERROR: parallel worker failed to initialize\n> > 2019-09-29 19:00:15.535 CDT [48596:492] pg_regress/partition_prune\n> > HINT: More details may be available in the server log.\n> > 2019-09-29 19:00:15.535 CDT [48596:493] pg_regress/partition_prune\n> > CONTEXT: PL/pgSQL function explain_parallel_append(text) line 5 at\n> > FOR over EXECUTE statement\n> > 2019-09-29 19:00:15.535 CDT [48596:494] pg_regress/partition_prune\n> > STATEMENT: select explain_parallel_append('select avg(ab.a) from ab\n> > inner join lprt_a a on ab.a = a.a where a.a in(1, 0, 0)');\n> > 2019-09-29 19:00:15.535 CDT [48596:495] pg_regress/partition_prune\n> > WARNING: could not remove shared memory segment\n> > \"/PostgreSQL.1225945786\": Permission denied\n> >\n> > which looks like an external problem to me --- certainly, nothing\n> > we changed yesterday would explain it. Did you change anything\n> > about the system configuration, or do a software update?\n> >\n> > regards, tom lane\n>\n> I did do an upgrade to a later SVN rev.\n>\n> Let me reboot and see if that fixes anything.\n>\n> (this is -CURRENT on FreeBSD, so it's always a moving target).\n\nHi Larry,\n\nI'm seeing this on my FreeBSD 13 bleeding edge system too (built a\ncouple of days ago) and will see if I can find out what's up with\nthat.
The most obvious culprit is the stuff that just landed in the\nkernel to support Linux-style memfd_create() and thereby changed\naround some shm_open() related things. Seems to be clearly not a\nPostgreSQL problem.\n\nThanks,\nThomas", "msg_date": "Wed, 2 Oct 2019 14:33:05 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Peripatus: Can someone look?" }, { "msg_contents": "On 10/01/2019 8:33 pm, Thomas Munro wrote:\n> On Wed, Oct 2, 2019 at 4:49 AM Larry Rosenman <ler@lerctr.org> wrote:\n>> On 10/01/2019 10:46 am, Tom Lane wrote:\n>> > Larry Rosenman <ler@lerctr.org> writes:\n>> >> My Buildfarm animal (peripatus) has been failing check since\n>> >> yesterday.\n>> >> Can someone look at it?\n>> >\n>> > It's been doing this in parallel queries, in v11 and up:\n>> >\n>> > 2019-09-29 19:00:15.534 CDT [49513:1] ERROR: could not open shared\n>> > memory segment \"/PostgreSQL.1225945786\": Permission denied\n>> > 2019-09-29 19:00:15.535 CDT [48596:491] pg_regress/partition_prune\n>> > ERROR: parallel worker failed to initialize\n>> > 2019-09-29 19:00:15.535 CDT [48596:492] pg_regress/partition_prune\n>> > HINT: More details may be available in the server log.\n>> > 2019-09-29 19:00:15.535 CDT [48596:493] pg_regress/partition_prune\n>> > CONTEXT: PL/pgSQL function explain_parallel_append(text) line 5 at\n>> > FOR over EXECUTE statement\n>> > 2019-09-29 19:00:15.535 CDT [48596:494] pg_regress/partition_prune\n>> > STATEMENT: select explain_parallel_append('select avg(ab.a) from ab\n>> > inner join lprt_a a on ab.a = a.a where a.a in(1, 0, 0)');\n>> > 2019-09-29 19:00:15.535 CDT [48596:495] pg_regress/partition_prune\n>> > WARNING: could not remove shared memory segment\n>> > \"/PostgreSQL.1225945786\": Permission denied\n>> >\n>> > which looks like an external problem to me --- certainly, nothing\n>> > we changed yesterday would explain it.
Did you change anything\n>> > about the system configuration, or do a software update?\n>> >\n>> > regards, tom lane\n>> \n>> I did do an upgrade to a later SVN rev.\n>> \n>> Let me reboot and see if that fixes anything.\n>> \n>> (this is -CURRENT on FreeBSD, so it's always a moving target).\n> \n> Hi Larry,\n> \n> I'm seeing this on my FreeBSD 13 bleeding edge system too (built a\n> couple of days ago) and will see if I can find out what's up with\n> that. The most obvious culprit is the stuff that just landed in the\n> kernel to support Linux-style memfd_create() and thereby changed\n> around some shm_open() related things. Seems to be clearly not a\n> PostgreSQL problem.\n> \n> Thanks,\n> Thomas\n\nThanks, Thomas.\n\nHere's my 2 SVN revs if that would help:\nFreeBSD SVN rev:\nr352600 - - 1.69G 2019-09-22 13:13\nr352873 NR / 43.1G 2019-09-29 16:36\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Tue, 01 Oct 2019 20:44:38 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: Peripatus: Can someone look?" } ]
[ { "msg_contents": "FreeBSD SVN rev:\nr352600 - - 1.69G 2019-09-22 13:13\nr352873 NR / 43.1G 2019-09-29 16:36\n\nI went from r352600 to r352873 and now I'm getting PostgreSQL permission denied\nerrors on the check phase of the build.\n\nFreeBSD folks: Any ideas?\nPostgreSQL folks: FYI.\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106", "msg_date": "Tue, 01 Oct 2019 20:27:27 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "My buildfarm member now giving permission denied" }, { "msg_contents": "On 10/01/2019 8:27 pm, Larry Rosenman wrote:\n> FreeBSD SVN rev:\n> r352600 - - 1.69G 2019-09-22 13:13\n> r352873 NR / 43.1G 2019-09-29 16:36\n> \n> I went from r352600 to r352873 and now I'm getting PostgreSQL \n> permission denied\n> errors on the check phase of the build.\n> \n> FreeBSD folks: Any ideas?\n> PostgreSQL folks: FYI.\nlatest build log:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2019-10-02%2001%3A20%3A14\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106", "msg_date": "Tue, 01 Oct 2019 20:32:27 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: My buildfarm member now giving permission denied" }, { "msg_contents": "On Tue, Oct 1, 2019 at 8:32 PM Larry Rosenman <ler@lerctr.org> wrote:\n>\n> On 10/01/2019 8:27 pm, Larry Rosenman wrote:\n> > FreeBSD SVN rev:\n> > r352600 - - 1.69G 2019-09-22 13:13\n> > r352873 NR / 43.1G 2019-09-29 16:36\n> >\n> > I went from r352600 to r352873 and now I'm getting PostgreSQL\n> > permission denied\n> > errors on the check phase of the build.\n> >\n> > FreeBSD folks: Any ideas?\n> > PostgreSQL folks: FYI.\n> latest build log:\n>
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2019-10-02%2001%3A20%3A14\n>\n\nJust to follow up with this for list' sake; this was fixed by r352952.\n\nThanks,\n\nKyle Evans\n\n\n", "msg_date": "Wed, 2 Oct 2019 09:38:20 -0500", "msg_from": "Kyle Evans <kevans@freebsd.org>", "msg_from_op": false, "msg_subject": "Re: My buildfarm member now giving permission denied" } ]
[ { "msg_contents": "Dear Hackers\r\n\r\nI am using PostgreSQL's SPI to execute a simple SQL query (SELECT * FROM \r\n...) via SPI_exec. As a result, I get an SPITupleTable with the \r\nresults of my query.\r\n\r\nNow that I have the SPITupleTable, I was wondering if it would be \r\npossible to later query over it further in my SQL statements using SPI, \r\nfor example, something a bit similar to SPI_Exec (\"Select * FROM \r\n:mySPITupleTable\", 0);\r\n\r\nMy motivation is to treat and use the SPITupleTable as 'intermediate' \r\nor 'temporary' tables which I would discard early - I want to apply a \r\nseries of manipulations to my SPITupleTable before I would finally store \r\nit in the tablespace. Therefore, minimization of any overheads is also \r\nvery important. I understand that I could introduce a CREATE TABLE to my \r\nSQL query and reference a table in that way, but I am under the \r\nimpression that it would incur unnecessary overheads.\r\n\r\nSo, I would be grateful if anyone could help me understand how to \r\nmanipulate the SPITupleTable further with SQL or indicate if it is at \r\nall possible. In the case that it is not possible, I would also be \r\ninterested in alternatives and discussion on overheads.\r\n\r\nThanks in advance.\r\n\r\nBest,\r\nTom\r\n", "msg_date": "Wed, 2 Oct 2019 02:36:07 +0000", "msg_from": "Tom Mercha <mercha_t@hotmail.com>", "msg_from_op": true, "msg_subject": "Is querying SPITupleTable with SQL possible?" }, { "msg_contents": "Tom Mercha <mercha_t@hotmail.com> writes:\n> I am using PostgreSQL's SPI to execute a simple SQL query (SELECT * FROM \n> ...) via SPI_exec. 
As a a result, I get an SPITupleTable with the \n> results of my query.\n> Now that I have the SPITupleTable, I was wondering if it would be \n> possible to later query over it further in my SQL statements using SPI, \n> for example, something a bit similar to SPI_Exec (\"Select * FROM \n> :mySPITupleTable\", 0);\n\nIt's possible you could use the \"transition table\" (aka\nEphemeralNamedRelation) infrastructure for this, though I'm not sure\nif it's really a close fit, or whether it's been built out enough to\nsupport this usage. From memory, it wants to work with tuplestores,\nwhich are a bit heavier-weight than SPITupleTables.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Oct 2019 10:11:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is querying SPITupleTable with SQL possible?" }, { "msg_contents": "On 02/10/2019 16:11, Tom Lane wrote:\r\n> Tom Mercha <mercha_t@hotmail.com> writes:\r\n>> I am using PostgreSQL's SPI to execute a simple SQL query (SELECT * FROM\r\n>> ...) via SPI_exec. As a a result, I get an SPITupleTable with the\r\n>> results of my query.\r\n>> Now that I have the SPITupleTable, I was wondering if it would be\r\n>> possible to later query over it further in my SQL statements using SPI,\r\n>> for example, something a bit similar to SPI_Exec (\"Select * FROM\r\n>> :mySPITupleTable\", 0);\r\n> \r\n> It's possible you could use the \"transition table\" (aka\r\n> EphemeralNamedRelation) infrastructure for this, though I'm not sure\r\n> if it's really a close fit, or whether it's been built out enough to\r\n> support this usage. From memory, it wants to work with tuplestores,\r\n> which are a bit heavier-weight than SPITupleTables.\r\n> \r\n> \t\t\tregards, tom lane\r\n> \r\n\r\nThanks for this feedback! 
The EphemeralNamedRelation seems like it could \r\nbe a good fit for what I'm looking for.\r\n\r\nHowever, I'm not quite so sure how I can query over the \r\nEphemeralNamedRelation using SQL. Could someone indicate where I can \r\nfind an example?\r\n\r\nRegards\r\nTom\r\n", "msg_date": "Wed, 2 Oct 2019 18:53:16 +0000", "msg_from": "Tom Mercha <mercha_t@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Is querying SPITupleTable with SQL possible?" 
}, { "msg_contents": "Hi,\r\nI have seen your discussion about the node EphemeralNamedRelation with the Community.\r\nNow I want to use this node in SQL (for testing). I have read the manual but could not understand it,\r\nso can you show me an example of how to use it in SQL?\r\n\r\nThanks a lot~\r\n\r\nRegards, wu fei\r\n\r\n-----Original Message-----\r\nFrom: Tom Mercha [mailto:mercha_t@hotmail.com] \r\nSent: 3 October 2019 2:53\r\nTo: Tom Lane <tgl@sss.pgh.pa.us>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: Is querying SPITupleTable with SQL possible?\r\n\r\nOn 02/10/2019 16:11, Tom Lane wrote:\r\n> Tom Mercha <mercha_t@hotmail.com> writes:\r\n>> I am using PostgreSQL's SPI to execute a simple SQL query (SELECT * \r\n>> FROM\r\n>> ...) via SPI_exec. As a a result, I get an SPITupleTable with the\r\n>> results of my query.\r\n>> Now that I have the SPITupleTable, I was wondering if it would be\r\n>> possible to later query over it further in my SQL statements using\r\n>> SPI, for example, something a bit similar to SPI_Exec (\"Select * FROM\r\n>> :mySPITupleTable\", 0);\r\n> \r\n> It's possible you could use the \"transition table\" (aka\r\n> EphemeralNamedRelation) infrastructure for this, though I'm not sure\r\n> if it's really a close fit, or whether it's been built out enough to\r\n> support this usage. From memory, it wants to work with tuplestores,\r\n> which are a bit heavier-weight than SPITupleTables.\r\n> \r\n> \t\t\tregards, tom lane\r\n> \r\n\r\nThanks for this feedback! The EphemeralNamedRelation seems that it could be a good fit for what I'm looking for.\r\n\r\nHowever, I'm not quite so sure how I can query over the EphemeralNamedRelation using SQL? Could someone indicate where I can find an example?\r\n\r\nRegards\r\nTom\r\n\r\n\r\n\n\n", "msg_date": "Fri, 20 Dec 2019 01:53:44 +0000", "msg_from": "\"Wu, Fei\" <wufei.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Is querying SPITupleTable with SQL possible?" 
}, { "msg_contents": "[un-top-postifying]\n\nOn Fri, Dec 20, 2019 at 2:53 PM Wu, Fei <wufei.fnst@cn.fujitsu.com> wrote:\n>> On 02/10/2019 16:11, Tom Lane wrote:\n>> > Tom Mercha <mercha_t@hotmail.com> writes:\n>> >> I am using PostgreSQL's SPI to execute a simple SQL query (SELECT *\n>> >> FROM\n>> >> ...) via SPI_exec. As a a result, I get an SPITupleTable with the\n>> >> results of my query.\n>> >> Now that I have the SPITupleTable, I was wondering if it would be\n>> >> possible to later query over it further in my SQL statements using\n>> >> SPI, for example, something a bit similar to SPI_Exec (\"Select * FROM\n>> >> :mySPITupleTable\", 0);\n>> >\n>> > It's possible you could use the \"transition table\" (aka\n>> > EphemeralNamedRelation) infrastructure for this, though I'm not sure\n>> > if it's really a close fit, or whether it's been built out enough to\n>> > support this usage. From memory, it wants to work with tuplestores,\n>> > which are a bit heavier-weight than SPITupleTables.\n>>\n>> Thanks for this feedback! The EphemeralNamedRelation seems that it could be a good fit for what I'm looking for.\n>>\n>> However, I'm not quite so sure how I can query over the EphemeralNamedRelation using SQL? Could someone indicate where I can find an example?\n>\n> I have had see your discussion about node EphemeralNamedRelation with the Community.\n> Now, I want to use this node in SQL(for test), I have saw the manual but could not understand,\n> can you show me a example on how to use it in SQL?\n\nI missed this thread before. If you want to expose an ENR to SQL\nyou'll need to write some C code for now (unless someone has added\nsupport for other languages?). 
Try something like this (not tested):\n\nEphemeralNamedRelation enr = palloc0(sizeof(*enr));\n\nenr->md.name = \"my_magic_table\";\nenr->md.reliddesc = InvalidOid;\nenr->md.tupdesc = my_magic_table_tuple_descriptor;\nenr->md.enrtype = ENR_NAMED_TUPLESTORE;\nenr->md.enrtuples = how_many_tuples_to_tell_the_planner_we_have;\nenr->reldata = my_tupestorestate;\nrc = SPI_register_relation(enr);\nif (rc != SPI_OK_REL_REGISTER)\n explode();\n\nYou will need to come up with a TupleDesc that describes the columns\nin your magic table, and a Tuplestorestate that holds the tuples.\nAfter that you should be able to plan and execute read-only SQL\nqueries against that tuplestore using that name, via the usual\nSPI_xxx() interfaces. I'm not sure how you'd really do this though:\nyou might need to make a function that takes a query as a string, then\ndoes the above setup in a new SPI connection, and then executes the\nquery. This would probably be a lot more fun from a PL like Python.\n(If you wanted to create ENRs that are available to your top level\nconnection, I haven't looked into it but I suspect that'd require more\nmachinery than we have right now, and I'm not sure if it'd be a good\nidea.)\n\nIn theory, there could be a new type ENR_SPI_TUPLE_TABLE that could\nwork with SPITupleTable instead of Tuplestore. The idea was that we\nmight be able to do more clever things like that in the future, which\nis why we tried to make it at least a little bit general. One thing\nthat could be nice would be SQL Server style table variables; you\ncould have a functions that receive them as parameters, return them,\nand be able to insert/update/delete. That's a bit far fetched, but\ngives some idea of the reason we invented QueryEnvironment and passed\nto all the right places in the planner and executor (or perhaps not\nenough yet, we do occasionally find places that we forgot to pass\nit...). 
So yeah, to use an SPITupleTable now you'd need to insert its\ncontents into a Tuplestorestate.\n\nAs mentioned, this is the mechanism that is used to support SQL\nstandard \"transition tables\" in triggers. You can see that the\nfunction SPI_register_trigger_data() just does what I showed above to\nexpose all the transition tables to your trigger's SQL statements.\nThat part even works in Perl, Python, TCL triggers, but to make your\nown tables in higher level languages you'd need to expose more\ninterfaces and figure out sane ways to get TupleDescriptor and\ninteract with Tuplestore. If you want to see examples of SQL queries\ninside triggers that access transition tables, check out\nsrc/test/regress/expected/triggers.out and\nsrc/pl/plpython/expected/plpython_trigger.out.\n\n\n", "msg_date": "Fri, 20 Dec 2019 15:48:13 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is querying SPITupleTable with SQL possible?" } ]
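Thomas's outline above leaves one step to the reader: getting the rows of an SPITupleTable into the Tuplestorestate that the ENR machinery wants. The following is an untested server-side sketch in the spirit of his "not tested" snippet; it only compiles inside a backend extension, and the helper's name and the use of SPI_processed for the row count are our assumptions, not part of the documented SPI API.

```c
/*
 * Hypothetical glue, following Thomas Munro's sketch above: copy the
 * rows of an SPITupleTable into a Tuplestorestate and register that
 * as an ENR, so that a later SPI_execute("SELECT ... FROM name ...")
 * in the same SPI connection can scan it.
 */
#include "postgres.h"

#include "access/tupdesc.h"
#include "executor/spi.h"
#include "miscadmin.h"
#include "utils/queryenvironment.h"
#include "utils/tuplestore.h"

static void
register_spi_result_as_enr(SPITupleTable *tuptable, const char *name)
{
	EphemeralNamedRelation enr = palloc0(sizeof(EphemeralNamedRelationData));
	Tuplestorestate *tstore;
	uint64		i;

	/* Copy every result tuple; SPI_processed was set by the SPI_exec. */
	tstore = tuplestore_begin_heap(false, false, work_mem);
	for (i = 0; i < SPI_processed; i++)
		tuplestore_puttuple(tstore, tuptable->vals[i]);

	enr->md.name = pstrdup(name);
	enr->md.reliddesc = InvalidOid;
	enr->md.tupdesc = CreateTupleDescCopy(tuptable->tupdesc);
	enr->md.enrtype = ENR_NAMED_TUPLESTORE;
	enr->md.enrtuples = SPI_processed;
	enr->reldata = tstore;

	if (SPI_register_relation(enr) != SPI_OK_REL_REGISTER)
		elog(ERROR, "could not register \"%s\" as an ENR", name);
}
```

This mirrors what SPI_register_trigger_data() does for transition tables; as noted in the thread, the copy into a tuplestore is the price paid until something like an ENR_SPI_TUPLE_TABLE type exists.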
[ { "msg_contents": "Hi, \n\nI found wrong message output when ECPG preprocessed some SQL commands . \nFor example, ECPG application has following command, ECPG outputs \"unsupported feature will be passed to server\". \n-----------\nEXEC SQL CREATE SCHEMA IF NOT EXISTS hollywood;\n-----------\nI attached sample code to reproduce this problem. \n\n[Investigation]\nI think parse.pl has some problem. The following filters do not seem to work properly: \n src/interfaces/ecpg/preproc/parse.pl\n if ($feature_not_supported == 1)\n {\n\n # we found an unsupported feature, but we have to\n # filter out ExecuteStmt: CREATE OptTemp TABLE ...\n # because the warning there is only valid in some situations\n if ($flds->[0] ne 'create' || $flds->[2] ne 'table')\n {\n add_to_buffer('rules',\n 'mmerror(PARSE_ERROR, ET_WARNING, \"unsupported feature will be passed to server\");' );\n }\n $feature_not_supported = 0;\n }\n\n[Solution]\nI have two solutions for this. \n1) This problem occurs because filter does not work properly. \n So, by setting the filter conditions properly, wrong warning should not be output. \n However, we have to modify the conditions when the syntax in gram.y is changed. \n\n2) This problem occurs when a syntax is not supported under certain conditions like following: \n So, when \"if sentence\" is found inside rule, the warning should not be output. \n\nCreateSchemaStmt:\n | CREATE SCHEMA IF_P NOT EXISTS OptSchemaName AUTHORIZATION RoleSpec OptSchemaEltList\n {\n CreateSchemaStmt *n = makeNode(CreateSchemaStmt);\n\n if ($9 != NIL)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"CREATE SCHEMA IF NOT EXISTS cannot include schema elements\"),\n parser_errposition(@9)));\n\nI attached a draft patch of the solution 2). 
\n\nRegards, \nDaisuke, Higuchi", "msg_date": "Wed, 2 Oct 2019 04:10:43 +0000", "msg_from": "\"higuchi.daisuke@fujitsu.com\" <higuchi.daisuke@fujitsu.com>", "msg_from_op": true, "msg_subject": "[Bug fix] ECPG: ECPG preprocessor outputs \"unsupported feature will\n be passed to server\" even if the command is supported" } ]
[ { "msg_contents": "Hi,\n\nI noticed that some of the header files inclusion is not ordered as\nper the usual standard that is followed.\nThe attached patch contains the fix for the order in which the header\nfiles are included.\nLet me know your thoughts on the same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 2 Oct 2019 14:56:52 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Ordering of header file inclusion" }, { "msg_contents": "On Wed, Oct 2, 2019 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> I noticed that some of the header files inclusion is not ordered as\n> per the usual standard that is followed.\n> The attached patch contains the fix for the order in which the header\n> files are included.\n> Let me know your thoughts on the same.\n>\n\n+1. I think this will make an order of header inclusions consistent\nthroughout code. One thing which will be slightly tricky is we might\nnot be able to back-patch this as some of this belongs to a recent\nversion(s) and others to older versions as well. OTOH, I have not\ninvestigated how much of this is relevant to back branches. I think\nmost of these will apply to 12, but I am not sure if it is worth the\neffort to segregate the changes which apply to back branches. 
What do\nyou think?\n\n Few minor comments after a quick read:\n #include \"lib/ilist.h\"\n-\n+#include \"miscadmin.h\"\n\nI think we shouldn't remove the extra line as part of the above change.\n\n--- a/src/bin/psql/variables.c\n+++ b/src/bin/psql/variables.c\n@@ -8,10 +8,8 @@\n #include \"postgres_fe.h\"\n\n #include \"common.h\"\n-#include \"variables.h\"\n-\n #include \"common/logging.h\"\n-\n+#include \"variables.h\"\n\nSame as above.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Oct 2019 12:05:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Wed, Oct 2, 2019 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n>> I noticed that some of the header files inclusion is not ordered as\n>> per the usual standard that is followed.\n>> The attached patch contains the fix for the order in which the header\n>> files are included.\n>> Let me know your thoughts on the same.\n\n> +1.\n\nFWIW, I'm not on board with reordering system-header inclusions.\nSome platforms have (had?) ordering dependencies for those, and where\nthat's true, it's seldom alphabetical. It's only our own headers\nwhere we can safely expect that any arbitrary order will work.\n\n> I think we shouldn't remove the extra line as part of the above change.\n\nI would take out the blank lines between our own #includes. Those are\ntotally arbitrary and unnecessary. The whole point of style rules like\nthis one is to reduce the differences between the way one person might\nwrite something and the way that someone else might. 
Deciding to throw\nin a blank line is surely in the realm of unnecessary differences.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Oct 2019 10:49:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Tue, Oct 8, 2019 at 8:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Wed, Oct 2, 2019 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n> >> I noticed that some of the header files inclusion is not ordered as\n> >> per the usual standard that is followed.\n> >> The attached patch contains the fix for the order in which the header\n> >> files are included.\n> >> Let me know your thoughts on the same.\n>\n> > +1.\n>\n> FWIW, I'm not on board with reordering system-header inclusions.\n> Some platforms have (had?) ordering dependencies for those, and where\n> that's true, it's seldom alphabetical. It's only our own headers\n> where we can safely expect that any arbitrary order will work.\n>\n\nOkay, that makes sense. However, I noticed that ordering for\nsystem-header inclusions is somewhat random. For ex. nodeSubPlan.c,\ndatetime.c, etc. include limits.h first and then math.h whereas\nknapsack.c, float.c includes them in reverse order. There could be\nmore such inconsistencies and the probable reason is that we don't\nhave any specific rule, so different people decide to do it\ndifferently.\n\n> > I think we shouldn't remove the extra line as part of the above change.\n>\n> I would take out the blank lines between our own #includes.\n>\n\nOkay, that would be better, but doing it half-heartedly as done in\npatch might make it worse. 
So, it is better to remove blank lines\nbetween our own #includes in all cases.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Oct 2019 11:37:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Wed, Oct 9, 2019 at 11:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 8, 2019 at 8:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > On Wed, Oct 2, 2019 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >> I noticed that some of the header files inclusion is not ordered as\n> > >> per the usual standard that is followed.\n> > >> The attached patch contains the fix for the order in which the header\n> > >> files are included.\n> > >> Let me know your thoughts on the same.\n> >\n> > > +1.\n> >\n> > FWIW, I'm not on board with reordering system-header inclusions.\n> > Some platforms have (had?) ordering dependencies for those, and where\n> > that's true, it's seldom alphabetical. It's only our own headers\n> > where we can safely expect that any arbitrary order will work.\n> >\n>\n> Okay, that makes sense. However, I noticed that ordering for\n> system-header inclusions is somewhat random. For ex. nodeSubPlan.c,\n> datetime.c, etc. include limits.h first and then math.h whereas\n> knapsack.c, float.c includes them in reverse order. There could be\n> more such inconsistencies and the probable reason is that we don't\n> have any specific rule, so different people decide to do it\n> differently.\n>\n> > > I think we shouldn't remove the extra line as part of the above change.\n> >\n> > I would take out the blank lines between our own #includes.\n> >\n>\n> Okay, that would be better, but doing it half-heartedly as done in\n> patch might make it worse. 
So, it is better to remove blank lines\n> between our own #includes in all cases.\n>\nAttached patch contains the fix based on the comments suggested.\nI have added/deleted extra lines in certain places so that the\nreadability is better.\nI have removed the duplicate includes of certain header files in same\nsource file.\nIn some place postgres header files was getting included as\n<postgres_header.h>, I have changed it to \"postgres_header.h\".\nLet me know if any change is required.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 15 Oct 2019 22:57:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Tue, Oct 15, 2019 at 10:57 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Oct 9, 2019 at 11:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 8, 2019 at 8:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > > On Wed, Oct 2, 2019 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >> I noticed that some of the header files inclusion is not ordered as\n> > > >> per the usual standard that is followed.\n> > > >> The attached patch contains the fix for the order in which the header\n> > > >> files are included.\n> > > >> Let me know your thoughts on the same.\n> > >\n> > > > +1.\n> > >\n> > > FWIW, I'm not on board with reordering system-header inclusions.\n> > > Some platforms have (had?) ordering dependencies for those, and where\n> > > that's true, it's seldom alphabetical. It's only our own headers\n> > > where we can safely expect that any arbitrary order will work.\n> > >\n> >\n> > Okay, that makes sense. However, I noticed that ordering for\n> > system-header inclusions is somewhat random. For ex. nodeSubPlan.c,\n> > datetime.c, etc. 
include limits.h first and then math.h whereas\n> > knapsack.c, float.c includes them in reverse order. There could be\n> > more such inconsistencies and the probable reason is that we don't\n> > have any specific rule, so different people decide to do it\n> > differently.\n> >\n> > > > I think we shouldn't remove the extra line as part of the above change.\n> > >\n> > > I would take out the blank lines between our own #includes.\n> > >\n> >\n> > Okay, that would be better, but doing it half-heartedly as done in\n> > patch might make it worse. So, it is better to remove blank lines\n> > between our own #includes in all cases.\n> >\n> Attached patch contains the fix based on the comments suggested.\n>\n\nThanks for working on this. I will look into this in the coming few\ndays or during next CF. Can you please register it for the next CF\n(https://commitfest.postgresql.org/25/)?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Oct 2019 08:10:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Wed, Oct 16, 2019 at 8:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Thanks for working on this. I will look into this in the coming few\n> days or during next CF. 
Can you please register it for the next CF\n> (https://commitfest.postgresql.org/25/)?\n>\nThanks, I have added it to the commitfest.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Oct 2019 08:57:08 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Tue, Oct 15, 2019 at 10:57 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Attached patch contains the fix based on the comments suggested.\n> I have added/deleted extra lines in certain places so that the\n> readability is better.\n>\n\nHmm, I am not sure if that is better in all cases. It seems to be\nmaking the code look inconsistent at few places. See comments below:\n\n1.\ndiff --git a/contrib/bloom/blinsert.c b/contrib/bloom/blinsert.c\nindex 4b2186b..45215ba 100644\n--- a/contrib/bloom/blinsert.c\n+++ b/contrib/bloom/blinsert.c\n@@ -15,6 +15,7 @@\n #include \"access/genam.h\"\n #include \"access/generic_xlog.h\"\n #include \"access/tableam.h\"\n+#include \"bloom.h\"\n #include \"catalog/index.h\"\n #include \"miscadmin.h\"\n #include \"storage/bufmgr.h\"\n@@ -23,7 +24,6 @@\n #include \"utils/memutils.h\"\n #include \"utils/rel.h\"\n\n-#include \"bloom.h\"\n\n PG_MODULE_MAGIC;\n\ndiff --git a/contrib/bloom/blscan.c b/contrib/bloom/blscan.c\nindex 49e364a..4b9a2b8 100644\n--- a/contrib/bloom/blscan.c\n+++ b/contrib/bloom/blscan.c\n@@ -13,14 +13,14 @@\n #include \"postgres.h\"\n\n #include \"access/relscan.h\"\n-#include \"pgstat.h\"\n+#include \"bloom.h\"\n #include \"miscadmin.h\"\n+#include \"pgstat.h\"\n #include \"storage/bufmgr.h\"\n #include \"storage/lmgr.h\"\n #include \"utils/memutils.h\"\n #include \"utils/rel.h\"\n\n-#include \"bloom.h\"\n\n /*\n * Begin scan of bloom index.\n\nThe above changes lead to one extra line between the last header\ninclude and from where the actual code starts.\n\n2.\ndiff --git a/contrib/intarray/_int_bool.c 
b/contrib/intarray/_int_bool.c\nindex 91e2a80..0d2f667 100644\n--- a/contrib/intarray/_int_bool.c\n+++ b/contrib/intarray/_int_bool.c\n@@ -3,11 +3,11 @@\n */\n #include \"postgres.h\"\n\n+#include \"_int.h\"\n+\n #include \"miscadmin.h\"\n #include \"utils/builtins.h\"\n\n-#include \"_int.h\"\n-\n PG_FUNCTION_INFO_V1(bqarr_in);\n PG_FUNCTION_INFO_V1(bqarr_out);\n PG_FUNCTION_INFO_V1(boolop);\ndiff --git a/contrib/intarray/_int_gin.c b/contrib/intarray/_int_gin.c\nindex 7aebfec..d6241d4 100644\n--- a/contrib/intarray/_int_gin.c\n+++ b/contrib/intarray/_int_gin.c\n@@ -3,11 +3,11 @@\n */\n #include \"postgres.h\"\n\n+#include \"_int.h\"\n+\n #include \"access/gin.h\"\n #include \"access/stratnum.h\"\n\nWhy extra line after inclusion of _int.h?\n\n3.\ndiff --git a/contrib/intarray/_int_tool.c b/contrib/intarray/_int_tool.c\nindex fde8d15..75ad04e 100644\n--- a/contrib/intarray/_int_tool.c\n+++ b/contrib/intarray/_int_tool.c\n@@ -5,10 +5,10 @@\n\n #include <limits.h>\n\n-#include \"catalog/pg_type.h\"\n-\n #include \"_int.h\"\n\n+#include \"catalog/pg_type.h\"\n+\n\nWhy extra lines after both includes?\n\n4.\ndiff --git a/contrib/intarray/_intbig_gist.c b/contrib/intarray/_intbig_gist.c\nindex 2a20abe..87ea86c 100644\n--- a/contrib/intarray/_intbig_gist.c\n+++ b/contrib/intarray/_intbig_gist.c\n@@ -3,12 +3,12 @@\n */\n #include \"postgres.h\"\n\n+#include \"_int.h\"\n+\n #include \"access/gist.h\"\n #include \"access/stratnum.h\"\n #include \"port/pg_bitutils.h\"\n\n-#include \"_int.h\"\n-\n #define GETENTRY(vec,pos) ((GISTTYPE *)\nDatumGetPointer((vec)->vector[(pos)].key))\n /*\n ** _intbig methods\ndiff --git a/contrib/isn/isn.c b/contrib/isn/isn.c\nindex 0c2cac7..36bb582 100644\n--- a/contrib/isn/isn.c\n+++ b/contrib/isn/isn.c\n@@ -15,9 +15,9 @@\n #include \"postgres.h\"\n\n #include \"fmgr.h\"\n+#include \"isn.h\"\n #include \"utils/builtins.h\"\n\n-#include \"isn.h\"\n\nAgain extra spaces. 
I am not sure why you have extra spaces in a few cases.\n\nI haven't reviewed it completely, but generally, the changes seem to\nbe fine. Please see if you can be consistent in extra space between\nincludes. Kindly check the same throughout the patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Oct 2019 16:44:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "diff --git a/contrib/bloom/blcost.c b/contrib/bloom/blcost.c\nindex f9fe57f..6224735 100644\n--- a/contrib/bloom/blcost.c\n+++ b/contrib/bloom/blcost.c\n@@ -12,10 +12,10 @@\n */\n #include \"postgres.h\"\n\n+#include \"bloom.h\"\n #include \"fmgr.h\"\n #include \"utils/selfuncs.h\"\n\n-#include \"bloom.h\"\n\n /*\n * Estimate cost of bloom index scan.\n\nThis class of change I don't like.\n\nThe existing arrangement keeps \"other\" header files separate from the\nheader file of the module itself. 
It seems useful to keep that separate.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 19 Oct 2019 21:50:03 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "Hi,\n\nOn 2019-10-19 21:50:03 +0200, Peter Eisentraut wrote:\n> diff --git a/contrib/bloom/blcost.c b/contrib/bloom/blcost.c\n> index f9fe57f..6224735 100644\n> --- a/contrib/bloom/blcost.c\n> +++ b/contrib/bloom/blcost.c\n> @@ -12,10 +12,10 @@\n> */\n> #include \"postgres.h\"\n> \n> +#include \"bloom.h\"\n> #include \"fmgr.h\"\n> #include \"utils/selfuncs.h\"\n> \n> -#include \"bloom.h\"\n> \n> /*\n> * Estimate cost of bloom index scan.\n> \n> This class of change I don't like.\n> \n> The existing arrangement keeps \"other\" header files separate from the\n> header file of the module itself. It seems useful to keep that separate.\n\nIf we were to do so, we ought to put bloom.h first and clearly separated\nout, not last, as the former makes the bug of the header not being\nstandalone more obvious.\n\nI'm -1 on having a policy of putting the headers separate though, I feel\nthat's too much work, and there's too many cases where it's not that\nclear which header that should be.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 19 Oct 2019 14:14:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-10-19 21:50:03 +0200, Peter Eisentraut wrote:\n>> This class of change I don't like.\n>> The existing arrangement keeps \"other\" header files separate from the\n>> header file of the module itself. 
It seems useful to keep that separate.\n\n> If we were to do so, we ought to put bloom.h first and clearly seperated\n> out, not last, as the former makes the bug of the the header not being\n> standalone more obvious.\n\nWe have headerscheck and cpluspluscheck to catch that problem, so I don't\nthink that it needs to be a reason not to rationalize header inclusion\norder.\n\nI don't have a very strong opinion on whether modules outside the core\nbackend should separate their own headers from core-system headers.\nI think there's some argument for that, but it's not something we've\ndone consistently. And, as you say, there's no convention as to\nwhere we'd include local headers if we do separate them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Oct 2019 00:53:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Thu, Oct 17, 2019 at 4:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 15, 2019 at 10:57 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Attached patch contains the fix based on the comments suggested.\n> > I have added/deleted extra lines in certain places so that the\n> > readability is better.\n> >\n>\n> Hmm, I am not sure if that is better in all cases. It seems to be\n> making the code look inconsistent at few places. 
See comments below:\n>\n> 1.\n> diff --git a/contrib/bloom/blinsert.c b/contrib/bloom/blinsert.c\n> index 4b2186b..45215ba 100644\n> --- a/contrib/bloom/blinsert.c\n> +++ b/contrib/bloom/blinsert.c\n> @@ -15,6 +15,7 @@\n> #include \"access/genam.h\"\n> #include \"access/generic_xlog.h\"\n> #include \"access/tableam.h\"\n> +#include \"bloom.h\"\n> #include \"catalog/index.h\"\n> #include \"miscadmin.h\"\n> #include \"storage/bufmgr.h\"\n> @@ -23,7 +24,6 @@\n> #include \"utils/memutils.h\"\n> #include \"utils/rel.h\"\n>\n> -#include \"bloom.h\"\n>\n> PG_MODULE_MAGIC;\n>\n> diff --git a/contrib/bloom/blscan.c b/contrib/bloom/blscan.c\n> index 49e364a..4b9a2b8 100644\n> --- a/contrib/bloom/blscan.c\n> +++ b/contrib/bloom/blscan.c\n> @@ -13,14 +13,14 @@\n> #include \"postgres.h\"\n>\n> #include \"access/relscan.h\"\n> -#include \"pgstat.h\"\n> +#include \"bloom.h\"\n> #include \"miscadmin.h\"\n> +#include \"pgstat.h\"\n> #include \"storage/bufmgr.h\"\n> #include \"storage/lmgr.h\"\n> #include \"utils/memutils.h\"\n> #include \"utils/rel.h\"\n>\n> -#include \"bloom.h\"\n>\n> /*\n> * Begin scan of bloom index.\n>\n> The above changes lead to one extra line between the last header\n> include and from where the actual code starts.\n>\n> 2.\n> diff --git a/contrib/intarray/_int_bool.c b/contrib/intarray/_int_bool.c\n> index 91e2a80..0d2f667 100644\n> --- a/contrib/intarray/_int_bool.c\n> +++ b/contrib/intarray/_int_bool.c\n> @@ -3,11 +3,11 @@\n> */\n> #include \"postgres.h\"\n>\n> +#include \"_int.h\"\n> +\n> #include \"miscadmin.h\"\n> #include \"utils/builtins.h\"\n>\n> -#include \"_int.h\"\n> -\n> PG_FUNCTION_INFO_V1(bqarr_in);\n> PG_FUNCTION_INFO_V1(bqarr_out);\n> PG_FUNCTION_INFO_V1(boolop);\n> diff --git a/contrib/intarray/_int_gin.c b/contrib/intarray/_int_gin.c\n> index 7aebfec..d6241d4 100644\n> --- a/contrib/intarray/_int_gin.c\n> +++ b/contrib/intarray/_int_gin.c\n> @@ -3,11 +3,11 @@\n> */\n> #include \"postgres.h\"\n>\n> +#include \"_int.h\"\n> +\n> #include 
\"access/gin.h\"\n> #include \"access/stratnum.h\"\n>\n> Why extra line after inclusion of _int.h?\n>\n> 3.\n> diff --git a/contrib/intarray/_int_tool.c b/contrib/intarray/_int_tool.c\n> index fde8d15..75ad04e 100644\n> --- a/contrib/intarray/_int_tool.c\n> +++ b/contrib/intarray/_int_tool.c\n> @@ -5,10 +5,10 @@\n>\n> #include <limits.h>\n>\n> -#include \"catalog/pg_type.h\"\n> -\n> #include \"_int.h\"\n>\n> +#include \"catalog/pg_type.h\"\n> +\n>\n> Why extra lines after both includes?\n>\n> 4.\n> diff --git a/contrib/intarray/_intbig_gist.c b/contrib/intarray/_intbig_gist.c\n> index 2a20abe..87ea86c 100644\n> --- a/contrib/intarray/_intbig_gist.c\n> +++ b/contrib/intarray/_intbig_gist.c\n> @@ -3,12 +3,12 @@\n> */\n> #include \"postgres.h\"\n>\n> +#include \"_int.h\"\n> +\n> #include \"access/gist.h\"\n> #include \"access/stratnum.h\"\n> #include \"port/pg_bitutils.h\"\n>\n> -#include \"_int.h\"\n> -\n> #define GETENTRY(vec,pos) ((GISTTYPE *)\n> DatumGetPointer((vec)->vector[(pos)].key))\n> /*\n> ** _intbig methods\n> */\n> diff --git a/contrib/isn/isn.c b/contrib/isn/isn.c\n> index 0c2cac7..36bb582 100644\n> --- a/contrib/isn/isn.c\n> +++ b/contrib/isn/isn.c\n> @@ -15,9 +15,9 @@\n> #include \"postgres.h\"\n>\n> #include \"fmgr.h\"\n> +#include \"isn.h\"\n> #include \"utils/builtins.h\"\n>\n> -#include \"isn.h\"\n>\n> Again extra spaces. I am not sure why you have extra spaces in a few cases.\n>\n> I haven't reviewed it completely, but generally, the changes seem to\n> be fine. Please see if you can be consistent in extra space between\n> includes. Kindly check the same throughout the patch.\n>\nThanks for reviewing the patch.\nI have made an updated patch with the comments you suggested.\nI have split the patch into 3 patches so that the review can be simpler.\nThis patch also includes the changes suggested by Peter & Andres.\nI had just seen Tom Lane's suggestions regarding submodule header\nfile; this patch contains a fix based on Andres's suggestions. 
Let me know\nif that needs to be changed, I can update it.\nShould we make these changes only in the master branch or should we make them in\nback branches also. If we decide for back branches, I will check if\nthis patch can apply in back branches and if this patch cannot be\ndirectly applied I can make a separate patch for the back branch and\nsend it.\nPlease let me know your suggestions for any changes.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 20 Oct 2019 22:58:03 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Sun, Oct 20, 2019 at 1:20 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> diff --git a/contrib/bloom/blcost.c b/contrib/bloom/blcost.c\n> index f9fe57f..6224735 100644\n> --- a/contrib/bloom/blcost.c\n> +++ b/contrib/bloom/blcost.c\n> @@ -12,10 +12,10 @@\n> */\n> #include \"postgres.h\"\n>\n> +#include \"bloom.h\"\n> #include \"fmgr.h\"\n> #include \"utils/selfuncs.h\"\n>\n> -#include \"bloom.h\"\n>\n> /*\n> * Estimate cost of bloom index scan.\n>\n> This class of change I don't like.\n>\n> The existing arrangement keeps \"other\" header files separate from the\n> header file of the module itself. 
It seems useful to keep that separate.\n>\nThanks Peter for your thoughts, I have modified the changes based on\nyour suggestions.\nI have included the module header file in the beginning.\nThe changes are attached in the previous mail.\n\nPlease let me know your suggestions for any changes.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 20 Oct 2019 23:01:00 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Sun, Oct 20, 2019 at 2:44 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-10-19 21:50:03 +0200, Peter Eisentraut wrote:\n> > diff --git a/contrib/bloom/blcost.c b/contrib/bloom/blcost.c\n> > index f9fe57f..6224735 100644\n> > --- a/contrib/bloom/blcost.c\n> > +++ b/contrib/bloom/blcost.c\n> > @@ -12,10 +12,10 @@\n> > */\n> > #include \"postgres.h\"\n> >\n> > +#include \"bloom.h\"\n> > #include \"fmgr.h\"\n> > #include \"utils/selfuncs.h\"\n> >\n> > -#include \"bloom.h\"\n> >\n> > /*\n> > * Estimate cost of bloom index scan.\n> >\n> > This class of change I don't like.\n> >\n> > The existing arrangement keeps \"other\" header files separate from the\n> > header file of the module itself. 
It seems useful to keep that separate.\n>\n> If we were to do so, we ought to put bloom.h first and clearly separated\n> out, not last, as the former makes the bug of the header not being\n> standalone more obvious.\n>\n> I'm -1 on having a policy of putting the headers separate though, I feel\n> that's too much work, and there's too many cases where it's not that\n> clear which header that should be.\n>\nThanks Andres for reviewing the changes, I have modified the changes\nbased on your suggestions.\nI have included the module header file in the beginning.\nThe changes are attached in the previous mail.\n\nPlease let me know your suggestions for any changes.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 20 Oct 2019 23:02:31 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Sun, Oct 20, 2019 at 10:58 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Oct 17, 2019 at 4:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > I haven't reviewed it completely, but generally, the changes seem to\n> > be fine. Please see if you can be consistent in extra space between\n> > includes. Kindly check the same throughout the patch.\n> >\n> Thanks for reviewing the patch.\n> I have made an updated patch with the comments you suggested.\n> I have split the patch into 3 patches so that the review can be simpler.\n> This patch also includes the changes suggested by Peter & Andres.\n> I had just seen Tom Lane's suggestions regarding submodule header\n> file; this patch contains a fix based on Andres's suggestions. Let me know\n> if that needs to be changed, I can update it.\n> >\n>\n\nAFAICS, neither Andres nor Tom seems to be in favor of separating\nmodule headers. 
I am also not sure if we should try to make sure of\nthat in every case.\n\n> Should we make this changes only in master branch or should we make in\n> back branches also.\n>\n\nI am in favor of doing this only for HEAD, but I am fine if others\nwant to see for back branches as well and you can prepare the patches\nfor the same.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Oct 2019 08:47:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sun, Oct 20, 2019 at 10:58 PM vignesh C <vignesh21@gmail.com> wrote:\n>> Should we make this changes only in master branch or should we make in\n>> back branches also.\n\n> I am in favor of doing this only for HEAD, but I am fine if others\n> want to see for back branches as well and you can prepare the patches\n> for the same.\n\nThere is no good reason to back-patch this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Oct 2019 23:22:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Mon, Oct 21, 2019 at 8:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Oct 20, 2019 at 10:58 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, Oct 17, 2019 at 4:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > I haven't reviewed it completely, but generally, the changes seem to\n> > > be fine. Please see if you can be consistent in extra space between\n> > > includes. 
Kindly check the same throughout the patch.\n> > >\n> > Thanks for reviewing the patch.\n> > I have made an updated patch with comments you have suggested.\n> > I have split the patch into 3 patches so that the review can be simpler.\n> > This patch also includes the changes suggested by Peter & Andres.\n> > I had just seen seen Tom Lane's suggestions regarding submodule header\n> > file, this patch contains fix based on Andres suggestions. Let me know\n> > if that need to be changed, I can update it.\n> >\n>\n> AFAICS, none of Andres or Tom seems to be in favor of separating\n> module headers. I am also not sure if we should try to make sure of\n> that in every case.\n>\nThanks for the suggestions.\nUpdated patch contains the fix based on Tom Lane's Suggestion.\nLet me know your thoughts for further revision if required.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 21 Oct 2019 23:04:40 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Mon, Oct 21, 2019 at 11:04 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Oct 21, 2019 at 8:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> Thanks for the suggestions.\n> Updated patch contains the fix based on Tom Lane's Suggestion.\n> Let me know your thoughts for further revision if required.\n>\n\nFew comments on 0001-Ordering-of-header-files-in-contrib-dir-oct21.patch\n-----------------------------------------------------------------------------------------------------------\n1.\n--- a/contrib/isn/isn.c\n+++ b/contrib/isn/isn.c\n@@ -15,9 +15,9 @@\n #include \"postgres.h\"\n\n #include \"fmgr.h\"\n+#include \"isn.h\"\n #include \"utils/builtins.h\"\n\n-#include \"isn.h\"\n #include \"EAN13.h\"\n #include \"ISBN.h\"\n #include \"ISMN.h\"\n\nWhy only \"isn.h\" is moved and not others?\n\n2.\n+++ b/contrib/pgcrypto/px-crypt.c\n@@ -31,9 +31,8 @@\n\n #include 
\"postgres.h\"\n\n-#include \"px.h\"\n #include \"px-crypt.h\"\n-\n+#include \"px.h\"\n\nI think such ordering was fine. Forex. see, hash.c (hash.h was\nincluded first and then hash_xlog.h).\n\n3.\n+#include \"_int.h\"\n #include \"access/gist.h\"\n #include \"access/stratnum.h\"\n\n-#include \"_int.h\"\n-\n\nDo we need to give preference to '_'? Is it being done somewhere\nelse? It is not that this is wrong, just that I am not sure about\nthis.\n\n4.\n--- a/contrib/hstore/hstore_io.c\n+++ b/contrib/hstore/hstore_io.c\n@@ -8,6 +8,7 @@\n #include \"access/htup_details.h\"\n #include \"catalog/pg_type.h\"\n #include \"funcapi.h\"\n+#include \"hstore.h\"\n #include \"lib/stringinfo.h\"\n #include \"libpq/pqformat.h\"\n #include \"utils/builtins.h\"\n@@ -18,7 +19,6 @@\n #include \"utils/memutils.h\"\n #include \"utils/typcache.h\"\n\n-#include \"hstore.h\"\n\n PG_MODULE_MAGIC;\n\nThis created an extra white line.\n\n5.\nWhile reviewing, I noticed that in contrib/intarray/_int_op.c, there\nis an extra white line between postgres.h and its first include. I\nthink we can make that as well consistent.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Oct 2019 12:56:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Tue, Oct 22, 2019 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 21, 2019 at 11:04 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Oct 21, 2019 at 8:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > Thanks for the suggestions.\n> > Updated patch contains the fix based on Tom Lane's Suggestion.\n> > Let me know your thoughts for further revision if required.\n> >\n\nThis patch series has broadly changed the code to organize the header\nincludes in alphabetic order. 
It also makes sure that all files first\nincludes 'postgres.h'/'postgres_fe.h', system header includes and then\nPostgres header includes.\n\nIt also has a change where it seems that for local header includes, we\nhave used '<>' whereas quotes (\"\") should have been used. See,\necpg/compatlib/informix.c.\n\nI am planning to commit this as multiple commits (a. contrib modules,\nb. non-backend changes and c. backend changes) as there is some risk\nof buildfarm break. From my side, I will ensure that everything is\npassing on windows and centos. Any objections to this plan?\n\nReview for 0003-Ordering-of-header-files-remaining-dir-oct21\n-----------------------------------------------------------------------------------------\n1.\n--- a/src/bin/pg_basebackup/pg_recvlogical.c\n+++ b/src/bin/pg_basebackup/pg_recvlogical.c\n@@ -19,18 +19,16 @@\n #include <sys/select.h>\n #endif\n\n-/* local\nincludes */\n-#include \"streamutil.h\"\n-\n #include \"access/xlog_internal.h\"\n-#include \"common/file_perm.h\"\n #include \"common/fe_memutils.h\"\n+#include\n\"common/file_perm.h\"\n #include \"common/logging.h\"\n #include \"getopt_long.h\"\n #include \"libpq-fe.h\"\n #include \"libpq/pqsignal.h\"\n #include \"pqexpbuffer.h\"\n\n+#include \"streamutil.h\"\n\nExtra space before streamutil.h include.\n\n2.\n--- a/src/interfaces/libpq/fe-connect.c\n+++ b/src/interfaces/libpq/fe-connect.c\n@@ -21,11 +21,6 @@\n #include <time.h>\n #include <unistd.h>\n\n-#include\n\"libpq-fe.h\"\n-#include \"libpq-int.h\"\n-#include \"fe-auth.h\"\n-#include \"pg_config_paths.h\"\n-\n #ifdef WIN32\n #include \"win32.h\"\n #ifdef _WIN32_IE\n@@ -74,10\n+69,13 @@ static int ldapServiceLookup(const char *purl,\nPQconninfoOption *options,\n #include \"common/link-canary.h\"\n #include \"common/scram-\ncommon.h\"\n #include \"common/string.h\"\n+#include \"fe-auth.h\"\n+#include \"libpq-fe.h\"\n+#include \"libpq-int.h\"\n #include \"mb/pg_wchar.h\"\n+#include\n\"pg_config_paths.h\"\n\nAfter this change, 
the Windows build is failing for me. You forgot to\nmove the below code:\n#ifdef USE_LDAP\n#ifdef WIN32\n#include <winldap.h>\n#else\n/* OpenLDAP deprecates RFC 1823, but we want standard conformance */\n#define LDAP_DEPRECATED 1\n#include <ldap.h>\ntypedef struct timeval LDAP_TIMEVAL;\n#endif\nstatic int ldapServiceLookup(const char *purl, PQconninfoOption *options,\n PQExpBuffer errorMessage);\n#endif\n\nAll this needs to be moved after all the includes.\n\n3.\n /* ScanKeywordList lookup data for ECPG keywords */\n #include \"ecpg_kwlist_d.h\"\n+#include \"preproc_extern.h\"\n+#include \"preproc.h\"\n\nI think preproc.h include should be before preproc_extern.h due to the\nreason mentioned earlier.\n\n4.\n--- a/src/test/modules/worker_spi/worker_spi.c\n+++ b/src/test/modules/worker_spi/worker_spi.c\n@@ -22,24 +22,22 @@\n */\n #include \"postgres.h\"\n\n-/* These are always necessary for a bgworker */\n+/* these headers are used by this particular worker's code */\n+#include \"access/xact.h\"\n+#include \"executor/spi.h\"\n+#include \"fmgr.h\"\n+#include \"lib/stringinfo.h\"\n #include \"miscadmin.h\"\n+#include \"pgstat.h\"\n #include \"postmaster/bgworker.h\"\n #include \"storage/ipc.h\"\n #include \"storage/latch.h\"\n #include \"storage/lwlock.h\"\n #include \"storage/proc.h\"\n #include \"storage/shmem.h\"\n-\n-/* these headers are used by this particular worker's code */\n-#include \"access/xact.h\"\n-#include \"executor/spi.h\"\n\nI am skeptical of this change as it is very clearly written in\ncomments the reason why header includes are separated.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Oct 2019 15:41:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Tue, Oct 22, 2019 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Few comments on 
0001-Ordering-of-header-files-in-contrib-dir-oct21.patch\n> -----------------------------------------------------------------------------------------------------------\n> 1.\n> --- a/contrib/isn/isn.c\n> +++ b/contrib/isn/isn.c\n> @@ -15,9 +15,9 @@\n> #include \"postgres.h\"\n>\n> #include \"fmgr.h\"\n> +#include \"isn.h\"\n> #include \"utils/builtins.h\"\n>\n> -#include \"isn.h\"\n> #include \"EAN13.h\"\n> #include \"ISBN.h\"\n> #include \"ISMN.h\"\n>\n> Why only \"isn.h\" is moved and not others?\n>\nFixed.\nThe order is based on the ASCII table; uppercase letters come before\nlowercase letters.\n> 2.\n> +++ b/contrib/pgcrypto/px-crypt.c\n> @@ -31,9 +31,8 @@\n>\n> #include \"postgres.h\"\n>\n> -#include \"px.h\"\n> #include \"px-crypt.h\"\n> -\n> +#include \"px.h\"\n>\n> I think such ordering was fine. For example, see hash.c (hash.h was\n> included first and then hash_xlog.h).\n>\nThe order is based on the ASCII table.\nThe ASCII value of \".\" is 46 and the ASCII value of \"-\" is 45.\nHence I have placed them like:\n#include \"px-crypt.h\"\n#include \"px.h\"\nI have not made any changes for this. If still required I can modify.\n> 3.\n> +#include \"_int.h\"\n> #include \"access/gist.h\"\n> #include \"access/stratnum.h\"\n>\n> -#include \"_int.h\"\n> -\n>\n> Do we need to give preference to '_'? Is it being done somewhere\n> else? It is not that this is wrong, just that I am not sure about\n> this.\n>\nThe changes are done based on the ASCII table.\nThe ASCII value of \"_\" is 95.\nThe ASCII value of \"a\" is 97.\nHence _int.h is placed before access/gist.h.\nI have not made any changes for this. 
If still required I can modify.\n> 4.\n> --- a/contrib/hstore/hstore_io.c\n> +++ b/contrib/hstore/hstore_io.c\n> @@ -8,6 +8,7 @@\n> #include \"access/htup_details.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"funcapi.h\"\n> +#include \"hstore.h\"\n> #include \"lib/stringinfo.h\"\n> #include \"libpq/pqformat.h\"\n> #include \"utils/builtins.h\"\n> @@ -18,7 +19,6 @@\n> #include \"utils/memutils.h\"\n> #include \"utils/typcache.h\"\n>\n> -#include \"hstore.h\"\n>\n> PG_MODULE_MAGIC;\n>\n> This created an extra white line.\n>\nFixed.\n> 5.\n> While reviewing, I noticed that in contrib/intarray/_int_op.c, there\n> is an extra white line between postgres.h and its first include. I\n> think we can make that as well consistent.\n>\nFixed.\nThanks for the comments.\nAttached patch has the updated changes.\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Oct 2019 22:52:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Tue, Oct 22, 2019 at 3:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Review for 0003-Ordering-of-header-files-remaining-dir-oct21\n> -----------------------------------------------------------------------------------------\n> 1.\n> --- a/src/bin/pg_basebackup/pg_recvlogical.c\n> +++ b/src/bin/pg_basebackup/pg_recvlogical.c\n> @@ -19,18 +19,16 @@\n> #include <sys/select.h>\n> #endif\n>\n> -/* local\n> includes */\n> -#include \"streamutil.h\"\n> -\n> #include \"access/xlog_internal.h\"\n> -#include \"common/file_perm.h\"\n> #include \"common/fe_memutils.h\"\n> +#include\n> \"common/file_perm.h\"\n> #include \"common/logging.h\"\n> #include \"getopt_long.h\"\n> #include \"libpq-fe.h\"\n> #include \"libpq/pqsignal.h\"\n> #include \"pqexpbuffer.h\"\n>\n> +#include \"streamutil.h\"\n>\n> Extra space before streamutil.h include.\n>\nFixed.\n> 2.\n> --- a/src/interfaces/libpq/fe-connect.c\n> +++ 
b/src/interfaces/libpq/fe-connect.c\n> @@ -21,11 +21,6 @@\n> #include <time.h>\n> #include <unistd.h>\n>\n> -#include\n> \"libpq-fe.h\"\n> -#include \"libpq-int.h\"\n> -#include \"fe-auth.h\"\n> -#include \"pg_config_paths.h\"\n> -\n> #ifdef WIN32\n> #include \"win32.h\"\n> #ifdef _WIN32_IE\n> @@ -74,10\n> +69,13 @@ static int ldapServiceLookup(const char *purl,\n> PQconninfoOption *options,\n> #include \"common/link-canary.h\"\n> #include \"common/scram-\n> common.h\"\n> #include \"common/string.h\"\n> +#include \"fe-auth.h\"\n> +#include \"libpq-fe.h\"\n> +#include \"libpq-int.h\"\n> #include \"mb/pg_wchar.h\"\n> +#include\n> \"pg_config_paths.h\"\n>\n> After this change, the Windows build is failing for me. You forgot to\n> move the below code:\n> #ifdef USE_LDAP\n> #ifdef WIN32\n> #include <winldap.h>\n> #else\n> /* OpenLDAP deprecates RFC 1823, but we want standard conformance */\n> #define LDAP_DEPRECATED 1\n> #include <ldap.h>\n> typedef struct timeval LDAP_TIMEVAL;\n> #endif\n> static int ldapServiceLookup(const char *purl, PQconninfoOption *options,\n> PQExpBuffer errorMessage);\n> #endif\n>\n> All this needs to be moved after all the includes.\n>\nFixed. 
I don't have a Windows environment; let me know if you still face\nany issues.\n> 3.\n> /* ScanKeywordList lookup data for ECPG keywords */\n> #include \"ecpg_kwlist_d.h\"\n> +#include \"preproc_extern.h\"\n> +#include \"preproc.h\"\n>\n> I think preproc.h include should be before preproc_extern.h due to the\n> reason mentioned earlier.\n>\nFor this file the earlier order was also like that.\nAs per the ordering, preproc_extern.h should be before preproc.h.\nBut preproc.h has a dependency on preproc_extern.h.\nI have not made any changes for this.\nThe same is the case in c_keywords.c also.\n> 4.\n> --- a/src/test/modules/worker_spi/worker_spi.c\n> +++ b/src/test/modules/worker_spi/worker_spi.c\n> @@ -22,24 +22,22 @@\n> */\n> #include \"postgres.h\"\n>\n> -/* These are always necessary for a bgworker */\n> +/* these headers are used by this particular worker's code */\n> +#include \"access/xact.h\"\n> +#include \"executor/spi.h\"\n> +#include \"fmgr.h\"\n> +#include \"lib/stringinfo.h\"\n> #include \"miscadmin.h\"\n> +#include \"pgstat.h\"\n> #include \"postmaster/bgworker.h\"\n> #include \"storage/ipc.h\"\n> #include \"storage/latch.h\"\n> #include \"storage/lwlock.h\"\n> #include \"storage/proc.h\"\n> #include \"storage/shmem.h\"\n> -\n> -/* these headers are used by this particular worker's code */\n> -#include \"access/xact.h\"\n> -#include \"executor/spi.h\"\n>\n> I am skeptical of this change as it is very clearly written in\n> comments the reason why header includes are separated.\nFixed. 
Have reverted this change.\nAttached patch has the updated changes.\n> --\n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Oct 2019 23:05:12 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Tue, Oct 22, 2019 at 11:05 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Oct 22, 2019 at 3:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Review for 0003-Ordering-of-header-files-remaining-dir-oct21\n> > -----------------------------------------------------------------------------------------\n> > 1.\n> > --- a/src/bin/pg_basebackup/pg_recvlogical.c\n> > +++ b/src/bin/pg_basebackup/pg_recvlogical.c\n> > @@ -19,18 +19,16 @@\nI have compiled and verified make check & make check-world in the following:\nCentOS Linux release 7.7.1908\nRed Hat Enterprise Linux Server release 7.1\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Oct 2019 10:12:32 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Tue, Oct 22, 2019 at 3:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 22, 2019 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Oct 21, 2019 at 11:04 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Mon, Oct 21, 2019 at 8:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> This patch series has broadly changed the code to organize the header\n> includes in alphabetic order. It also makes sure that all files first\n> includes 'postgres.h'/'postgres_fe.h', system header includes and then\n> Postgres header includes.\n>\n> It also has a change where it seems that for local header includes, we\n> have used '<>' whereas quotes (\"\") should have been used. 
See,\n> ecpg/compatlib/informix.c.\n>\n> I am planning to commit this as multiple commits (a. contrib modules,\n> b. non-backend changes and c. backend changes) as there is some risk\n> of buildfarm break. From my side, I will ensure that everything is\n> passing on windows and centos. Any objections to this plan?\n>\n\nAttached are patches for (a) and (b) after another round of review and\nfixes by Vignesh. I am planning to commit the first one (a) tomorrow\nmorning and then if everything is fine on buildfarm, I will commit the\nsecond one (b) and once both are good, I will look into the third one\n(c). Another pair of eyes on these patches would be good.\n\nJust to be clear, the basic rule we follow here is to always first\ninclude 'postgres.h' or 'postgres_fe.h' whichever is applicable, then\nsystem header includes and then Postgres header includes. In this, we\nalso follow that all the Postgres header includes are in order based\non their ASCII value. We generally follow these rules, but the code\nhas deviated in many places. These commits make these rules\nconsistent for the entire code.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 23 Oct 2019 10:23:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Wed, Oct 23, 2019 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Attached are patches for (a) and (b) after another round of review and\n> fixes by Vignesh. I am planning to commit the first one (a) tomorrow\n> morning and then if everything is fine on buildfarm, I will commit the\n> second one (b) and once both are good, I will look into the third one\n> (c). Another pair of eyes on these patches would be good.\n>\n\nI have pushed the first one. 
I'll wait for some time and probably\ncommit the second one tomorrow.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Oct 2019 14:43:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Thu, Oct 24, 2019 at 2:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 23, 2019 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Attached are patches for (a) and (b) after another round of review and\n> > fixes by Vignesh. I am planning to commit the first one (a) tomorrow\n> > morning and then if everything is fine on buildfarm, I will commit the\n> > second one (b) and once both are good, I will look into the third one\n> > (c). Another pair of eyes on these patches would be good.\n> >\n>\n> I have pushed the first one. I'll wait for some time and probably\n> commit the second one tomorrow.\n>\n\nI have committed the second one as well. Now, the remaining patch is\nfor the entire backend. I will pick it up after a few days.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Oct 2019 15:28:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Fri, Oct 25, 2019 at 3:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 24, 2019 at 2:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Oct 23, 2019 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Attached are patches for (a) and (b) after another round of review and\n> > > fixes by Vignesh. I am planning to commit the first one (a) tomorrow\n> > > morning and then if everything is fine on buildfarm, I will commit the\n> > > second one (b) and once both are good, I will look into the third one\n> > > (c). 
Another pair of eyes on these patches would be good.\n> > >\n> >\n> > I have pushed the first one. I'll wait for some time and probably\n> > commit the second one tomorrow.\n> >\n>\n> I have committed the second one as well. Now, the remaining patch is\n> for the entire backend. I will pick it up after a few days.\n>\nThanks Amit for committing the changes.\nI found a couple more inconsistencies; the attached patch includes\nthe fix for the same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 2 Nov 2019 07:42:06 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Sat, Nov 2, 2019 at 7:42 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> Thanks Amit for committing the changes.\n> I found a couple more inconsistencies; the attached patch includes\n> the fix for the same.\n>\nThanks for the patch. It seems you'll have to rebase the patch as it\ndoesn't apply on the latest HEAD. Apart from that, the changes look\ngood to me.\n\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Nov 2019 14:22:32 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Fri, Nov 8, 2019 at 2:22 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>\n> On Sat, Nov 2, 2019 at 7:42 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > >\n> > Thanks Amit for committing the changes.\n> > I found a couple more inconsistencies; the attached patch includes\n> > the fix for the same.\n> >\n> Thanks for the patch. It seems you'll have to rebase the patch as it\n> doesn't apply on the latest HEAD. Apart from that, the changes look\n> good to me.\n>\n\nThanks Kuntal for reviewing the patch. 
I have attached the patch which\nhas been rebased on the latest HEAD.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 10 Nov 2019 17:30:44 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Sun, Nov 10, 2019 at 5:30 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Nov 8, 2019 at 2:22 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n> >\n> > On Sat, Nov 2, 2019 at 7:42 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > >\n> > > Thanks Amit for committing the changes.\n> > > I found couple of more inconsistencies, the attached patch includes\n> > > the fix for the same.\n> > >\n> > Thanks for the patch. It seems you've to rebase the patch as it\n> > doesn't apply on the latest HEAD. Apart from that, the changes looks\n> > good to me.\n> >\n>\n> Thanks Kuntal for reviewing the patch. I have attached the patch which\n> has been rebased on the latest HEAD.\n>\n\nThanks for the latest patch. I will look into this today.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Nov 2019 07:53:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Sun, Nov 10, 2019 at 5:30 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n[review_latest_patch]:\n\nDo we want to consider the ordering of map file inclusions as well\n(see the changes pointed out below)? If so, what all we should\nvalidate, is compilation of these modules sufficient? 
Tom, anyone, do\nyou have any opinion on this?\n\n1.\n utf8_and_cyrillic.c\n\n #include \"fmgr.h\"\n #include \"mb/pg_wchar.h\"\n-#include \"../../Unicode/utf8_to_koi8r.map\"\n #include \"../../Unicode/koi8r_to_utf8.map\"\n-#include \"../../Unicode/utf8_to_koi8u.map\"\n #include \"../../Unicode/koi8u_to_utf8.map\"\n+#include \"../../Unicode/utf8_to_koi8r.map\"\n+#include \"../../Unicode/utf8_to_koi8u.map\"\n\n PG_MODULE_MAGIC;\n\n2.\nutf8_and_iso8859.c\n..\n #include \"../../Unicode/iso8859_13_to_utf8.map\"\n #include \"../../Unicode/iso8859_14_to_utf8.map\"\n #include \"../../Unicode/iso8859_15_to_utf8.map\"\n+#include \"../../Unicode/iso8859_16_to_utf8.map\"\n #include \"../../Unicode/iso8859_2_to_utf8.map\"\n #include \"../../Unicode/iso8859_3_to_utf8.map\"\n #include \"../../Unicode/iso8859_4_to_utf8.map\"\n@@ -39,7 +41,6 @@\n #include \"../../Unicode/utf8_to_iso8859_7.map\"\n #include \"../../Unicode/utf8_to_iso8859_8.map\"\n #include \"../../Unicode/utf8_to_iso8859_9.map\"\n-#include \"../../Unicode/iso8859_16_to_utf8.map\"\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Nov 2019 11:35:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Mon, Nov 11, 2019 at 11:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Nov 10, 2019 at 5:30 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> [review_latest_patch]:\n>\n> Do we want to consider the ordering of map file inclusions as well\n> (see the changes pointed out below)? If so, what all we should\n> validate, is compilation of these modules sufficient? 
Tom, anyone, do\n> you have any opinion on this?\n>\n\nEven I don't know how to validate the above changes by some test\napplication, other than by compiling.\n\n> 1.\n> utf8_and_cyrillic.c\n>\n> #include \"fmgr.h\"\n> #include \"mb/pg_wchar.h\"\n> -#include \"../../Unicode/utf8_to_koi8r.map\"\n> #include \"../../Unicode/koi8r_to_utf8.map\"\n> -#include \"../../Unicode/utf8_to_koi8u.map\"\n> #include \"../../Unicode/koi8u_to_utf8.map\"\n> +#include \"../../Unicode/utf8_to_koi8r.map\"\n> +#include \"../../Unicode/utf8_to_koi8u.map\"\n>\n> PG_MODULE_MAGIC;\n>\n> 2.\n> utf8_and_iso8859.c\n> ..\n> #include \"../../Unicode/iso8859_13_to_utf8.map\"\n> #include \"../../Unicode/iso8859_14_to_utf8.map\"\n> #include \"../../Unicode/iso8859_15_to_utf8.map\"\n> +#include \"../../Unicode/iso8859_16_to_utf8.map\"\n> #include \"../../Unicode/iso8859_2_to_utf8.map\"\n> #include \"../../Unicode/iso8859_3_to_utf8.map\"\n> #include \"../../Unicode/iso8859_4_to_utf8.map\"\n> @@ -39,7 +41,6 @@\n> #include \"../../Unicode/utf8_to_iso8859_7.map\"\n> #include \"../../Unicode/utf8_to_iso8859_8.map\"\n> #include \"../../Unicode/utf8_to_iso8859_9.map\"\n> -#include \"../../Unicode/iso8859_16_to_utf8.map\"\n>\n\nThanks Amit for your comments. Please find the updated patch which\ndoes not include the changes mentioned above. I will post a separate\npatch for these changes based on the response from others.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 12 Nov 2019 06:33:40 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Tue, Nov 12, 2019 at 6:33 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n>\n> Thanks Amit for your comments. Please find the updated patch which\n> does not include the changes mentioned above.\n>\n\nThanks for working on this.
I have pushed your latest patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Nov 2019 11:19:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Tue, Nov 12, 2019 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 12, 2019 at 6:33 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> >\n> > Thanks Amit for your comments. Please find the updated patch which\n> > does not include the changes mentioned above.\n> >\n>\n> Thanks for working on this. I have pushed your latest patch.\n>\n\nThanks Amit for pushing the patch.
I have re-verified and found that\n> changes need to be done in few more places. The main changes are made\n> in the header file and plpython source files. The attached patch\n> handles the same. I have verified make check and make check-world\n> including --with-python & --with-perl in the following:\n> CentOS Linux release 7.7.1908\n> Red Hat Enterprise Linux Server release 7.1\n>\n> I have verified including --llvm in CentOS Linux release 7.7.1908.\n>\n\nThanks for finding the remaining places, the patch looks good to me.\nI hope this covers the entire code. BTW, are you using some script to\nfind this or is this a result of manual inspection of code? I have\nmodified the commit message in the attached patch. I will commit this\nearly next week unless someone else wants to review it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Nov 2019 14:10:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Thu, Nov 21, 2019 at 2:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Nov 16, 2019 at 7:01 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Nov 12, 2019 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 12, 2019 at 6:33 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Thanks Amit for your comments. Please find the updated patch which\n> > > > does not include the changes mentioned above.\n> > > >\n> > >\n> > > Thanks for working on this. I have pushed your latest patch.\n> > >\n> >\n> > Thanks Amit for pushing the patch. I have re-verified and found that\n> > changes need to be done in few more places. The main changes are made\n> > in the header file and plpython source files. The attached patch\n> > handles the same. 
I have verified make check and make check-world\n> > including --with-python & --with-perl in the following:\n> > CentOS Linux release 7.7.1908\n> > Red Hat Enterprise Linux Server release 7.1\n> >\n> > I have verified including --llvm in CentOS Linux release 7.7.1908.\n> >\n>\n> Thanks for finding the remaining places, the patch looks good to me.\n> I hope this covers the entire code. BTW, are you using some script to\n> find this or is this a result of manual inspection of code? I have\n> modified the commit message in the attached patch. I will commit this\n> early next week unless someone else wants to review it.\n>\n\nI have used script to verify if the inclusions are sorted. There are\nfew files which I did not modify intentionally, they are mainly like\nthe below type as in uuid-ossp.c:\n#include \"postgres.h\"\n\n#include \"fmgr.h\"\n#include \"port/pg_bswap.h\"\n#include \"utils/builtins.h\"\n#include \"utils/uuid.h\"\n\n/*\n * It's possible that there's more than one uuid.h header file present.\n * We expect configure to set the HAVE_ symbol for only the one we want.\n *\n * BSD includes a uuid_hash() function that conflicts with the one in\n * builtins.h; we #define it out of the way.\n */\n#define uuid_hash bsd_uuid_hash\n\n#if defined(HAVE_UUID_H)\n#include <uuid.h>\n#elif defined(HAVE_OSSP_UUID_H)\n#include <ossp/uuid.h>\n#elif defined(HAVE_UUID_UUID_H)\n#include <uuid/uuid.h>\n\n\nAfter the inclusion they have define and further include based on #if\ndefined. In few cases I had seen the include happens at the end of the\nfile like in regcomp.c as there may be impact. I felt it is better not\nto change these files. 
Let me know your thoughts on the same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Nov 2019 09:44:20 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> After the inclusion they have define and further include based on #if\n> defined. In few cases I had seen the include happens at the end of the\n> file like in regcomp.c as there may be impact. I felt it is better not\n> to change these files. Let me know your thoughts on the same.\n\nI think the point of this patch series is just to make cosmetic\nadjustments in places where people have randomly failed to maintain\nalphabetic order of a consecutive group of #include's. Messing with\nexamples like the above is way out of scope, if you ask me --- it\nentails more analysis, and more risk of breakage, than a purely\ncosmetic goal is worth.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Nov 2019 09:37:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Fri, Nov 22, 2019 at 8:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > After the inclusion they have define and further include based on #if\n> > defined. In few cases I had seen the include happens at the end of the\n> > file like in regcomp.c as there may be impact. I felt it is better not\n> > to change these files. Let me know your thoughts on the same.\n>\n> I think the point of this patch series is just to make cosmetic\n> adjustments in places where people have randomly failed to maintain\n> alphabetic order of a consecutive group of #include's. 
Messing with\n> examples like the above is way out of scope, if you ask me --- it\n> entails more analysis, and more risk of breakage, than a purely\n> cosmetic goal is worth.\n>\n\n+1. I agree with what Tom said, so let's leave such things as it is.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 23 Nov 2019 11:23:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" }, { "msg_contents": "On Thu, Nov 21, 2019 at 2:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Thanks for finding the remaining places, the patch looks good to me.\n> I hope this covers the entire code. BTW, are you using some script to\n> find this or is this a result of manual inspection of code? I have\n> modified the commit message in the attached patch. I will commit this\n> early next week unless someone else wants to review it.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Nov 2019 17:38:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ordering of header file inclusion" } ]
[ { "msg_contents": "Regarding the previous thread and commit here:\nhttps://www.postgresql.org/message-id/flat/20180713162815.GA3835%40momjian.us\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=60e3bd1d7f92430b24b710ecf0559656eb8ed499\n\nI'm suggesting to reformat the warning, which I found to be misleading:\n\n|could not load library \"$libdir/pgfincore\": ERROR: could not access file \"$libdir/pgfincore\": No such file or directory\n|Database: postgres\n|Database: too\n\nTo me that reads as \"error message\" followed by successful processing of two,\nnamed database, and not \"error message followed by list of databases for which\nthat error was experienced\". Essentially, the database names are themselves\nthe \"error\", and the message is a prefix indicating the library version; but\nnormally, error-looking things are output without a \"prefix\", since they\nweren't anticipated.\n\nThe existing check is optimized to check each library once, but then outputs\neach database which would try to load it. That's an implementation detail, but\nadds to confusion, since it shows a single error-looking thing which might\napply to multiple DBs (not obvious to me that it's associated with an DB at\nall). 
That leads me to believe that after I \"DROP EXTENSION\" once, I can\nreasonably expect the upgrade to complete, which has a good chance of being\nwrong, and is exactly what the patch was intended to avoid :(\n\nTo reproduce:\n\n$ /usr/pgsql-11/bin/initdb -D ./pgtestlib11\n$ /usr/pgsql-12/bin/initdb -D ./pgtestlib12\n$ /usr/pgsql-11/bin/pg_ctl -D ./pgtestlib11 -o '-c port=5678 -c unix_socket_directories=/tmp' start\n$ psql postgres -h /tmp -p 5678 -c 'CREATE EXTENSION pgfincore' -c 'CREATE DATABASE too'\n$ psql too -h /tmp -p 5678 -c 'CREATE EXTENSION pgfincore'\n$ /usr/pgsql-11/bin/pg_ctl -D ./pgtestlib11 stop\n$ /usr/pgsql-12/bin/pg_upgrade -b /usr/pgsql-11/bin -B /usr/pgsql-12/bin -d ./pgtestlib11 -D pgtestlib12\n$ cat loadable_libraries.txt \nCould not load library \"$libdir/pgfincore\": ERROR: could not access file \"$libdir/pgfincore\": No such file or directory\nDatabase: postgres\nDatabase: too\n\nI concede that the situation is clearer if there are multiple libraries causing\nerrors, especially in overlapping list of databases:\n\n|[pryzbyj@database ~]$ cat loadable_libraries.txt \n|could not load library \"$libdir/pg_repack\": ERROR: could not access file \"$libdir/pg_repack\": No such file or directory\n|Database: postgres\n|Database: too\n|could not load library \"$libdir/pgfincore\": ERROR: could not access file \"$libdir/pgfincore\": No such file or directory\n|Database: postgres\n|Database: too\n\nI think the list of databases should be formatted to indicate its association\nwith the preceding error by indentation and verbage, or larger refactoring to\npresent in a list, like:\n\"Databases with library which failed to load: %s: %s\",\n\tPQerrorMessage(conn), list_of_dbs_loading_that_lib\n\nJustin", "msg_date": "Wed, 2 Oct 2019 12:23:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "On Wed, Oct 2, 2019 at 12:23:37PM -0500, Justin 
Pryzby wrote:\n> Regarding the previous thread and commit here:\n> https://www.postgresql.org/message-id/flat/20180713162815.GA3835%40momjian.us\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=60e3bd1d7f92430b24b710ecf0559656eb8ed499\n> \n> I'm suggesting to reformat the warning, which I found to be misleading:\n> \n> |could not load library \"$libdir/pgfincore\": ERROR: could not access file \"$libdir/pgfincore\": No such file or directory\n> |Database: postgres\n> |Database: too\n> \n> To me that reads as \"error message\" followed by successful processing of two,\n> named database, and not \"error message followed by list of databases for which\n> that error was experienced\". Essentially, the database names are themselves\n> the \"error\", and the message is a prefix indicating the library version; but\n> normally, error-looking things are output without a \"prefix\", since they\n> weren't anticipated.\n> \n> The existing check is optimized to check each library once, but then outputs\n> each database which would try to load it. That's an implementation detail, but\n> adds to confusion, since it shows a single error-looking thing which might\n> apply to multiple DBs (not obvious to me that it's associated with an DB at\n> all). That leads me to believe that after I \"DROP EXTENSION\" once, I can\n> reasonably expect the upgrade to complete, which has a good chance of being\n> wrong, and is exactly what the patch was intended to avoid :(\n\nUnderstood. This is a general problem with the way pg_upgrade displays\nerrors and the databases/objects associated with them. The attached\npatch fixes the output text to say \"in database\", e.g.:\n\n Could not load library \"$libdir/pgfincore\": ERROR: could not access file \"$libdir/pgfincore\": No such file or directory\n in database: postgres\n in database: too\n\nWould intenting help too? 
I am inclined to fix this only head, and not\nto backpatch the change.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +", "msg_date": "Fri, 4 Oct 2019 17:37:46 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "On Fri, Oct 04, 2019 at 05:37:46PM -0400, Bruce Momjian wrote:\n> On Wed, Oct 2, 2019 at 12:23:37PM -0500, Justin Pryzby wrote:\n> > Regarding the previous thread and commit here:\n> > https://www.postgresql.org/message-id/flat/20180713162815.GA3835%40momjian.us\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=60e3bd1d7f92430b24b710ecf0559656eb8ed499\n> > \n> > I'm suggesting to reformat the warning, which I found to be misleading:\n> \n> Understood. This is a general problem with the way pg_upgrade displays\n> errors and the databases/objects associated with them. The attached\n> patch fixes the output text to say \"in database\", e.g.:\n> \n> Could not load library \"$libdir/pgfincore\": ERROR: could not access file \"$libdir/pgfincore\": No such file or directory\n> in database: postgres\n> in database: too\n> \n> Would intenting help too? I am inclined to fix this only head, and not\n> to backpatch the change.\n\nYes, indenting would also help.\n\nI would argue to include in 12.1, since 12 is what most everyone will use for\nupgrades, and patch for .1 will help people upgrading for 11 of the next 12\nmonths. 
(But, your patch is more general than mine).\n\nThanks,\nJustin\n\n\n", "msg_date": "Fri, 4 Oct 2019 17:40:08 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "On Fri, Oct 4, 2019 at 05:40:08PM -0500, Justin Pryzby wrote:\n> On Fri, Oct 04, 2019 at 05:37:46PM -0400, Bruce Momjian wrote:\n> > On Wed, Oct 2, 2019 at 12:23:37PM -0500, Justin Pryzby wrote:\n> > > Regarding the previous thread and commit here:\n> > > https://www.postgresql.org/message-id/flat/20180713162815.GA3835%40momjian.us\n> > > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=60e3bd1d7f92430b24b710ecf0559656eb8ed499\n> > > \n> > > I'm suggesting to reformat the warning, which I found to be misleading:\n> > \n> > Understood. This is a general problem with the way pg_upgrade displays\n> > errors and the databases/objects associated with them. The attached\n> > patch fixes the output text to say \"in database\", e.g.:\n> > \n> > Could not load library \"$libdir/pgfincore\": ERROR: could not access file \"$libdir/pgfincore\": No such file or directory\n> > in database: postgres\n> > in database: too\n> > \n> > Would intenting help too? I am inclined to fix this only head, and not\n> > to backpatch the change.\n> \n> Yes, indenting would also help.\n> \n> I would argue to include in 12.1, since 12 is what most everyone will use for\n> upgrades, and patch for .1 will help people upgrading for 11 of the next 12\n> months. (But, your patch is more general than mine).\n\nNo, there might be tools that depend on the existing format, and this is\nthe first report of confusion I have read.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 4 Oct 2019 20:15:24 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Fri, Oct 4, 2019 at 05:40:08PM -0500, Justin Pryzby wrote:\n>> I would argue to include in 12.1, since 12 is what most everyone will use for\n>> upgrades, and patch for .1 will help people upgrading for 11 of the next 12\n>> months. (But, your patch is more general than mine).\n\n> No, there might be tools that depend on the existing format, and this is\n> the first report of confusion I have read.\n\nTranslations will also lag behind any such change. Speaking of which,\nit might be a good idea to include some translator: annotations to\nhelp translators understand what context these fragmentary phrases\nare used in. I'd actually say that my biggest concern with these\nmessages is whether they can translate into something sane.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Oct 2019 23:55:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "On Fri, Oct 4, 2019 at 11:55:21PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Fri, Oct 4, 2019 at 05:40:08PM -0500, Justin Pryzby wrote:\n> >> I would argue to include in 12.1, since 12 is what most everyone will use for\n> >> upgrades, and patch for .1 will help people upgrading for 11 of the next 12\n> >> months. (But, your patch is more general than mine).\n> \n> > No, there might be tools that depend on the existing format, and this is\n> > the first report of confusion I have read.\n> \n> Translations will also lag behind any such change. 
Speaking of which,\n> it might be a good idea to include some translator: annotations to\n> help translators understand what context these fragmentary phrases\n> are used in. I'd actually say that my biggest concern with these\n> messages is whether they can translate into something sane.\n\nUh, I looked at the pg_ugprade code and the error message is:\n\n pg_fatal(\"Your installation contains \\\"contrib/isn\\\" functions which rely on the\\n\"\n \"bigint data type. Your old and new clusters pass bigint values\\n\"\n \"differently so this cluster cannot currently be upgraded. You can\\n\"\n \"manually upgrade databases that use \\\"contrib/isn\\\" facilities and remove\\n\"\n \"\\\"contrib/isn\\\" from the old cluster and restart the upgrade. A list of\\n\"\n \"the problem functions is in the file:\\n\"\n \" %s\\n\\n\", output_path);\n\nand the \"in database\" (which I have changed to capitalized \"In database\"\nin the attached patch), looks like:\n\n fprintf(script, \"In database: %s\\n\", active_db->db_name);\n\nmeaning it _isn't_ an output error message, but rather something that\nappears in an error file. I don't think either of these are translated.\nIs that wrong?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +", "msg_date": "Mon, 7 Oct 2019 13:47:57 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "On Mon, Oct 7, 2019 at 01:47:57PM -0400, Bruce Momjian wrote:\n> On Fri, Oct 4, 2019 at 11:55:21PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Fri, Oct 4, 2019 at 05:40:08PM -0500, Justin Pryzby wrote:\n> > >> I would argue to include in 12.1, since 12 is what most everyone will use for\n> > >> upgrades, and patch for .1 will help people upgrading for 11 of the next 12\n> > >> months. (But, your patch is more general than mine).\n> > \n> > > No, there might be tools that depend on the existing format, and this is\n> > > the first report of confusion I have read.\n> > \n> > Translations will also lag behind any such change. Speaking of which,\n> > it might be a good idea to include some translator: annotations to\n> > help translators understand what context these fragmentary phrases\n> > are used in. I'd actually say that my biggest concern with these\n> > messages is whether they can translate into something sane.\n> \n> Uh, I looked at the pg_ugprade code and the error message is:\n> \n> pg_fatal(\"Your installation contains \\\"contrib/isn\\\" functions which rely on the\\n\"\n> \"bigint data type. Your old and new clusters pass bigint values\\n\"\n> \"differently so this cluster cannot currently be upgraded. You can\\n\"\n> \"manually upgrade databases that use \\\"contrib/isn\\\" facilities and remove\\n\"\n> \"\\\"contrib/isn\\\" from the old cluster and restart the upgrade. 
A list of\\n\"\n> \"the problem functions is in the file:\\n\"\n> \" %s\\n\\n\", output_path);\n> \n> and the \"in database\" (which I have changed to capitalized \"In database\"\n> in the attached patch), looks like:\n> \n> fprintf(script, \"In database: %s\\n\", active_db->db_name);\n> \n> meaning it _isn't_ an output error message, but rather something that\n> appears in an error file. I don't think either of these are translated.\n> Is that wrong?\n\nPatch applied to head.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 8 Oct 2019 22:17:29 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "On 2019-Oct-07, Bruce Momjian wrote:\n\n> Uh, I looked at the pg_ugprade code and the error message is:\n> \n> pg_fatal(\"Your installation contains \\\"contrib/isn\\\" functions which rely on the\\n\"\n> \"bigint data type. Your old and new clusters pass bigint values\\n\"\n> \"differently so this cluster cannot currently be upgraded. You can\\n\"\n> \"manually upgrade databases that use \\\"contrib/isn\\\" facilities and remove\\n\"\n> \"\\\"contrib/isn\\\" from the old cluster and restart the upgrade. A list of\\n\"\n> \"the problem functions is in the file:\\n\"\n> \" %s\\n\\n\", output_path);\n> \n> and the \"in database\" (which I have changed to capitalized \"In database\"\n> in the attached patch), looks like:\n> \n> fprintf(script, \"In database: %s\\n\", active_db->db_name);\n> \n> meaning it _isn't_ an output error message, but rather something that\n> appears in an error file. I don't think either of these are translated.\n> Is that wrong?\n\npg_fatal is a \"gettext trigger\" (see nls.mk), so that part of the\nmessage is definitely translated. 
And the fprintf format string should\nbe decorated with _() in order to make translatable too; otherwise the\nmessage is only half-translated when it appears in the pg_upgrade log,\nwhich is not nice. This should look like:\n\n \t\t\tif (!db_used)\n \t\t\t{\n\t\t\t\t/* translator: This is an error message indicator */\n \t\t\t\tfprintf(script, _(\"In database: %s\\n\"), active_db->db_name);\n \t\t\t\tdb_used = true;\n \t\t\t}\n \t\t\tfprintf(script, \" %s.%s\\n\",\n\n\nBTW, how is one supposed to \"manually upgrade databases that use\ncontrib/isb\"? This part is not very clear.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 Nov 2019 16:06:52 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Oct-07, Bruce Momjian wrote:\n>> and the \"in database\" (which I have changed to capitalized \"In database\"\n>> in the attached patch), looks like:\n>> fprintf(script, \"In database: %s\\n\", active_db->db_name);\n>> meaning it _isn't_ an output error message, but rather something that\n>> appears in an error file. I don't think either of these are translated.\n>> Is that wrong?\n\n> pg_fatal is a \"gettext trigger\" (see nls.mk), so that part of the\n> message is definitely translated.\n\nRight, but Bruce's point is that what goes into the separate output\nfile listing problem cases is not translated, and never has been.\nMaybe we should start doing so, but that would be a distinct issue.\nI'm not really sure that we should translate it, anyway --- could\nthere be anyone out there who is using tools to process these files?\n\n> BTW, how is one supposed to \"manually upgrade databases that use\n> contrib/isb\"? 
This part is not very clear.\n\nAgreed, the pg_fatal message is claiming that you can do something\nwithout really providing any concrete instructions for it. I'm not\nsure that that's helpful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Nov 2019 14:46:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "On Thu, Nov 14, 2019 at 04:06:52PM -0300, Alvaro Herrera wrote:\n> BTW, how is one supposed to \"manually upgrade databases that use\n> contrib/isb\"? This part is not very clear.\n\nI thought you would dump out databases that use isn, drop those\ndatabases, use pg_upgrade for the remaining databases, then load the\ndumped database. Attached is a patch that improves the wording.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +", "msg_date": "Thu, 14 Nov 2019 17:16:41 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "On Thu, Nov 14, 2019 at 02:46:29PM -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Oct-07, Bruce Momjian wrote:\n> >> and the \"in database\" (which I have changed to capitalized \"In database\"\n> >> in the attached patch), looks like:\n> >> fprintf(script, \"In database: %s\\n\", active_db->db_name);\n> >> meaning it _isn't_ an output error message, but rather something that\n> >> appears in an error file. 
I don't think either of these are translated.\n> >> Is that wrong?\n> \n> > pg_fatal is a \"gettext trigger\" (see nls.mk), so that part of the\n> > message is definitely translated.\n> \n> Right, but Bruce's point is that what goes into the separate output\n> file listing problem cases is not translated, and never has been.\n> Maybe we should start doing so, but that would be a distinct issue.\n> I'm not really sure that we should translate it, anyway --- could\n> there be anyone out there who is using tools to process these files?\n\nYes, we are lacking in all these output files so if we do one, we should\ndo them all.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 14 Nov 2019 17:17:15 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Thu, Nov 14, 2019 at 04:06:52PM -0300, Alvaro Herrera wrote:\n>> BTW, how is one supposed to \"manually upgrade databases that use\n>> contrib/isb\"? This part is not very clear.\n\n> I thought you would dump out databases that use isn, drop those\n> databases, use pg_upgrade for the remaining databases, then load the\n> dumped database. 
Attached is a patch that improves the wording.\n\nThat's better wording, but do we need similar for any of the other\nnot-supported checks?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Nov 2019 17:49:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "On Thu, Nov 14, 2019 at 05:49:12PM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Thu, Nov 14, 2019 at 04:06:52PM -0300, Alvaro Herrera wrote:\n> >> BTW, how is one supposed to \"manually upgrade databases that use\n> >> contrib/isb\"? This part is not very clear.\n> \n> > I thought you would dump out databases that use isn, drop those\n> > databases, use pg_upgrade for the remaining databases, then load the\n> > dumped database. Attached is a patch that improves the wording.\n> \n> That's better wording, but do we need similar for any of the other\n> not-supported checks?\n\nI don't think so. The other checks are checking for the _use_ of\ncertain things in user objects, and the user objects can be dropped,\nwhile this check checks for the existence of _functions_ from an\nextension that must be uninstalled. I assume the extension can be\nuninstalled in the database, but I assume something uses it and that\ntelling people to find all the users of the extension then dropping it\nis too complex to describe.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 14 Nov 2019 18:00:37 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "> On 14 Nov 2019, at 23:16, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Thu, Nov 14, 2019 at 04:06:52PM -0300, Alvaro Herrera wrote:\n>> BTW, how is one supposed to \"manually upgrade databases that use\n>> contrib/isn\"? This part is not very clear.\n> \n> I thought you would dump out databases that use isn, drop those\n> databases, use pg_upgrade for the remaining databases, then load the\n> dumped database. Attached is a patch that improves the wording.\n\nI agree with this patch, that's a much more informative message.\n\nThere is one tiny typo in the patch: s/laster/later/\n\n+\t\t\t\t \"cluster, drop them, restart the upgrade, and restore them laster. A\\n\"\n\ncheers ./daniel\n\n", "msg_date": "Fri, 15 Nov 2019 00:32:55 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "On Fri, Nov 15, 2019 at 12:32:55AM +0100, Daniel Gustafsson wrote:\n> > On 14 Nov 2019, at 23:16, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > On Thu, Nov 14, 2019 at 04:06:52PM -0300, Alvaro Herrera wrote:\n> > >> BTW, how is one supposed to \"manually upgrade databases that use\n> > >> contrib/isn\"? This part is not very clear.\n> > \n> > > I thought you would dump out databases that use isn, drop those\n> > > databases, use pg_upgrade for the remaining databases, then load the\n> > > dumped database. Attached is a patch that improves the wording.\n> \n> I agree with this patch, that's a much more informative message.\n> \n> There is one tiny typo in the patch: s/laster/later/\n> \n> +\t\t\t\t \"cluster, drop them, restart the upgrade, and restore them laster. A\\n\"\n\nI have applied the patch, with improved wording. 
I only applied this to\nPG 13 since I was worried old tools might be checking for the old error\ntext. Should this be backpatched more?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 27 Nov 2019 20:26:17 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" }, { "msg_contents": "> On 28 Nov 2019, at 02:26, Bruce Momjian <bruce@momjian.us> wrote:\n\n> I have applied the patch, with improved wording. I only applied this to\n> PG 13 since I was worried old tools might be checking for the old error\n> text. Should this be backpatched more?\n\nI don't think it's unreasonable to assume that there are lots of inhouse\ntooling for pg_upgrade orchestration which grep for specific messages.\nStopping at 13 seems perfectly reasonable for this change.\n\ncheers ./daniel\n\n\n", "msg_date": "Thu, 28 Nov 2019 09:58:06 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: format of pg_upgrade loadable_libraries warning" } ]
[ { "msg_contents": "Hi team,\n\nI am creating sample extension in postgres, For that \"During PG_INIT, i\nwant to get the list of database Id's in which my extension is installed\".\nIs there a way to get this?\n\nand also another one.\n\nHow to have trigger for create extension?\n\nThanks in advance.", "msg_date": "Thu, 3 Oct 2019 10:53:38 +0530", "msg_from": "Natarajan R <nataraj3098@gmail.com>", "msg_from_op": true, "msg_subject": "Regarding extension" }, { "msg_contents": "On Thu, Oct 3, 2019 at 02:24, Natarajan R <nataraj3098@gmail.com> wrote:\n>\n> I am creating sample extension in postgres, For that \"During PG_INIT, i want to get the list of database Id's in which my extension is installed\". Is there a way to get this?\n>\nI'm not sure what you mean by \"ld\". However, if you want to know an\nextension is installed in a specific database, you should be logged in\nit. That's because extension catalog is not global.\n\n> How to have trigger for create extension?\n>\nEvent triggers.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n", "msg_date": "Thu, 3 Oct 2019 11:01:00 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": false, "msg_subject": "Re: Regarding extension" }, { "msg_contents": "Thanks for your response Euler.\n\n1)\n\"id\" i meant by database id\n\nI make my question simple, \" during pg_init i want to get databaseid's in\nwhich my extension is installed... \"\n1. by using pg_database and pg_extension catalogs\n2. 
if there is any other way, kindly suggest me.\n\n\n2)\nI have one sql file which will be loaded during create extension, in that\nfile only i have code for event trigger for create extension on\nddl_command_end event....\nMy question is \"When giving create extension, sql file will be loaded at\nthat time only, if that is the case, this event trigger will be invoked or\nnot? \"\n\nThanks...\n\n\nOn Thu, 3 Oct 2019 at 19:31, Euler Taveira <euler@timbira.com.br> wrote:\n\n> On Thu, Oct 3, 2019 at 02:24, Natarajan R <nataraj3098@gmail.com>\n> wrote:\n> >\n> > I am creating sample extension in postgres, For that \"During PG_INIT, i\n> want to get the list of database Id's in which my extension is installed\".\n> Is there a way to get this?\n> >\n> I'm not sure what you mean by \"ld\". However, if you want to know an\n> extension is installed in a specific database, you should be logged in\n> it. That's because extension catalog is not global.\n>\n> > How to have trigger for create extension?\n> >\n> Event triggers.\n>\n>\n> --\n> Euler Taveira Timbira -\n> http://www.timbira.com.br/\n> PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n>\n", "msg_date": "Thu, 3 Oct 2019 19:51:04 +0530", "msg_from": "Natarajan R <nataraj3098@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Regarding extension" }, { "msg_contents": "On Thu, Oct 03, 2019 at 07:51:04PM +0530, Natarajan R wrote:\n>Thanks for your response Euler.\n>\n>1)\n>\"id\" i meant by database id\n>\n>I make my question simple, \" during pg_init i want to get databaseid's in\n>which my extension is installed... \"\n>1. by using pg_database and pg_extension catalogs\n>2. if there is any other way, kindly suggest me.\n>\n\nWell, there's also MyDatabaseId variable, which tells you the OID of the\ncurrent database. So you can use that, from the C code. In SQL, you can\nsimply run \"SELECT current_database()\" or something like that.\n\n>\n>2)\n>I have one sql file which will be loaded during create extension, in that\n>file only i have code for event trigger for create extension on\n>ddl_command_end event....\n>My question is \"When giving create extension, sql file will be loaded at\n>that time only, if that is the case, this event trigger will be invoked or\n>not? \"\n>\n\nI'm not sure I understand the question. 
Are you asking if the event\ntrigger will be invoked to notify you about creation of the extension\ncontaining it? I'm pretty sure that won't happen - it will be executed\nonly for future CREATE EXTENSION commands.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Oct 2019 17:24:37 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Regarding extension" }, { "msg_contents": "On Thu, 3 Oct 2019 at 20:54, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Thu, Oct 03, 2019 at 07:51:04PM +0530, Natarajan R wrote:\n> >Thanks for your response Euler.\n> >\n> >1)\n> >\"id\" i meant by database id\n> >\n> >I make my question simple, \" during pg_init i want to get databaseid's in\n> >which my extension is installed... \"\n> >1. by using pg_database and pg_extension catalogs\n> >2. if there any other way, kindly suggest me.\n> >\n>\n> Well, there's also MyDatabaseId variable, which tells you the OID of the\n> current database. So you can use that, from the C code. In SQL, you can\n> simply run \"SELECT current_database()\" or something like that.\n>\n> Me: Thanks Tomas, But this is for that particular database only, I want\nto get the *list of database Id's* on which my extension is installed\nduring *PG_INIT* itself...\n\n>\n> >2)\n> >I have one sql file which will be loaded during create extension, in that\n> >file only i have code for event trigger for create extension on\n> >ddl_command_end event....\n> >My question is \"When giving create extension, sql file will be loaded at\n> >that time only, if that is the case, this event trigger will be invoked or\n> >not? \"\n> >\n>\n> I'm not sure I understand the question. Are you asking if the event\n> trigger will be invoked to notify you about creation of the extension\n> containing it? 
I'm pretty sure that won't happen - it will be executed\n> only for future CREATE EXTENSION commands.\n>\n> Me: Thanks Tomas, Yaah, what you said above is the way it should perform,\nbut this trigger has been invoked in postgres 10.0 but not in postgres\n10.4.. So, i am asking any GUC or anything need to be enabled to invoke\nthis type of event triggers in 10.4 version tooo..", "msg_date": "Fri, 4 Oct 2019 10:22:18 +0530", "msg_from": "Natarajan R <nataraj3098@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Regarding extension" }, { "msg_contents": "Natarajan R <nataraj3098@gmail.com> writes:\n> Me: Thanks Tomas, But this is for that particular database only, I want\n> to get the *list of database Id's* on which my extension is installed\n> during *PG_INIT* itself...\n\nYou can't. 
In the first place, that information simply isn't obtainable,\n> because a session running within one database doesn't have access to the\n> catalogs of other databases in the cluster. (You could perhaps imagine\n> firing up connections to other DBs a la dblink/postgres_fdw, but that will\n> fail because you won't necessarily have permissions to connect to every\n> database.) In the second place, it's a pretty terrible design to be\n> attempting any sort of database access within _PG_init, because that\n> precludes loading that module outside a transaction; for example you\n> will not be able to preload it via shared_preload_libraries or allied\n> features.\n>\n\nAbsolutely agreed. Having done this myself, it's much, much harder than\nyou'd expect and not something I suggest anyone try unless it's absolutely\nnecessary.\n\nIt'd be an absolute dream if extensions could create their own shared\ncatalogs; that'd make life *so* much easier. But I seem to recall looking\nat that and nope-ing right out. That was a while ago so I should probably\nrevisit it.\n\nAnyhow: BDR and pglogical are extensions that do need to concern itself\nwith what's in various databases, so this is an issue I've worked with day\nto day for some time.\n\nBDR1 used a custom security label and the pg_shseclabel catalog to mark\ndatabases that were BDR-enabled. It launched a worker that connected to\ndatabase InvalidOid, so it could read only the global shared catalogs, then\nit scanned them to find out which DBs to launch individual workers for.\nThis interacted poorly with pg_dump/pg_restore and proved fragile, so I\ndon't recommend it.\n\npglogical instead launches a static bgworker with no DB connections. On\nstartup or when it gets a suitable message over its extension shmem\nsegment + a latch set, it launches new workers for each DB. 
Each worker\ninspects the DB to check for the presence of the pglogical extension and\nexits if it isn't found.\n\nAll in all, it's pretty clumsy, though it works very well.\n\nWe have to do our own process management and registration. Workarounds must\nbe put in place for processes failing to launch then a new process taking\ntheir shmem slot and various other things. pglogical lands up having to\nduplicate quite a bit of the bgw and postmaster management infrastructure\nbecause it's not extensible and it has some serious deficiencies in\nerror/crash handling.\n\n(We also have our own dependency management, lock management, shared cache\ninvalidations, syscache/catcache-like mechanism, and other areas where we'd\nrather extend Pg's infrastructure but can't. Being able to create our own\ndependency types, custom lock types/methods, custom syscaches we could get\ninvalidations for, etc would just be amazing. But each would likely be a\nmajor effort to get into core, if we could get it accepted at all given the\n\"in core users\" argument, and we'd have to keep the old method around\nanyway...)\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Tue, 8 Oct 2019 15:59:50 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Regarding extension" } ]
[ { "msg_contents": "With AIX xlc v16.1.0, various compilations get internal errors:\n\n/opt/IBM/xlc/16.1.0/bin/xlc_r -qmaxmem=33554432 -qnoansialias -g -O2 -qsrcmsg -I/scratch/nm/farmlike/../cmpx/src/interfaces/libpq -I../../../src/include -I/scratch/nm/farmlike/../cmpx/src/include -c -o pg_backup_utils.o /scratch/nm/farmlike/../cmpx/src/bin/pg_dump/pg_backup_utils.c\n/opt/IBM/xlc/16.1.0/bin/.orig/xlc_r: 1501-230 (S) Internal compiler error; please contact your Service Representative. For more information visit:\nhttp://www.ibm.com/support/docview.wss?uid=swg21110810\n\nI tracked this down to our use of -qsrcmsg, which I plan to remove from the\ndefault CFLAGS, including back branches. While this is a compiler bug, it's\nnot worth breaking plausible builds to default-enable the minor change\n-qsrcmsg provides:\n\n$ echo 'break me' >x.c\n$ /opt/IBM/xlc/16.1.0/bin/xlc_r x.c\n\"x.c\", line 1.1: 1506-275 (S) Unexpected text 'break' encountered.\n\"x.c\", line 1.7: 1506-166 (S) Definition of function me requires parentheses.\n\"x.c\", line 1.9: 1506-204 (S) Unexpected end of file.\n$ /opt/IBM/xlc/16.1.0/bin/xlc_r -qsrcmsg x.c\n 1 | break me\n a.....b.c\na - 1506-275 (S) Unexpected text 'break' encountered.\nb - 1506-166 (S) Definition of function me requires parentheses.\nc - 1506-204 (S) Unexpected end of file.", "msg_date": "Wed, 2 Oct 2019 23:41:05 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Removing -qsrcmsg (AIX)" } ]
[ { "msg_contents": "Hi,\n\nIt has been observed that when we define the generated columns in WHEN\ncondition for BEFORE EACH ROW trigger then the server throws an error from\nCreateTrigger().\n\ne.g:\ncreate table bar(a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2)\nSTORED);\n\nCREATE OR REPLACE FUNCTION test() RETURNS trigger AS $$\nBEGIN\nNEW.b = 10;\nraise notice 'Before row trigger';\nRETURN NEW;\nEND;\n$$ LANGUAGE plpgsql;\n\npostgres@78049=#CREATE TRIGGER bar_trigger\nBEFORE INSERT ON bar\nFOR EACH ROW\n*WHEN (NEW.b < 8)*\nEXECUTE FUNCTION test();\n2019-10-03 19:25:29.945 IST [78049] ERROR: *BEFORE trigger's WHEN\ncondition cannot reference NEW generated columns* at character 68\n2019-10-03 19:25:29.945 IST [78049] DETAIL: Column \"b\" is a generated\ncolumn.\n2019-10-03 19:25:29.945 IST [78049] STATEMENT: CREATE TRIGGER bar_trigger\nBEFORE INSERT ON bar\nFOR EACH ROW\nWHEN (NEW.b < 8)\nEXECUTE FUNCTION test();\nERROR: BEFORE trigger's WHEN condition cannot reference NEW generated\ncolumns\nLINE 4: WHEN (NEW.b < 8)\n ^\nDETAIL: Column \"b\" is a generated column.\n\n\nwhereas, for identity columns, server allows us to create trigger for same\nand trigger gets invoked as defined. Is this behavior expected? or we need\nto restrict the identity columns in such scenario because anyone can\noverride the identity column value in trigger.\n\ne.g:\n\ncreate table foo(no int, id int generated always as identity);\n\nCREATE OR REPLACE FUNCTION test() RETURNS trigger AS $$\nBEGIN\nNEW.id = 10;\nraise notice 'Before row trigger';\nRETURN NEW;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE TRIGGER foo_trigger\nBEFORE INSERT ON foo\nFOR EACH ROW\nWHEN (NEW.id < 8)\nEXECUTE FUNCTION test();\n\n\npostgres@78049=#insert into foo values(1);\n*NOTICE: Before row trigger*\nINSERT 0 1\npostgres@78049=#select * from foo;\n no | id\n----+----\n 1 | *10*\n(1 row)\n\n\nThoughts?\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.", "msg_date": "Thu, 3 Oct 2019 19:38:21 +0530", "msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>", "msg_from_op": true, "msg_subject": "identity column behavior in WHEN condition for BEFORE EACH ROW\n trigger" }, { "msg_contents": "> whereas, for identity columns, server allows us to create trigger for same\n> and trigger gets invoked as defined. Is this behavior expected? or we need\n> to restrict the identity columns in such scenario because anyone can\n> override the identity column value in trigger.\n>\n\nAlso, I think it is breaking the OVERRIDING SYSTEM VALUE clause in INSERT\nstatement. i.e: without this clause, can insert the modified value from\ntrigger in identity column. I don't find any document reference for this\nbehavior.\n\nThoughts?\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.", "msg_date": "Mon, 7 Oct 2019 08:40:48 +0530", "msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: identity column behavior in WHEN condition for BEFORE EACH ROW\n trigger" }, { "msg_contents": "On 2019-10-03 16:08, Suraj Kharage wrote:\n> It has been observed that when we define the generated columns in WHEN\n> condition for BEFORE EACH ROW trigger then the server throws an error\n> from CreateTrigger().\n\n> whereas, for identity columns, server allows us to create trigger for\n> same and trigger gets invoked as defined. Is this behavior expected? or\n> we need to restrict the identity columns in such scenario because anyone\n> can override the identity column value in trigger.\n\nThis is per SQL standard: Identity columns are assigned before triggers,\ngenerated columns are computed after BEFORE triggers.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 7 Oct 2019 20:14:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: identity column behavior in WHEN condition for BEFORE EACH ROW\n trigger" } ]
[ { "msg_contents": "\nMy new msys2 animal fairywren has had 3 recent failures when checking\npg_upgrade. The failures have been while running the regression tests,\nspecifically the interval test, and they all look like this:\n\n\n2019-10-03 05:36:00.373 UTC [24272:43] LOG: server process (PID 23756) was terminated by exception 0xC0000028\n2019-10-03 05:36:00.373 UTC [24272:44] DETAIL: Failed process was running: INSERT INTO INTERVAL_TBL (f1) VALUES ('badly formatted interval');\n\n\nThat error is \"bad stack\"\n\nThe failures have been on REL_12_STABLE (twice) and master (once).\nHowever, they are not consistent (REL_12_STABLE is currently green).\n\n\nThe interval test itself hasn't changed for more than 2 years, and I\nhaven't found any obvious recent change that might cause the problem. I\nguess it could be a compiler bug ... this is gcc 9.2.0, which is the\ncurrent release.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 3 Oct 2019 10:21:13 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "fairywren failures" }, { "msg_contents": "Hi,\n\nOn 2019-10-03 10:21:13 -0400, Andrew Dunstan wrote:\n> My new msys2 animal fairywren has had 3 recent failures when checking\n> pg_upgrade. 
The failures have been while running the regression tests,\n> specifically the interval test, and they all look like this:\n> \n> \n> 2019-10-03 05:36:00.373 UTC [24272:43] LOG: server process (PID 23756) was terminated by exception 0xC0000028\n> 2019-10-03 05:36:00.373 UTC [24272:44] DETAIL: Failed process was running: INSERT INTO INTERVAL_TBL (f1) VALUES ('badly formatted interval');\n> \n> \n> That error is \"bad stack\"\n\n> The failures have been on REL_12_STABLE (twice) and master (once).\n> However, they are not consistent (REL_12_STABLE is currently green).\n> \n> \n> The interval test itself hasn't changed for more than 2 years, and I\n> haven't found any obvious recent change that might cause the problem. I\n> guess it could be a compiler bug ... this is gcc 9.2.0, which is the\n> current release.\n\nThis is around where an error is thrown:\n -- badly formatted interval\n INSERT INTO INTERVAL_TBL (f1) VALUES ('badly formatted interval');\n-ERROR: invalid input syntax for type interval: \"badly formatted interval\"\n-LINE 1: INSERT INTO INTERVAL_TBL (f1) VALUES ('badly formatted inter...\n- ^\n\nand the error is stack related. So I suspect that setjmp/longjmp might\nbe to blame here, and somehow don't save/restore the stack into a proper\nstate. I don't know enough about mingw/msys/windows to know whether that\nuses a self-written setjmp or relies on the MS implementation.\n\nIf you could gather a backtrace it might help us. 
It's possible that the\nstack is \"just\" misaligned or something, we had problems with that\nbefore (IIRC valgrind didn't always align stacks correctly for processes\nthat forked from within a signal handler, which then crashed when using\ninstructions with alignment requirements, but only sometimes, because\nthe stack could be aligned).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 3 Oct 2019 08:18:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: fairywren failures" }, { "msg_contents": "Hi,\n\nOn 2019-10-03 08:18:42 -0700, Andres Freund wrote:\n> On 2019-10-03 10:21:13 -0400, Andrew Dunstan wrote:\n> > My new msys2 animal fairywren has had 3 recent failures when checking\n> > pg_upgrade. The failures have been while running the regression tests,\n> > specifically the interval test, and they all look like this:\n> > \n> > \n> > 2019-10-03 05:36:00.373 UTC [24272:43] LOG: server process (PID 23756) was terminated by exception 0xC0000028\n> > 2019-10-03 05:36:00.373 UTC [24272:44] DETAIL: Failed process was running: INSERT INTO INTERVAL_TBL (f1) VALUES ('badly formatted interval');\n> > \n> > \n> > That error is \"bad stack\"\n\n> > The failures have been on REL_12_STABLE (twice) and master (once).\n> > However, they are not consistent (REL_12_STABLE is currently green).\n> > \n> > \n> > The interval test itself hasn't changed for more than 2 years, and I\n> > haven't found any obvious recent change that might cause the problem. I\n> > guess it could be a compiler bug ... this is gcc 9.2.0, which is the\n> > current release.\n> \n> This is around where an error is thrown:\n> -- badly formatted interval\n> INSERT INTO INTERVAL_TBL (f1) VALUES ('badly formatted interval');\n> -ERROR: invalid input syntax for type interval: \"badly formatted interval\"\n> -LINE 1: INSERT INTO INTERVAL_TBL (f1) VALUES ('badly formatted inter...\n> - ^\n> \n> and the error is stack related. 
So I suspect that setjmp/longjmp might\n> be to blame here, and somehow don't save/restore the stack into a proper\n> state. I don't know enough about mingw/msys/windows to know whether that\n> uses a self-written setjmp or relies on the MS implementation.\n> \n> If you could gather a backtrace it might help us. It's possible that the\n> stack is \"just\" misaligned or something, we had problems with that\n> before (IIRC valgrind didn't always align stacks correctly for processes\n> that forked from within a signal handler, which then crashed when using\n> instructions with alignment requirements, but only sometimes, because\n> the stack could be aligned).\n\nIt seems we're not the only ones hitting this:\nhttps://rt.perl.org/Public/Bug/Display.html?id=133603\n\nDoesn't look like they've really narrowed it down that much yet.\n\n- Andres\n\n\n", "msg_date": "Thu, 3 Oct 2019 08:23:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: fairywren failures" }, { "msg_contents": "Hi,\n\nOn 2019-10-03 08:23:49 -0700, Andres Freund wrote:\n> On 2019-10-03 08:18:42 -0700, Andres Freund wrote:\n> > This is around where an error is thrown:\n> > -- badly formatted interval\n> > INSERT INTO INTERVAL_TBL (f1) VALUES ('badly formatted interval');\n> > -ERROR: invalid input syntax for type interval: \"badly formatted interval\"\n> > -LINE 1: INSERT INTO INTERVAL_TBL (f1) VALUES ('badly formatted inter...\n> > - ^\n> >\n> > and the error is stack related.
It's possible that the\n> > stack is \"just\" misaligned or something, we had problems with that\n> > before (IIRC valgrind didn't always align stacks correctly for processes\n> > that forked from within a signal handler, which then crashed when using\n> > instructions with alignment requirements, but only sometimes, because\n> > the stack could be aligned).\n>\n> It seems we're not the only ones hitting this:\n> https://rt.perl.org/Public/Bug/Display.html?id=133603\n>\n> Doesn't look like they've really narrowed it down that much yet.\n\nA few notes:\n\n* As an experiment, it could be worthwhile to try to redefine\n  sigsetjmp/longjmp/sigjmp_buf with what\n  https://gcc.gnu.org/onlinedocs/gcc/Nonlocal-Gotos.html\n  provides, it's apparently a separate implementation from MS crt one.\n\n* Arguably\n  \"Do not use longjmp to transfer control from a callback routine\n  invoked directly or indirectly by Windows code.\"\n  and\n  \"Do not use longjmp to transfer control out of an interrupt-handling\n  routine unless the interrupt is caused by a floating-point\n  exception. In this case, a program may return from an interrupt\n  handler via longjmp if it first reinitializes the floating-point math\n  package by calling _fpreset.\"\n\n  from https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/longjmp?view=vs-2019\n\n  might be violated by our signal emulation on windows. But I've\n  not looked into that in detail.\n\n* Any chance you could get the pre-processed source for postgres.c or\n  such?
I'm kinda wondering if the definition of setjmp() that we get\n  includes the returns_twice attribute that gcc wants to see, and\n  whether we're picking up the mingw version of longjmp, or the windows\n  one.\n\n  https://sourceforge.net/p/mingw-w64/mingw-w64/ci/844cb490ab2cc32ac3df5914700564b2e40739d8/tree/mingw-w64-headers/crt/setjmp.h#l31\n\n* It's certainly curious that the failures so far only have happened as\n  part of pg_upgradeCheck, rather than the plain regression tests.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 3 Oct 2019 09:17:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: fairywren failures" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> * It's certainly curious that the failures so far only have happened as\n> part of pg_upgradeCheck, rather than the plain regression tests.\n\nIsn't it though. We spent a long time wondering why we saw parallel\nplan instability mostly in pg_upgradeCheck, too [1]. We eventually\ndecided that the cause of that instability was chance timing collisions\nwith bgwriter/checkpointer, but nobody ever really explained why\npg_upgradeCheck should be more prone to hit those windows than the plain\ntests are. I feel like there's something still to be understood there.\n\nWhether this is related, who's to say. But given your thought about\nstack alignment, I'm half thinking that the crash is seen when we get a\nsignal (e.g.
SIGUSR1 from sinval processing) at the wrong time, allowing\nthe stack to become unaligned, and that the still-unexplained timing\ndifference in pg_upgradeCheck accounts for that test being more prone to\nshow it.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20190605050037.GA33985@rfd.leadboat.com\n\n\n", "msg_date": "Thu, 03 Oct 2019 16:13:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: fairywren failures" }, { "msg_contents": "\nOn 10/3/19 4:13 PM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> * It's certainly curious that the failures so far only have happened as\n>> part of pg_upgradeCheck, rather than the plain regression tests.\n> Isn't it though. We spent a long time wondering why we saw parallel\n> plan instability mostly in pg_upgradeCheck, too [1]. We eventually\n> decided that the cause of that instability was chance timing collisions\n> with bgwriter/checkpointer, but nobody ever really explained why\n> pg_upgradeCheck should be more prone to hit those windows than the plain\n> tests are. I feel like there's something still to be understood there.\n>\n> Whether this is related, who's to say. But given your thought about\n> stack alignment, I'm half thinking that the crash is seen when we get a\n> signal (e.g. SIGUSR1 from sinval processing) at the wrong time, allowing\n> the stack to become unaligned, and that the still-unexplained timing\n> difference in pg_upgradeCheck accounts for that test being more prone to\n> show it.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/20190605050037.GA33985@rfd.leadboat.com\n\n\n\nYes, that's very puzzling. But what do we actually do differently in the\npg_upgrade checks that might account for it? Nothing that is at all\nobvious to me.\n\n\nAnother data point: the new Visual Studio 2019 instance drongo running\non the same machine is not exhibiting these problems.
Yes, it's not\nrunning test.sh, but vcregress.pl does pretty much the same thing. So\nthat does seem to point to the toolset. I'll see if I can get the same\ntoolset jacana is using installed and try that.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 4 Oct 2019 09:14:57 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: fairywren failures" }, { "msg_contents": "On 2019-10-03 16:21, Andrew Dunstan wrote:\n> My new msys2 animal fairywren\n\nCould you please check how this animal is labeled? AFAICT, this is not\nan msys2 build but a mingw build (x86_64-w64-mingw32).\n\n> has had 3 recent failures when checking\n> pg_upgrade. The failures have been while running the regression tests,\n> specifically the interval test, and they all look like this:\n\nI've also seen this randomly, but only under 64-bit mingw, never 32-bit\nmingw.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 12 Oct 2019 21:56:03 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: fairywren failures" }, { "msg_contents": "On Sat, Oct 12, 2019 at 3:56 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-10-03 16:21, Andrew Dunstan wrote:\n> > My new msys2 animal fairywren\n>\n> Could you please check how this animal is labeled? AFAICT, this is not\n> an msys2 build but a mingw build (x86_64-w64-mingw32).\n\n\nIt is indeed an msys2 system. However, when we set MSYSTEM=MINGW64 as\nwe do in fairywren's config environment so that the compiler is\nproperly detected by configure (using Msys2's /etc/config.site)\n'uname -a' reports MINGW64...
instead of MSYS...\n\nThis is a bit confusing.\n\nThe compiler currently being used on the animal is the gcc 7.3.0 from\nthe Mingw64 project, the same one that's being used on jacana (which\nruns Msys1). Notwithstanding the \"mingw32\" in the compiler name, these\nare 64 bit builds. I think the \"32\" there is somewhat vestigial.\n\n\n>\n> > has had 3 recent failures when checking\n> > pg_upgrade. The failures have been while running the regression tests,\n> > specifically the interval test, and they all look like this:\n>\n> I've also seen this randomly, but only under 64-bit mingw, never 32-bit\n> mingw.\n>\n\n\nSince I downgraded the compiler from gcc 9.0 about a week ago these\nerrors seem to have stopped.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 16 Oct 2019 07:34:10 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: fairywren failures" }, { "msg_contents": "On 2019-10-16 13:34, Andrew Dunstan wrote:\n>> Could you please check how this animal is labeled? AFAICT, this is not\n>> an msys2 build but a mingw build (x86_64-w64-mingw32).\n> \n> It is indeed an msys2 system. However, when we set MSYSTEM=MINGW64 as\n> we do in fairywren's config environment so that the compiler is\n> properly detected by configure (using Msys2's /etc/config.site)\n> 'uname -a' reports MINGW64... instead of MSYS...\n\nWhen you install MSYS2 from msys2.org, you get three possible build\ntargets, depending on what you set MSYSTEM to:\n\nMSYSTEM=MINGW32\nMSYSTEM=MINGW64\nMSYSTEM=MSYS\n\nWhen a buildfarm member identifies itself as \"msys ... 2\", then I would\nexpect the third variant, but that's not what it's doing.
A\nMSYSTEM=MSYS build is similar to a Cygwin build (since MSYS2 is a fork\nof Cygwin), which is also a valid thing to do, but it's obviously quite\ndifferent from a mingw build.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 25 Oct 2019 21:09:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: fairywren failures" }, { "msg_contents": "\nOn 10/25/19 3:09 PM, Peter Eisentraut wrote:\n> On 2019-10-16 13:34, Andrew Dunstan wrote:\n>>> Could you please check how this animal is labeled? AFAICT, this is not\n>>> an msys2 build but a mingw build (x86_64-w64-mingw32).\n>> It is indeed an msys2 system. However, when we set MSYSTEM=MINGW64 as\n>> we do in fairywren's config environment so that the compiler is\n>> properly detected by configure (using Msys2's /etc/config.site)\n>> 'uname -a' reports MINGW64... instead of MSYS...\n> When you install MSYS2 from msys2.org, you get three possible build\n> targets, depending on what you set MSYSTEM to:\n>\n> MSYSTEM=MINGW32\n> MSYSTEM=MINGW64\n> MSYSTEM=MSYS\n>\n> When a buildfarm member identifies itself as \"msys ... 2\", then I would\n> expect the third variant, but that's not what it's doing. A\n> MSYSTEM=MSYS build is similar to a Cygwin build (since MSYS2 is a fork\n> of Cygwin), which is also a valid thing to do, but it's obviously quite\n> different from a mingw build.\n\n\n\n\nIf it helps you I can change the compiler name in the animal metainfo to\nmingw64-gcc.
Msys2 is the build environment, but not the target, which\nis native Windows.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 26 Oct 2019 12:07:41 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: fairywren failures" } ]
[ { "msg_contents": "Hello,\n\nI stumbled on a windows-only bug in pg_basebackup which I've reported as \n#16032 \n(https://www.postgresql.org/message-id/16032-4ba56823a2b2805f%40postgresql.org).\n\nI'm pretty sure I've fixed it in the attached patch.\n\nMany Thanks,\nRob", "msg_date": "Thu, 03 Oct 2019 18:27:34 +0100", "msg_from": "Rob <postgresql@mintsoft.net>", "msg_from_op": true, "msg_subject": "Fix for Bug #16032" }, { "msg_contents": "Hi Rob,\n\nOn Thu, Oct 03, 2019 at 06:27:34PM +0100, Rob wrote:\n> I stumbled on a windows-only bug in pg_basebackup which I've reported as\n> #16032 (https://www.postgresql.org/message-id/16032-4ba56823a2b2805f%40postgresql.org).\n> \n> I'm pretty sure I've fixed it in the attached patch.\n\nCould it be possible to keep the discussion on the original thread? I\nalready replied to it, and there are no problems with discussing\npatches dealing with bugs directly on pgsql-bugs. Thanks for caring.\n\nThanks,\n--\nMichael", "msg_date": "Mon, 7 Oct 2019 16:01:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix for Bug #16032" } ]
[ { "msg_contents": "explain(SETTINGS) was implemented to show relevant settings for which an odd\nvalue could affect a query but could be forgotten during troubleshooting.\n\nThis is a \"concept\" patch to show the version, which is frequently requested on\n-performance list and other support requests. If someone sends\nexplain(settings), they don't need to also (remember to) send the version..\n\npostgres=# explain(settings)SELECT;\n Result (cost=0.00..0.01 rows=1 width=0)\n Settings: server_version_num = '130000', work_mem = '128MB'\n\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex 85ca2b3..2edc83c 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -3143,7 +3143,7 @@ static struct config_int ConfigureNamesInt[] =\n \t\t{\"server_version_num\", PGC_INTERNAL, PRESET_OPTIONS,\n \t\t\tgettext_noop(\"Shows the server version as an integer.\"),\n \t\t\tNULL,\n-\t\t\tGUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n+\t\t\tGUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE | GUC_EXPLAIN\n \t\t},\n \t\t&server_version_num,\n \t\tPG_VERSION_NUM, PG_VERSION_NUM, PG_VERSION_NUM,\n@@ -8955,7 +8955,7 @@ get_explain_guc_options(int *num)\n \t\t}\n \n \t\t/* skip GUC variables that match the built-in default */\n-\t\tif (!modified)\n+\t\tif (!modified && strcmp(conf->name, \"server_version_num\"))\n \t\t\tcontinue;\n \n \t\t/* assign to the values array */\n\n\n", "msg_date": "Thu, 3 Oct 2019 13:44:09 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "consider including server_version in explain(settings)" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> This is a \"concept\" patch to show the version, which is frequently requested on\n> -performance list and other support requests. 
If someone sends\n> explain(settings), they don't need to also (remember to) send the version..\n\nI'm not really on board with the proposal at all here; I think it'll\nbe useless clutter most of the time. I do not agree with the position\nthat the only use-case for explain(settings) is performance trouble\nreports. Moreover, if we start including fixed settings then where\ndo we stop? People might also want \"pg_config\" output for example,\nand that's surely not reasonable to include in EXPLAIN.\n\nIndependently of that, however:\n \n> \t\t/* skip GUC variables that match the built-in default */\n> -\t\tif (!modified)\n> +\t\tif (!modified && strcmp(conf->name, \"server_version_num\"))\n> \t\t\tcontinue;\n\nThis is both horribly contorted logic (it could at least do with a\ncomment) and against project coding conventions (do not use the result\nof strcmp() as if it were a boolean).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Oct 2019 15:46:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: consider including server_version in explain(settings)" } ]
[ { "msg_contents": "Why postgres not providing freeing shared memory?", "msg_date": "Fri, 4 Oct 2019 15:20:49 +0530", "msg_from": "Natarajan R <nataraj3098@gmail.com>", "msg_from_op": true, "msg_subject": "Shared memory" }, { "msg_contents": "On Fri, Oct 4, 2019 at 5:51 AM Natarajan R <nataraj3098@gmail.com> wrote:\n> Why postgres not providing freeing shared memory?\n\nBecause it's intended to be used mostly for data structures that live\nfor the entire server lifetime.\n\nThere are some cases, such as various hash tables, where the number of\nentries can grow and shrink over time. It might be useful to return\nmemory that is freed up when the hash table shrinks to the common\npool, but it would be complex, because then we'd have to keep track of\nmultiple chunks of freed memory and consolidate adjacent chunks and so\nforth. I don't see that we'd be likely to get much benefit from such a\nsystem, since in a lot of cases memory fragmentation would prevent us\nfrom getting any real benefit.\n\nIf you need a shared data structure that is temporary, you might want\nto check out DSM and DSA.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Oct 2019 09:09:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Shared memory" }, { "msg_contents": "On Fri, 4 Oct 2019 at 17:51, Natarajan R <nataraj3098@gmail.com> wrote:\n\n> Why postgres not providing freeing shared memory?\n>\n\nIt does.\n\nYou are presumably looking at static shared memory segments which are\nassigned at server start. Most extensions need to use one of these, of\nfixed size, for housekeeping and things like storing LWLocks.\n\nBut extensions can use DSM or DSA for most of their memory use, giving them\nflexible storage.\n\nPostgreSQL doesn't have a fully dynamic shared heap with\nmalloc()/free()-like semantics.
AFAIK we don't support the use of AllocSet\nMemoryContexts on DSM-backed shared memory. That might be nice, but it's\ncomplicated.\n\nMost extensions use coding patterns like variable length arrays instead.\n\nI strongly advise you to go read the pglogical source code. It demonstrates\na lot of the things you will need to understand to write complex extensions\nthat touch multiple databases.\n\nPostgreSQL itself could make this a LOT easier though. pglogical's\nbasic worker and connection management would be considerably simpler if:\n\n- Extensions could add shared catalog relations\n- we could access, copy, and struct-embed BackgroundWorkerHandle\n- registered bgworkers weren't relaunched after Pg crash-restart (as\ndiscussed on prior threads)\n- the bgworker registration struct had more than uint32 worth of room for\narguments\n- extensions could request that Pg maintain extra shmem space for each\nbgworker for the extension's use (so we didn't have to maintain our own\nparallel array of shmem entries for each worker and coordinate that with\nthe bgworker's own areas)\n\nI guess I should probably pony up with a patch for some of this....\n\nWhile I'm at it, there are so many other extension points I wish for, with\nthe high points being:\n\n- Extension-defined syscaches\n- Extension-defined locktypes/methods/etc\n- More extensible pg_depend\n- Hooks in table rewrite\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Thu, 10 Oct 2019 08:04:55 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Shared memory" } ]
[ { "msg_contents": "typedef struct HashTableKey\n{\n  Oid dbId; // 4 bytes\n  int64 productid; // 8 bytes\n}HashTableKey; (total size - 12 bytes)\n\ntypedef struct HashTableEntry\n{\n  HashTableKey key;\n  ProductInfo *pdt;\n}HashTableEntry;\n\nHASHCTL hashInfo;\nhashInfo.keysize = sizeof(HashTableKey);\nhashInfo.entrysize = sizeof(HashTableEntry);\nSampleHashTable = ShmemInitHash(\"productid vs product struct HashTable\",\nsize, size, &hashInfo, HASH_ELEM | HASH_SHARED_MEM | HASH_BLOBS);\n\nwhile printing keysize: elog(LOG,\"Keysize = %d\",sizeof(HashTableKey));\n\nI am getting Keysize = 16. How? What do I need to do in order to have\nkeysize = 12?", "msg_date": "Fri, 4 Oct 2019 17:06:47 +0530", "msg_from": "Natarajan R <nataraj3098@gmail.com>", "msg_from_op": true, "msg_subject": "HashTable KeySize" }, { "msg_contents": "On Fri, Oct 04, 2019 at 05:06:47PM +0530, Natarajan R wrote:\n>typedef struct HashTableKey\n>{\n>  Oid dbId; // 4 bytes\n>  int64 productid; // 8 bytes\n>}HashTableKey; (total size - 12 bytes)\n>\n>typedef struct HashTableEntry\n>{\n>  HashTableKey key;\n>  ProductInfo *pdt;\n>}HashTableEntry;\n>\n>HASHCTL hashInfo;\n>hashInfo.keysize = sizeof(HashTableKey);\n>hashInfo.entrysize = sizeof(HashTableEntry);\n>SampleHashTable = ShmemInitHash(\"productid vs product struct HashTable\",\n>size, size, &hashInfo, HASH_ELEM | HASH_SHARED_MEM | HASH_BLOBS);\n>\n>while printing keysize: elog(LOG,\"Keysize = %d\",sizeof(HashTableKey));\n>\n>I am getting Keysize = 16. How? What do I need to do in order to have\n>keysize = 12?\n\nThat's likely due to alignment. The second field is a 64-bit value and will\nbe aligned at an 8-byte boundary, so in memory the struct will look like\nthis:\n\n   dbId -- 4 bytes\n   padding -- 4 bytes\n   productId -- 8 bytes\n\nSee\n\n   https://en.wikipedia.org/wiki/Data_structure_alignment\n\nand there's also a tool to show the memory layout:\n\n   https://linux.die.net/man/1/pahole\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Oct 2019 14:43:37 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: HashTable KeySize" } ]
[ { "msg_contents": "As of d9dd406fe281d22d5238d3c26a7182543c711e74, we require MSVC 2013,\nwhich means _MSC_VER >= 1800. This means that conditionals about\nolder versions of _MSC_VER can be removed or simplified.\n\nPrevious code was also in some cases handling MinGW, where _MSC_VER is\nnot defined at all, incorrectly, such as in pg_ctl.c and win32_port.h,\nleading to some compiler warnings. This should now be handled better.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 4 Oct 2019 16:35:59 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Remove some code for old unsupported versions of MSVC" }, { "msg_contents": "On Fri, Oct 04, 2019 at 04:35:59PM +0200, Peter Eisentraut wrote:\n> As of d9dd406fe281d22d5238d3c26a7182543c711e74, we require MSVC 2013,\n> which means _MSC_VER >= 1800. This means that conditionals about\n> older versions of _MSC_VER can be removed or simplified.\n> \n> Previous code was also in some cases handling MinGW, where _MSC_VER is\n> not defined at all, incorrectly, such as in pg_ctl.c and win32_port.h,\n> leading to some compiler warnings. This should now be handled better.\n\nThanks Peter for cleaning up this code. I have looked at it, did some\ntesting and it looks good to me. No spots are visibly missing.\n--\nMichael", "msg_date": "Mon, 7 Oct 2019 15:52:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove some code for old unsupported versions of MSVC" }, { "msg_contents": "On 2019-10-07 08:52, Michael Paquier wrote:\n> On Fri, Oct 04, 2019 at 04:35:59PM +0200, Peter Eisentraut wrote:\n>> As of d9dd406fe281d22d5238d3c26a7182543c711e74, we require MSVC 2013,\n>> which means _MSC_VER >= 1800. 
This means that conditionals about\n>> older versions of _MSC_VER can be removed or simplified.\n>>\n>> Previous code was also in some cases handling MinGW, where _MSC_VER is\n>> not defined at all, incorrectly, such as in pg_ctl.c and win32_port.h,\n>> leading to some compiler warnings. This should now be handled better.\n> \n> Thanks Peter for cleaning up this code. I have looked at it, did some\n> testing and it looks good to me. No spots are visibly missing.\n\npushed\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 8 Oct 2019 10:55:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove some code for old unsupported versions of MSVC" } ]
[ { "msg_contents": "skink just found what seems like a serious problem with commit\nc477f3e44:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2019-10-04%2010%3A15%3A05\n\nperforming post-bootstrap initialization ... TRAP: FailedAssertion(\"newlist == list\", File: \"/home/andres/build/buildfarm/HEAD/pgsql.build/../pgsql/src/backend/nodes/list.c\", Line: 200)\nAborted\n\nThat assert is here:\n\n /*\n * Currently, asking aset.c to reduce the allocated size of the List\n * header is pointless in terms of reclaiming space, unless the list\n * is very long. However, it seems worth doing anyway to cause the\n * no-longer-needed initial_elements[] space to be cleared in\n * debugging builds.\n */\n newlist = (List *) repalloc(list, offsetof(List, initial_elements));\n\n /* That better not have failed, nor moved the list header */\n Assert(newlist == list);\n\nIf we invoke realloc() then it's very much within its rights to return\na block that's not where the original block was. I'm a bit surprised\nthat initdb runs any SQL commands that create Lists long enough to\nreach the threshold where c477f3e44 would make a difference (~1000\ninitial elements would be needed), but there you have it.\n\nThis is a bit sticky to resolve. I'm inclined to think that the fault\nis on list.c, as it really shouldn't assume that repalloc doesn't move\nthe block. (In fact, the quoted comment is self-contradictory, since\nit evidently thinks that space *could* be given back when a large\nenough block is involved; which is exactly what didn't happen before\nc477f3e44.) But I do not think we can give up the invariant that List\nheaders don't move around. Hence we basically can't do repalloc here\nafter all. 
Perhaps, to limit the cost of that, we could eat the cost\nof separately palloc'ing the ListCell array when the initial list length\nexceeds some threshold.\n\nI'm also wondering a bit whether there's anyplace *else* that is\ncheating by assuming that a downsizing repalloc doesn't move the block.\nWe could investigate that by testing with a modified form of\nAllocSetRealloc that always moves the block, but of course that won't\nfind bugs in untested code paths. So another possibility is to revert\nc477f3e44 and then document that AllocSetRealloc does not move a block\nwhen reducing its size. That does not seem very attractive though.\n\nAny opinions about this?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Oct 2019 13:00:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Problem with repalloc downsizing patch" }, { "msg_contents": "I wrote:\n> I'm also wondering a bit whether there's anyplace *else* that is\n> cheating by assuming that a downsizing repalloc doesn't move the block.\n> We could investigate that by testing with a modified form of\n> AllocSetRealloc that always moves the block, but of course that won't\n> find bugs in untested code paths. So another possibility is to revert\n> c477f3e44 and then document that AllocSetRealloc does not move a block\n> when reducing its size. That does not seem very attractive though.\n\nI did that testing, and found that check-world does not expose any other\ntrouble spots beyond the one in enlarge_list(). So I think this option\nshould be rejected.\n\nThat leaves us with needing to decide whether we should or shouldn't\nforcibly split off the initial ListCell array if it's large. I'm\nkind of leaning to not doing so, because doing that would add an\nextra test (at least) to each list creation, and the frequency of\nbeing able to reclaim space seems like it'd be pretty small.
You\nneed a large initial list, *plus* a request to make it even larger.\n\n(I haven't been able to reproduce skink's failure though, so maybe\nthere's something I'm missing.)\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Oct 2019 15:04:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Problem with repalloc downsizing patch" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16040\nLogged by: Jeremy Smith\nEmail address: jeremy@musicsmith.net\nPostgreSQL version: 12.0\nOperating system: Official Docker Image, CentOS7\nDescription: \n\nI have also tried this with 11.3, 11.4, and 11.5, so this is not new in\n12.0. Here's a really basic way to reproduce this:\r\n\r\npostgres=# BEGIN;\r\nBEGIN\r\npostgres=#\r\npostgres=# -- Create a test table and some data\r\npostgres=# CREATE TABLE test (a int);\r\nCREATE TABLE\r\npostgres=# INSERT INTO test SELECT generate_series(1,10);\r\nINSERT 0 10\r\npostgres=# alter table test set (parallel_workers = 4);\r\nALTER TABLE\r\npostgres=# -- Use auto_explain to show plan of query in the function\r\npostgres=# LOAD 'auto_explain'; \r\nLOAD\r\npostgres=# SET auto_explain.log_analyze = on;\r\nSET\r\npostgres=# SET client_min_messages = log;\r\nSET\r\npostgres=# SET auto_explain.log_nested_statements = on;\r\nSET\r\npostgres=# SET auto_explain.log_min_duration = 0;\r\nSET\r\npostgres=# -- Set parallel costs artificially low, for demonstration\npurposes\r\npostgres=# set parallel_tuple_cost = 0;\r\nSET\r\npostgres=# set parallel_setup_cost = 0;\r\nSET\r\npostgres=# set max_parallel_workers_per_gather = 4;\r\nSET\r\npostgres=# -- Normal query will use 4 workers\r\npostgres=# SELECT test.a, count(*) FROM test GROUP BY test.a;\r\nLOG: duration: 19.280 ms plan:\r\nQuery Text: SELECT test.a, count(*) FROM test GROUP BY test.a;\r\nFinalize HashAggregate (cost=25.56..27.56 rows=200 width=12) (actual\ntime=16.649..16.795 rows=10 loops=1)\r\n Group Key: a\r\n -> Gather (cost=19.56..21.56 rows=800 width=12) (actual\ntime=2.853..18.744 rows=10 loops=1)\r\n Workers Planned: 4\r\n Workers Launched: 4\r\n -> Partial HashAggregate (cost=19.56..21.56 rows=200 width=12)\n(actual time=0.493..0.519 rows=2 loops=5)\r\n Group Key: a\r\n -> Parallel Seq Scan on test (cost=0.00..16.38 rows=638\nwidth=4) (actual time=0.009..0.083 rows=2 
loops=5)\r\n a | count\r\n----+-------\r\n 9 | 1\r\n 3 | 1\r\n 5 | 1\r\n 4 | 1\r\n 10 | 1\r\n 6 | 1\r\n 2 | 1\r\n 7 | 1\r\n 1 | 1\r\n 8 | 1\r\n(10 rows)\r\n\r\npostgres=#\r\npostgres=# CREATE OR REPLACE FUNCTION test_count()\r\npostgres-# RETURNS TABLE (a int, n bigint) AS\r\npostgres-# $$\r\npostgres$# BEGIN\r\npostgres$# RETURN QUERY SELECT test.a, count(*) FROM test GROUP BY\ntest.a;\r\npostgres$# END;\r\npostgres$# $$\r\npostgres-# LANGUAGE PLPGSQL;\r\nCREATE FUNCTION\r\npostgres=#\r\npostgres=# -- This query will not use parallel workers\r\npostgres=# SELECT * FROM test_count();\r\nLOG: duration: 0.437 ms plan:\r\nQuery Text: SELECT test.a, count(*) FROM test GROUP BY test.a\r\nHashAggregate (cost=48.25..50.25 rows=200 width=12) (actual\ntime=0.193..0.276 rows=10 loops=1)\r\n Group Key: a\r\n -> Seq Scan on test (cost=0.00..35.50 rows=2550 width=4) (actual\ntime=0.010..0.096 rows=10 loops=1)\r\nLOG: duration: 1.069 ms plan:\r\nQuery Text: SELECT * FROM test_count();\r\nFunction Scan on test_count (cost=0.25..10.25 rows=1000 width=12) (actual\ntime=0.895..0.968 rows=10 loops=1)\r\n a | n\r\n----+---\r\n 9 | 1\r\n 3 | 1\r\n 5 | 1\r\n 4 | 1\r\n 10 | 1\r\n 6 | 1\r\n 2 | 1\r\n 7 | 1\r\n 1 | 1\r\n 8 | 1\r\n(10 rows)\r\n\r\npostgres=# -- A workaround for long-running queries, using CREATE TABLE,\nwhich will run in parallel\r\npostgres=# CREATE OR REPLACE FUNCTION test_count2()\r\npostgres-# RETURNS TABLE (a int, n bigint) AS\r\npostgres-# $$\r\npostgres$# BEGIN\r\npostgres$# CREATE TEMPORARY TABLE test_count2_temp_table AS\r\npostgres$# SELECT test.a, count(*) FROM test GROUP BY test.a;\r\npostgres$# RETURN QUERY select * from test_count2_temp_table;\r\npostgres$# END;\r\npostgres$# $$\r\npostgres-# LANGUAGE PLPGSQL;\r\nCREATE FUNCTION\r\npostgres=#\r\npostgres=# -- The CREATE TABLE AS query will use parallel workers, but the\r\npostgres=# -- RETURN QUERY statement will not\r\npostgres=# SELECT * FROM test_count2();\r\nLOG: duration: 24.139 ms plan:\r\nQuery Text: 
CREATE TEMPORARY TABLE test_count2_temp_table AS\r\n SELECT test.a, count(*) FROM test GROUP BY test.a\r\nFinalize HashAggregate (cost=25.56..27.56 rows=200 width=12) (actual\ntime=21.819..21.896 rows=10 loops=1)\r\n Group Key: a\r\n -> Gather (cost=19.56..21.56 rows=800 width=12) (actual\ntime=0.755..22.966 rows=10 loops=1)\r\n Workers Planned: 4\r\n Workers Launched: 4\r\n -> Partial HashAggregate (cost=19.56..21.56 rows=200 width=12)\n(actual time=0.105..0.148 rows=2 loops=5)\r\n Group Key: a\r\n -> Parallel Seq Scan on test (cost=0.00..16.38 rows=638\nwidth=4) (actual time=0.009..0.056 rows=2 loops=5)\r\nLOG: duration: 0.420 ms plan:\r\nQuery Text: select * from test_count2_temp_table\r\nSeq Scan on test_count2_temp_table (cost=0.00..30.40 rows=2040 width=12)\n(actual time=0.014..0.305 rows=10 loops=1)\r\nLOG: duration: 26.118 ms plan:\r\nQuery Text: SELECT * FROM test_count2();\r\nFunction Scan on test_count2 (cost=0.25..10.25 rows=1000 width=12) (actual\ntime=25.845..25.994 rows=10 loops=1)\r\n a | n\r\n----+---\r\n 9 | 1\r\n 3 | 1\r\n 5 | 1\r\n 4 | 1\r\n 10 | 1\r\n 6 | 1\r\n 2 | 1\r\n 7 | 1\r\n 1 | 1\r\n 8 | 1\r\n(10 rows)\r\n\r\n\r\n\r\nIt's not obvious from the documentation\n(https://www.postgresql.org/docs/12/when-can-parallel-query-be-used.html)\nthat this should be the case. RETURN QUERY is not interruptible, like a\ncursor or for loop.", "msg_date": "Fri, 04 Oct 2019 20:20:32 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" }, { "msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> [ $SUBJECT ]\n\nI got around to looking at this today, and what I find is that the\nproblem is that exec_stmt_return_query() uses a portal (i.e. a cursor)\nto read the results of the query. 
That seemed like a good idea, back\nin the late bronze age, because it allowed plpgsql to fetch the query\nresults a few rows at a time and not risk blowing out memory with a huge\nSPI result. However, the parallel-query infrastructure refuses to\nparallelize when the query is being read via a cursor.\n\nI think that the latter restriction is probably sane, because we don't\nwant to suspend execution of a parallel query while we've got worker\nprocesses waiting. And there might be some implementation restrictions\nlurking under it too --- that's not a part of the code I know in any\ndetail.\n\nHowever, there's no fundamental reason why exec_stmt_return_query has\nto use a cursor. It's going to run the query to completion immediately\nanyway, and shove all the result rows into a tuplestore. What we lack\nis a way to get the SPI query to pass its results directly to a\ntuplestore, without the SPITupleTable intermediary. (Note that the\ntuplestore can spill a large result to disk, whereas SPITupleTable\ncan't do that.)\n\nSo, attached is a draft patch to enable that. By getting rid of the\nintermediate SPITupleTable, this should improve the performance of\nRETURN QUERY somewhat even without considering the possibility of\nparallelizing the source query. I've not tried to measure that though.\nI've also not looked for other places that could use this new\ninfrastructure, but there may well be some.\n\nOne thing I'm not totally pleased about with this is adding another\nSPI interface routine using the old parameter-values API (that is,\nnull flags as char ' '/'n'). That was the path of least resistance\ngiven the other moving parts in pl_exec.c and spi.c, but maybe we\nshould try to modernize that before we set it in stone.\n\nAnother thing standing between this patch and committability is suitable\nadditions to the SPI documentation. 
But I saw no value in writing that\nbefore the previous point is settled.\n\nI will go add this to the next commitfest (for v14), but I wonder\nif we should try to squeeze it into v13? This isn't the only\ncomplaint we've gotten about non-parallelizability of RETURN QUERY.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 21 Mar 2020 23:23:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" }, { "msg_contents": "ne 22. 3. 2020 v 4:23 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> PG Bug reporting form <noreply@postgresql.org> writes:\n> > [ $SUBJECT ]\n>\n> I got around to looking at this today, and what I find is that the\n> problem is that exec_stmt_return_query() uses a portal (i.e. a cursor)\n> to read the results of the query. That seemed like a good idea, back\n> in the late bronze age, because it allowed plpgsql to fetch the query\n> results a few rows at a time and not risk blowing out memory with a huge\n> SPI result. However, the parallel-query infrastructure refuses to\n> parallelize when the query is being read via a cursor.\n>\n> I think that the latter restriction is probably sane, because we don't\n> want to suspend execution of a parallel query while we've got worker\n> processes waiting. And there might be some implementation restrictions\n> lurking under it too --- that's not a part of the code I know in any\n> detail.\n>\n> However, there's no fundamental reason why exec_stmt_return_query has\n> to use a cursor. It's going to run the query to completion immediately\n> anyway, and shove all the result rows into a tuplestore. What we lack\n> is a way to get the SPI query to pass its results directly to a\n> tuplestore, without the SPITupleTable intermediary. (Note that the\n> tuplestore can spill a large result to disk, whereas SPITupleTable\n> can't do that.)\n>\n> So, attached is a draft patch to enable that. 
By getting rid of the\n> intermediate SPITupleTable, this should improve the performance of\n> RETURN QUERY somewhat even without considering the possibility of\n> parallelizing the source query. I've not tried to measure that though.\n> I've also not looked for other places that could use this new\n> infrastructure, but there may well be some.\n>\n> One thing I'm not totally pleased about with this is adding another\n> SPI interface routine using the old parameter-values API (that is,\n> null flags as char ' '/'n'). That was the path of least resistance\n> given the other moving parts in pl_exec.c and spi.c, but maybe we\n> should try to modernize that before we set it in stone.\n>\n> Another thing standing between this patch and committability is suitable\n> additions to the SPI documentation. But I saw no value in writing that\n> before the previous point is settled.\n>\n> I will go add this to the next commitfest (for v14), but I wonder\n> if we should try to squeeze it into v13? This isn't the only\n> complaint we've gotten about non-parallelizability of RETURN QUERY.\n>\n\n+1\n\nPavel\n\n\n> regards, tom lane\n>\n>\n", "msg_date": "Sun, 22 Mar 2020 07:48:14 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nAll good with this patch. \r\n\r\n-- \r\nHighgo Software (Canada/China/Pakistan)\r\nURL : www.highgo.ca\r\nADDR: 10318 WHALLEY BLVD, Surrey, BC\r\nCELL:+923335449950  EMAIL: mailto:hamid.akhtar@highgo.ca\r\nSKYPE: engineeredvirus\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 27 Mar 2020 18:22:18 +0000", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" }, { "msg_contents": "Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n\n> All good with this patch. \n\nThanks for testing!\n\nAnybody have an objection to cramming this into v13? It is a bit late,\nbut it seems like a performance bug fix ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Mar 2020 15:27:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" }, { "msg_contents": "I wrote:\n> ... attached is a draft patch to enable that. 
By getting rid of the\n> intermediate SPITupleTable, this should improve the performance of\n> RETURN QUERY somewhat even without considering the possibility of\n> parallelizing the source query. I've not tried to measure that though.\n> I've also not looked for other places that could use this new\n> infrastructure, but there may well be some.\n\n> One thing I'm not totally pleased about with this is adding another\n> SPI interface routine using the old parameter-values API (that is,\n> null flags as char ' '/'n'). That was the path of least resistance\n> given the other moving parts in pl_exec.c and spi.c, but maybe we\n> should try to modernize that before we set it in stone.\n\nHere's a revised patch that does the additional legwork needed to\nuse ParamListInfo throughout the newly-added code. I was able to\nget pl_exec.c out of the business of using old-style null flags\nentirely, which seems like a nice improvement.\n\n> Another thing standing between this patch and committability is suitable\n> additions to the SPI documentation. But I saw no value in writing that\n> before the previous point is settled.\n\nTook care of that too.\n\nI looked around for other places that could use this infrastructure.\nIt turns out that most places that are fetching via SPITupleTables\ndon't really have much of an issue, because they are only expecting\nto get one or so tuples anyway. There are a few where it might be\nworth changing, but it's hard to get really excited because they all\nhave other constraints on the max amount of data. As an example,\nthe various table-to-xml thingies in utils/adt/xml.c could be converted,\nbut they're still funneling their output into an XML string. As long\nas that has a hard limit at 1GB, it's not very realistic to expect that\nyou can shove huge tables into it.\n\nA different sort of cleanup we could undertake is to deprecate and\neventually remove some of the SPI API functions. 
As of this patch,\nfor example, SPI_cursor_open_with_args and SPI_execute_with_args are\nunused anywhere in our code. But since we document them, it's hard\nto guess whether any external code is relying on them. I suppose\ndeprecation would be a multi-year project in any case.\n\nI think this is committable now. Any objections?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 08 Jun 2020 20:11:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" }, { "msg_contents": "On Sat, Mar 21, 2020 at 11:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think that the latter restriction is probably sane, because we don't\n> want to suspend execution of a parallel query while we've got worker\n> processes waiting.\n\nRight.\n\n> And there might be some implementation restrictions\n> lurking under it too --- that's not a part of the code I know in any\n> detail.\n\nThere are. When you EnterParallelMode(), various normally-permissible\noptions are restricted and will error out (e.g. updating your snapshot\nor command ID). Parallel query's not safe unless you remain in\nparallel mode from start to finish, but that means you can't let\ncontrol escape into code that might do arbitrary things. That in a\nnutshell is why the cursor restriction is there.\n\nThis is a heck of a nice improvement. 
Thanks for working on it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 12 Jun 2020 14:13:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" }, { "msg_contents": "Hi,\n\nFirst congrats to the postgres 14 release 👏\n\nI’ve just started testing with it and I found some unexpected behavior with some plpgsql function.\nA function that inserts data and tries to return with a table now results in the error `query is not a SELECT`.\nIn previous versions that query succeeded.\n\nWhile the message got updated in https://www.postgresql.org/message-id/flat/1914708.1629474624%40sss.pgh.pa.us, the changes here might cause the actual issue.\nHere’s a quite simplified version to reproduce the issue.\nIs this some new expected behavior that’s not documented or mentioned in the change log?\n\nCREATE TABLE t (value text);\nCREATE FUNCTION t_insert(v text)\nRETURNS SETOF t\nAS '\nBEGIN\n RETURN QUERY\n INSERT INTO t (\"value\")\n VALUES (v)\n RETURNING *;\nEND\n' LANGUAGE plpgsql;\n\nSELECT * FROM t_insert('foo’);\n\nERROR: query is not a SELECT\n\n\nWhile a CTE query is working:\n\nCREATE OR REPLACE FUNCTION t_insert(v text) RETURNS SETOF t\nAS '\nBEGIN\n RETURN QUERY\n WITH q AS (INSERT INTO t (\"value\") VALUES (v) RETURNING *)\n SELECT * FROM q;\nEND\n' LANGUAGE plpgsql;\n\nSELECT * FROM t_insert('foo’);\n\nvalue\n--------\nfoo\n\n\n\n> On 12 Jun 2020, at 20:13, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Sat, Mar 21, 2020 at 11:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think that the latter restriction is probably sane, because we don't\n>> want to suspend execution of a parallel query while we've got worker\n>> processes waiting.\n> \n> Right.\n> \n>> And there might be some implementation restrictions\n>> lurking under it too --- that's not a part of the code I 
know in any\n>> detail.\n> \n> There are. When you EnterParallelMode(), various normally-permissible\n> options are restricted and will error out (e.g. updating your snapshot\n> or command ID). Parallel query's not safe unless you remain in\n> parallel mode from start to finish, but that means you can't let\n> control escape into code that might do arbitrary things. That in a\n> nutshell is why the cursor restriction is there.\n> \n> This is a heck of a nice improvement. Thanks for working on it.\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n> \n> \n> \n> \n\n", "msg_date": "Sun, 3 Oct 2021 04:20:17 +0200", "msg_from": "Marc Bachmann <marc.brookman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" }, { "msg_contents": "Marc Bachmann <marc.brookman@gmail.com> writes:\n> A function that inserts data and tries to return with a table now results in the error `query is not a SELECT`.\n> In previous versions that query succeeded.\n\nHmm ... I'm a bit surprised that that worked before, but since it did,\nwe shouldn't break it. It looks like this was an accidental side-effect\nof refactoring rather than something intentional. Will look closer\ntomorrow or so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Oct 2021 23:48:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" }, { "msg_contents": "I wrote:\n> Marc Bachmann <marc.brookman@gmail.com> writes:\n>> A function that inserts data and tries to return with a table now results in the error `query is not a SELECT`.\n>> In previous versions that query succeeded.\n\n> Hmm ... 
I'm a bit surprised that that worked before, but since it did,\n> we shouldn't break it.\n\nFix pushed, thanks for the report!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Oct 2021 13:22:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" }, { "msg_contents": "I’m happy to help.\nAnd sorry that you had to go through that many changes right after the release.\n\nKind regards\nMarc\n\n> On 3 Oct 2021, at 19:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> Marc Bachmann <marc.brookman@gmail.com> writes:\n>>> A function that inserts data and tries to return with a table now results in the error `query is not a SELECT`.\n>>> In previous versions that query succeeded.\n> \n>> Hmm ... I'm a bit surprised that that worked before, but since it did,\n>> we shouldn't break it.\n> \n> Fix pushed, thanks for the report!\n> \n> \t\t\tregards, tom lane\n\n\n\n", "msg_date": "Sun, 3 Oct 2021 20:01:56 +0200", "msg_from": "Marc Bachmann <marc.brookman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16040: PL/PGSQL RETURN QUERY statement never uses a parallel\n plan" } ]
[ { "msg_contents": "Hi,\n\nThere are some links referred in the source files which are currently\nnot working.\n\nThe below link:\n<http://www.UNIX-systems.org/online.html>\nis updated with:\n<http://www.unix.org/online.html>\n\nThe below links:\nhttp://www-01.ibm.com/support/knowledgecenter/SSGH2K_11.1.0/com.ibm.xlc111.aix.doc/language_ref/function_attributes.html\nhttp://www-01.ibm.com/support/knowledgecenter/SSGH2K_11.1.0/com.ibm.xlc111.aix.doc/language_ref/type_attrib.html\nare updated with:\nhttp://www-01.ibm.com/support/knowledgecenter/SSGH2K_13.1.2/com.ibm.xlc131.aix.doc/language_ref/function_attributes.html\nhttp://www-01.ibm.com/support/knowledgecenter/SSGH2K_13.1.2/com.ibm.xlc131.aix.doc/language_ref/type_attrib.html\n\nIn c.h the link was not updated but in generic-xlc.h the link has been\nupdated earlier.\n\nAttached patch contains the fix with the updated links.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 5 Oct 2019 07:13:45 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Updated some links which are not working with new links" }, { "msg_contents": "On Sat, Oct 5, 2019 at 7:13 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> There are some links referred in the source files which are currently\n> not working.\n>\n> The below link:\n> <http://www.UNIX-systems.org/online.html>\n> is updated with:\n> <http://www.unix.org/online.html>\n>\n> The below links:\n>\nhttp://www-01.ibm.com/support/knowledgecenter/SSGH2K_11.1.0/com.ibm.xlc111.aix.doc/language_ref/function_attributes.html\n>\nhttp://www-01.ibm.com/support/knowledgecenter/SSGH2K_11.1.0/com.ibm.xlc111.aix.doc/language_ref/type_attrib.html\n> are updated with:\n>\nhttp://www-01.ibm.com/support/knowledgecenter/SSGH2K_13.1.2/com.ibm.xlc131.aix.doc/language_ref/function_attributes.html\n>\nhttp://www-01.ibm.com/support/knowledgecenter/SSGH2K_13.1.2/com.ibm.xlc131.aix.doc/language_ref/type_attrib.html\n>\n> In 
c.h the link was not updated but in generic-xlc.h the link has been\n> updated earlier.\n>\nHi Michael,\nThe attached patch in previous mail contain the changes for the updated\nlinks requested in [1]. It is not the complete set, but it is the first set\nfor which I could find the equivalent links.\nhttps://www.postgresql.org/message-id/20191006074122.GC14532%40paquier.xyz\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Mon, 7 Oct 2019 09:38:41 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Updated some links which are not working with new links" }, { "msg_contents": "Hi Vignesh,\n\nOn Mon, Oct 07, 2019 at 09:38:41AM +0530, vignesh C wrote:\n> The attached patch in previous mail contain the changes for the updated\n> links requested in [1]. It is not the complete set, but it is the first set\n> for which I could find the equivalent links.\n> https://www.postgresql.org/message-id/20191006074122.GC14532%40paquier.xyz\n\nI may be missing something of course, but I do not see a patch neither\non this message nor on the previous one. Could you send a patch you\nthink is correct?\n--\nMichael", "msg_date": "Mon, 7 Oct 2019 16:47:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Updated some links which are not working with new links" }, { "msg_contents": "On Mon, Oct 7, 2019 at 1:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi Vignesh,\n>\n> On Mon, Oct 07, 2019 at 09:38:41AM +0530, vignesh C wrote:\n> > The attached patch in previous mail contain the changes for the updated\n> > links requested in [1]. It is not the complete set, but it is the first set\n> > for which I could find the equivalent links.\n> > https://www.postgresql.org/message-id/20191006074122.GC14532%40paquier.xyz\n>\n> I may be missing something of course, but I do not see a patch neither\n> on this message nor on the previous one. 
Could you send a patch you\n> think is correct?\nSorry Michael for the miscommunication, the patch was present in the\nfirst mail of this mail thread.\nI'm re-attaching the patch in this mail.\nLet me know if anything is required.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 7 Oct 2019 14:14:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Updated some links which are not working with new links" }, { "msg_contents": "On Mon, Oct 07, 2019 at 02:14:05PM +0530, vignesh C wrote:\n> Sorry Michael for the miscommunication, the patch was present in the\n> first mail of this mail thread.\n> I'm re-attaching the patch in this mail.\n> Let me know if anything is required.\n\nThanks. It looks like I have been able to miss the actual patch, and\ngot confused as there were two threads about more or less the same\nmatter. The links were redirected to an https equivalent, so applied\nwith that.\n--\nMichael", "msg_date": "Tue, 8 Oct 2019 14:36:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Updated some links which are not working with new links" }, { "msg_contents": "On Tue, Oct 8, 2019 at 11:06 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 07, 2019 at 02:14:05PM +0530, vignesh C wrote:\n> > Sorry Michael for the miscommunication, the patch was present in the\n> > first mail of this mail thread.\n> > I'm re-attaching the patch in this mail.\n> > Let me know if anything is required.\n>\n> Thanks. It looks like I have been able to miss the actual patch, and\n> got confused as there were two threads about more or less the same\n> matter. 
The links were redirected to an https equivalent, so applied\n> with that.\nThanks Michael.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Oct 2019 11:33:40 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Updated some links which are not working with new links" } ]
[ { "msg_contents": "Hi,\n\nYesterday we (that's me and my colleague Ricardo Gomez) were working on\nan issue where a monitoring script was returning increasing lag\ninformation on a primary instead of a NULL value.\n\nThe query used involved the following functions (the function was\namended to work-around the issue I'm reporting here):\n\npg_last_wal_receive_lsn()\npg_last_wal_replay_lsn()\npg_last_xact_replay_timestamp()\n\nUnder normal circumstances we would expect to receive NULLs from all\nthree functions on a primary node, and code comments back up my thoughts.\n\nThe problem is, what if the node is a standby which was promoted without\nrestarting, or that had to perform crash recovery?\n\nSo during the time it's recovering the values in ` XLogCtl` are updated\nwith recovery information, and once the recovery finishes, due to crash\nrecovery reaching a consistent state, or a promotion of a standby\nhappening, those values are not reset to startup defaults.\n\nThat's when you start seeing non-null values returned by\n`pg_last_wal_replay_lsn()`and `pg_last_xact_replay_timestamp()`.\n\nNow, I don't know if we should call this a bug, or an undocumented\nanomaly. 
We could fix the bug by resetting the values from ` XLogCtl`\nafter finishing recovery, or document that we might see non-NULL values\nin certain cases.\n\nRegards,\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Sat, 5 Oct 2019 08:43:03 -0300", "msg_from": "=?UTF-8?Q?Mart=c3=adn_Marqu=c3=a9s?= <martin@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Non-null values of recovery functions after promote or crash of\n primary" }, { "msg_contents": "Greetings,\n\n* Martín Marqués (martin@2ndquadrant.com) wrote:\n> pg_last_wal_receive_lsn()\n> pg_last_wal_replay_lsn()\n> pg_last_xact_replay_timestamp()\n> \n> Under normal circumstances we would expect to receive NULLs from all\n> three functions on a primary node, and code comments back up my thoughts.\n\nAgreed.\n\n> The problem is, what if the node is a standby which was promoted without\n> restarting, or that had to perform crash recovery?\n> \n> So during the time it's recovering the values in ` XLogCtl` are updated\n> with recovery information, and once the recovery finishes, due to crash\n> recovery reaching a consistent state, or a promotion of a standby\n> happening, those values are not reset to startup defaults.\n> \n> That's when you start seeing non-null values returned by\n> `pg_last_wal_replay_lsn()`and `pg_last_xact_replay_timestamp()`.\n> \n> Now, I don't know if we should call this a bug, or an undocumented\n> anomaly. 
We could fix the bug by resetting the values from ` XLogCtl`\n> after finishing recovery, or document that we might see non-NULL values\n> in certain cases.\n\nIMV, and not unlike other similar cases I've talked about on another\nthread, these should be cleared when the system is promoted as they're\notherwise confusing and nonsensical.\n\nThanks,\n\nStephen", "msg_date": "Tue, 8 Oct 2019 14:03:02 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Non-null values of recovery functions after promote or crash of\n primary" }, { "msg_contents": "Hi,\n\n> IMV, and not unlike other similar cases I've talked about on another\n> thread, these should be cleared when the system is promoted as they're\n> otherwise confusing and nonsensical.\n\nKeep in mind that this also happens when the server crashes and has to\nperform crash recovery. In that case the server was always a primary.\n\n--\nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 8 Oct 2019 16:56:38 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Non-null values of recovery functions after promote or crash of\n primary" } ]
[ { "msg_contents": "Report test_atomic_ops() failures consistently, via macros.\n\nThis prints the unexpected value in more failure cases, and it removes\nforty-eight hand-maintained error messages. Back-patch to 9.5, which\nintroduced these tests.\n\nReviewed (in an earlier version) by Andres Freund.\n\nDiscussion: https://postgr.es/m/20190915160021.GA24376@alvherre.pgsql\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/e800bd7414df3ce8170761e5b75b13e83f576988\n\nModified Files\n--------------\nsrc/test/regress/regress.c | 227 ++++++++++++++++-----------------------------\n1 file changed, 81 insertions(+), 146 deletions(-)", "msg_date": "Sat, 05 Oct 2019 17:08:38 +0000", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "pgsql: Report test_atomic_ops() failures consistently, via macros." }, { "msg_contents": "Hi,\n\nOn 2019-10-05 17:08:38 +0000, Noah Misch wrote:\n> Report test_atomic_ops() failures consistently, via macros.\n> \n> This prints the unexpected value in more failure cases, and it removes\n> forty-eight hand-maintained error messages. Back-patch to 9.5, which\n> introduced these tests.\n\nThanks for these, that's a nice improvement.\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=639feb92e1186e1de495d1bfdb191869cb56450a\n\n...\n+#define EXPECT_EQ_U32(result_expr, expected_expr) \\\n+ do { \\\n+ uint32 result = (result_expr); \\\n+ uint32 expected = (expected_expr); \\\n+ if (result != expected) \\\n+ elog(ERROR, \\\n+ \"%s yielded %u, expected %s in file \\\"%s\\\" line %u\", \\\n+ #result_expr, result, #expected_expr, __FILE__, __LINE__); \\\n+ } while (0)\n...\n\n\nI wonder if we should put these (and a few more, for other types), into\na more general place. I would like to have them for writing both tests\nlike regress.c:test_atomic_ops(), and for writing assertions that\nactually display useful error messages. 
For the former it makes sense\nto ERROR out, for the latter they ought to abort, as currently.\n\nSeems like putting ASSERT_{EQ,LT,...}_{U32,S32,...} (or Assert_Eq_...,\nbut that'd imo look weirder than the inconsistency) into c.h would make\nsense, and EXPECT_ somewhere in common/pg_test.h or such?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 5 Oct 2019 12:07:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "expressive test macros (was: Report test_atomic_ops() failures\n consistently, via macros)" }, { "msg_contents": "On Sat, Oct 05, 2019 at 12:07:29PM -0700, Andres Freund wrote:\n> +#define EXPECT_EQ_U32(result_expr, expected_expr) \\\n> + do { \\\n> + uint32 result = (result_expr); \\\n> + uint32 expected = (expected_expr); \\\n> + if (result != expected) \\\n> + elog(ERROR, \\\n> + \"%s yielded %u, expected %s in file \\\"%s\\\" line %u\", \\\n> + #result_expr, result, #expected_expr, __FILE__, __LINE__); \\\n> + } while (0)\n> ...\n> \n> \n> I wonder if we should put these (and a few more, for other types), into\n> a more general place. I would like to have them for writing both tests\n> like regress.c:test_atomic_ops(), and for writing assertions that\n> actually display useful error messages. For the former it makes sense\n> to ERROR out, for the latter they ought to abort, as currently.\n> \n> Seems like putting ASSERT_{EQ,LT,...}_{U32,S32,...} (or Assert_Eq_...,\n> but that'd imo look weirder than the inconsistency) into c.h would make\n> sense, and EXPECT_ somewhere in common/pg_test.h or such?\n\nSounds reasonable. 
For broader use, I would include the expected value, not\njust expected_expr:\n\n elog(ERROR, \\\n \"%s yielded %u, expected %s (%u) in file \\\"%s\\\" line %u\", \\\n #result_expr, result, #expected_expr, expected, __FILE__, __LINE__); \\\n\nI didn't do that for the atomics tests, where expected_expr is always trivial.\nThe codebase has plenty of Assert(x == y) where either of x or y could have\nthe surprising value.\n\n\n", "msg_date": "Sat, 5 Oct 2019 19:20:38 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: expressive test macros (was: Report test_atomic_ops() failures\n consistently, via macros)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-10-05 17:08:38 +0000, Noah Misch wrote:\n>> Report test_atomic_ops() failures consistently, via macros.\n\n> I wonder if we should put these (and a few more, for other types), into\n> a more general place. I would like to have them for writing both tests\n> like regress.c:test_atomic_ops(), and for writing assertions that\n> actually display useful error messages. For the former it makes sense\n> to ERROR out, for the latter they ought to abort, as currently.\n\nIMO, anything named like \"assert\" ought to act like Assert does now,\nie (1) it's a no-op in a non-assert build and (2) you get an abort()\non failure. No strong opinions about what the test-and-elog variant\nshould be called -- but it seems like we might have some difficulty\nagreeing on what the appropriate error level is for that. If it's\nmorally like an Assert except we want it on all the time, should\nit be PANIC? What will happen in frontend code?\n\n> Seems like putting ASSERT_{EQ,LT,...}_{U32,S32,...} (or Assert_Eq_...,\n> but that'd imo look weirder than the inconsistency) into c.h would make\n> sense, and EXPECT_ somewhere in common/pg_test.h or such?\n\nI'd just put them all in c.h. 
I see no reason why a new header\nis helpful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Oct 2019 13:57:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: expressive test macros (was: Report test_atomic_ops() failures\n consistently, via macros)" }, { "msg_contents": "On 2019-10-06 04:20, Noah Misch wrote:\n>> Seems like putting ASSERT_{EQ,LT,...}_{U32,S32,...} (or Assert_Eq_...,\n>> but that'd imo look weirder than the inconsistency) into c.h would make\n>> sense, and EXPECT_ somewhere in common/pg_test.h or such?\n> \n> Sounds reasonable. For broader use, I would include the expected value, not\n> just expected_expr:\n> \n> elog(ERROR, \\\n> \"%s yielded %u, expected %s (%u) in file \\\"%s\\\" line %u\", \\\n> #result_expr, result, #expected_expr, expected, __FILE__, __LINE__); \\\n> \n> I didn't do that for the atomics tests, where expected_expr is always trivial.\n> The codebase has plenty of Assert(x == y) where either of x or y could have\n> the surprising value.\n\nI've been meaning to propose some JUnit-style more-specific Assert\nvariants such as AssertEquals for this reason. But as Tom writes\nnearby, it should be a straight wrapper around Assert, not elog. So\nthese need to be named separately.\n\nBtw., JUnit uses the ordering convention assertEquals(expected, actual),\nwhereas Perl Test::More uses is(actual, expected). Let's make sure we\npick something and stick with it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 7 Oct 2019 21:56:20 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: expressive test macros (was: Report test_atomic_ops() failures\n consistently, via macros)" }, { "msg_contents": "On 2019-10-07 19:57, Tom Lane wrote:\n> I'd just put them all in c.h. 
I see no reason why a new header\n> is helpful.\n\nAssert stuff is already in there, but surely stuff that calls elog()\ndoesn't belong in there?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 7 Oct 2019 21:58:08 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: expressive test macros (was: Report test_atomic_ops() failures\n consistently, via macros)" }, { "msg_contents": "Hi,\n\nOn 2019-10-07 13:57:41 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-10-05 17:08:38 +0000, Noah Misch wrote:\n> >> Report test_atomic_ops() failures consistently, via macros.\n>\n> > I wonder if we should put these (and a few more, for other types), into\n> > a more general place. I would like to have them for writing both tests\n> > like regress.c:test_atomic_ops(), and for writing assertions that\n> > actually display useful error messages. For the former it makes sense\n> > to ERROR out, for the latter they ought to abort, as currently.\n>\n> IMO, anything named like \"assert\" ought to act like Assert does now,\n> ie (1) it's a no-op in a non-assert build and (2) you get an abort()\n> on failure.\n\nNo disagreement at all.\n\n\n> No strong opinions about what the test-and-elog variant\n> should be called -- but it seems like we might have some difficulty\n> agreeing on what the appropriate error level is for that. If it's\n> morally like an Assert except we want it on all the time, should\n> it be PANIC?\n\nPerhaps it ought to just take elevel as a parameter? Could even be\nuseful for debugging...\n\n\n> What will happen in frontend code?\n\nHm. 
Map to pg_log_*, and abort() if it's an erroring elevel?\n\n\n> > Seems like putting ASSERT_{EQ,LT,...}_{U32,S32,...} (or Assert_Eq_...,\n> > but that'd imo look weirder than the inconsistency) into c.h would make\n> > sense, and EXPECT_ somewhere in common/pg_test.h or such?\n>\n> I'd just put them all in c.h. I see no reason why a new header\n> is helpful.\n\nWFM.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 7 Oct 2019 13:06:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: expressive test macros (was: Report test_atomic_ops() failures\n consistently, via macros)" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-10-07 19:57, Tom Lane wrote:\n>> I'd just put them all in c.h. I see no reason why a new header\n>> is helpful.\n\n> Assert stuff is already in there, but surely stuff that calls elog()\n> doesn't belong in there?\n\nTrue, though I had the impression that Andres wanted to propose things\nthat would work in either frontend or backend, presumably with different\nimplementations. You could argue it either way as to whether to have\nthat in c.h (with an #ifdef) or separately in postgres.h and\npostgres_fe.h.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Oct 2019 16:07:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: expressive test macros (was: Report test_atomic_ops() failures\n consistently, via macros)" }, { "msg_contents": "Hi,\n\nOn 2019-10-07 21:58:08 +0200, Peter Eisentraut wrote:\n> On 2019-10-07 19:57, Tom Lane wrote:\n> > I'd just put them all in c.h. I see no reason why a new header\n> > is helpful.\n> \n> Assert stuff is already in there, but surely stuff that calls elog()\n> doesn't belong in there?\n\nMake it call an ExpectFailure() (or similar) function that takes the\nvarious parameters (expression, file, line, severity, format string,\nargs) as an argument? 
And that's implemented in terms of elog() in the\nbackend, and pg_log* + abort() (when appropriate) in the frontend?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 7 Oct 2019 13:10:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: expressive test macros (was: Report test_atomic_ops() failures\n consistently, via macros)" }, { "msg_contents": "On Mon, Oct 07, 2019 at 09:56:20PM +0200, Peter Eisentraut wrote:\n> On 2019-10-06 04:20, Noah Misch wrote:\n> > elog(ERROR, \\\n> > \"%s yielded %u, expected %s (%u) in file \\\"%s\\\" line %u\", \\\n> > #result_expr, result, #expected_expr, expected, __FILE__, __LINE__); \\\n\n> I've been meaning to propose some JUnit-style more-specific Assert\n> variants such as AssertEquals for this reason. But as Tom writes\n> nearby, it should be a straight wrapper around Assert, not elog. So\n> these need to be named separately.\n\nAgreed.\n\n> Btw., JUnit uses the ordering convention assertEquals(expected, actual),\n> whereas Perl Test::More uses is(actual, expected). Let's make sure we\n> pick something and stick with it.\n\nSince we write \"if (actual == expected)\", I prefer f(actual, expected). CUnit\nuses CU_ASSERT_EQUAL(actual, expected).\n\n\n", "msg_date": "Mon, 7 Oct 2019 21:59:46 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: expressive test macros (was: Report test_atomic_ops() failures\n consistently, via macros)" }, { "msg_contents": "On 2019-10-08 06:59, Noah Misch wrote:\n>> Btw., JUnit uses the ordering convention assertEquals(expected, actual),\n>> whereas Perl Test::More uses is(actual, expected). Let's make sure we\n>> pick something and stick with it.\n> Since we write \"if (actual == expected)\", I prefer f(actual, expected). 
CUnit\n> uses CU_ASSERT_EQUAL(actual, expected).\n\nYes, that seems to be the dominating order outside of JUnit.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 9 Oct 2019 17:47:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: expressive test macros (was: Report test_atomic_ops() failures\n consistently, via macros)" } ]
[ { "msg_contents": "This thread is a follow up to the thread https://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m where I've been trying to remove StdRdOptions \nstructure and replace it with unique structure for each relation kind.\n\nI've decided to split that patch into smaller parts.\n\nThis part adds some asserts to ViewOptions macroses.\nSince an option pointer there is converted into (ViewOptions *) it would be \nreally good to make sure that this macros is called in proper context, and we \ndo the convertation properly. At least when running tests with asserts turned \non.\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Sun, 06 Oct 2019 00:23:00 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "[PATCH] Add some useful asserts into View Options macroses" }, { "msg_contents": "On Sat, Oct 5, 2019 at 5:23 PM Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> This thread is a follow up to the thread https://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m where I've been trying to remove StdRdOptions\n> structure and replace it with unique structure for each relation kind.\n>\n> I've decided to split that patch into smaller parts.\n>\n> This part adds some asserts to ViewOptions macroses.\n> Since an option pointer there is converted into (ViewOptions *) it would be\n> really good to make sure that this macros is called in proper context, and we\n> do the convertation properly. At least when running tests with asserts turned\n> on.\n\nSeems like a good idea. 
Should we try to do something similar for the\nmacros in that header file that cast to StdRdOptions?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Oct 2019 12:59:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add some useful asserts into View Options macroses" }, { "msg_contents": "In a message dated Monday, 7 October 2019 12:59:27 MSK, Robert \nHaas wrote:\n\n> > This thread is a follow up to the thread\n> > https://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m where I've\n> > been trying to remove StdRdOptions structure and replace it with unique\n> > structure for each relation kind.\n> > \n> > I've decided to split that patch into smaller parts.\n> > \n> > This part adds some asserts to ViewOptions macroses.\n> > Since an option pointer there is converted into (ViewOptions *) it would\n> > be\n> > really good to make sure that this macros is called in proper context, and\n> > we do the convertation properly. At least when running tests with asserts\n> > turned on.\n> Seems like a good idea. Should we try to do something similar for the\n> macros in that header file that cast to StdRdOptions?\n\nThat would not be as easy as for ViewOptions. For example as for the current \nmaster code, fillfactor from StdRdOptions is used in Toast, Heap, Hash index, \nnbtree index, and spgist index. This will make RelationGetFillFactor macros a \nbit complicated for example.\n\nNow I have patches that limits usage of StdRdOptions to Heap and Toast.\n\nWhen StdRdOptions is not that widely used, we would be able to add asserts \nfor it, it will not make the code too complex.\n\nSo I would suggest to do ViewOptions asserts now, and keep dealing with \nStdRdOptions for later. 
When we are finished with my current patches, I will \ntake care about it.\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)\n\n\n", "msg_date": "Tue, 08 Oct 2019 13:44:25 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add some useful asserts into View Options macroses" }, { "msg_contents": "On 2019-10-08 12:44, Nikolay Shaplov wrote:\n> In a message dated Monday, 7 October 2019 12:59:27 MSK, Robert\n> Haas wrote:\n> \n>>> This thread is a follow up to the thread\n>>> https://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m where I've\n>>> been trying to remove StdRdOptions structure and replace it with unique\n>>> structure for each relation kind.\n>>>\n>>> I've decided to split that patch into smaller parts.\n>>>\n>>> This part adds some asserts to ViewOptions macroses.\n>>> Since an option pointer there is converted into (ViewOptions *) it would\n>>> be\n>>> really good to make sure that this macros is called in proper context, and\n>>> we do the convertation properly. At least when running tests with asserts\n>>> turned on.\n\nCommitted.\n\nI simplified the parentheses by one level from your patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 1 Nov 2019 13:29:58 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add some useful asserts into View Options macroses" }, { "msg_contents": "In a message dated Friday, 1 November 2019 13:29:58 MSK, Peter \nEisentraut wrote:\n \n> Committed.\n> \n> I simplified the parentheses by one level from your patch.\nThank you!\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Fri, 08 Nov 2019 20:20:59 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add some useful asserts into View Options macroses" } ]
[ { "msg_contents": "Hello\n\nWhile playing around I noticed that depending on the number of parallel\nworkers in pg_restore compared to the number of partitions a table has,\nrestoring an FK fails because the FK itself is restored before the index\npartitions have completed restoring. The exact conditions to cause the\nfailure seem to vary depending on whether the dump is schema-only or not.\n\nThis can seemingly be fixed by having pg_dump make the constraint depend\non the attach of each partition, as in the attached patch. With this\npatch I no longer see failures.\n\n\nThis patch is a bit weird because I added a new \"simple list\" type, to\nstore pointers. One alternative would be to store the dumpId values for\nthe partitions instead, but we don't have a dumpId-typed simple list\neither. We could solve that by casting the dumpId to Oid, but that\nseems almost as strange as the current proposal.\n\nThe other thing that makes this patch a little weird is that we have to\nscan the list of indexes in the referenced partitioned table in order to\nfind the correct one. This should be okay, as the number of indexes in\nany one table is not expected to grow very large. 
This isn't easy to\nfix because we don't have a bsearchable array of indexes like we do of\nother object types, and this already requires some contortions nearby.\nStill, I'm not sure that this absolutely needs fixing now.\n\n\n-- \nÁlvaro Herrera Developer, https://www.PostgreSQL.org/", "msg_date": "Sat, 5 Oct 2019 19:43:33 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "parallel restore sometimes fails for FKs to partitioned tables" }, { "msg_contents": "On 2019-Oct-05, Alvaro Herrera wrote:\n\n> While playing around I noticed that depending on the number of parallel\n> workers in pg_restore compared to the number of partitions a table has,\n> restoring an FK fails because the FK itself is restored before the index\n> partitions have completed restoring. The exact conditions to cause the\n> failure seem to vary depending on whether the dump is schema-only or not.\n> \n> This can seemingly be fixed by having pg_dump make the constraint depend\n> on the attach of each partition, as in the attached patch. With this\n> patch I no longer see failures.\n\nPushed with some additional tweaks.\n\nParallel restore of partitioned tables containing data and unique keys\nstill fails with some deadlock errors, though :-(\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 17 Oct 2019 05:14:25 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: parallel restore sometimes fails for FKs to partitioned tables" } ]
[ { "msg_contents": "Hi,\n\nThere are few links present in our source files for which the web links are no more active.\nDetails for the same is given below:\n\nSl No\nLink\nReferred File\n1\nhttp://h21007.www2.hp.com/portal/download/files/unprot/Itanium/inline_assem_ERS.pdf <http://h21007.www2.hp.com/portal/download/files/unprot/Itanium/inline_assem_ERS.pdf>\t\ngeneric-acc.h\n2\nhttp://h21007.www2.hp.com/portal/download/files/unprot/itanium/spinlocks.pdf <http://h21007.www2.hp.com/portal/download/files/unprot/itanium/spinlocks.pdf>\t\ngeneric-acc.h\n3\nhttp://www.comp.nus.edu.sg/~wuyongzh/my_doc/ntstatus.txt <http://www.comp.nus.edu.sg/~wuyongzh/my_doc/ntstatus.txt>\t\nwin32_port.h\n4\nhttp://www.microsoft.com/msj/0197/exception/exception.aspx <http://www.microsoft.com/msj/0197/exception/exception.aspx>\t\nwin32_port.h\n5\nhttp://www.nologs.com/ntstatus.html <http://www.nologs.com/ntstatus.html>\t\nwin32_port.h\n6\nhttp://www.ross.net/crc/ <http://www.ross.net/crc/> \npg_crc.h\n7\nhttp://www.ross.net/crc/download/crc_v3.txt <http://www.ross.net/crc/download/crc_v3.txt> \npg_crc.h\n8\nhttp://www.codeproject.com/string/dmetaphone1.asp <http://www.codeproject.com/string/dmetaphone1.asp>\t\ndmetaphone.c\n9\nhttp://www.postgresql.org/2009/explain <http://www.postgresql.org/2009/explain>\t\nexplain.c\n10\nhttp://www.merriampark.com/ld.htm <http://www.merriampark.com/ld.htm>\t\nlevenshtein.c\n11\nhttp://www.cs.auckland.ac.nz/software/AlgAnim/niemann/s_man.htm <http://www.cs.auckland.ac.nz/software/AlgAnim/niemann/s_man.htm> \nrbtree.c\n\nI could not find the equivalent links for the same.\nShould we update the links for the same?\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Sun, 6 Oct 2019 09:06:44 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Non-Active links being referred in our source code" }, { "msg_contents": "On Sun, Oct 06, 2019 at 09:06:44AM +0530, vignesh C wrote:\n> I could not find the equivalent links for the same.\n> Should we update the links for the same?\n\nIf it could be possible to find equivalent links which could update\nto, it would be nice.\n--\nMichael", "msg_date": "Sun, 6 Oct 2019 16:41:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Non-Active links being referred in our source code" }, { "msg_contents": "On Sun, Oct 6, 2019 at 9:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Oct 06, 2019 at 09:06:44AM +0530, vignesh C wrote:\n> > I could not find the equivalent links for the same.\n> > Should we update the links for the same?\n>\n> If it could be possible to find equivalent links which could update\n> to, it would be nice.\n\nAbout the broken links in win32_port.h, they are all referring to\nntstatus. 
As for first case that shows the code groups, there is an up\nto date alternative. There is also an alternative for second case that\npoints to their codes and descriptions. On the other hand, the last\ncase is quoting a document that is no longer available, I would\nsuggest to rephrase the comment, thus eliminating the quote.\n\nPlease find attached a patch with the proposed alternatives.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Mon, 7 Oct 2019 17:11:40 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Non-Active links being referred in our source code" }, { "msg_contents": "On Mon, Oct 07, 2019 at 05:11:40PM +0200, Juan José Santamaría Flecha wrote:\n> About the broken links in win32_port.h, they are all referring to\n> ntstatus. As for first case that shows the code groups, there is an up\n> to date alternative. There is also an alternative for second case that\n> points to their codes and descriptions. On the other hand, the last\n> case is quoting a document that is no longer available, I would\n> suggest to rephrase the comment, thus eliminating the quote.\n> \n> Please find attached a patch with the proposed alternatives.\n\nThanks Juan for the patch. I have checked your suggestions and it\nlooks good to me, so committed. Good idea to tell about\nWIN32_NO_STATUS. I have noticed one typo on the way.\n--\nMichael", "msg_date": "Tue, 8 Oct 2019 14:05:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Non-Active links being referred in our source code" }, { "msg_contents": "On Tue, Oct 8, 2019 at 10:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 07, 2019 at 05:11:40PM +0200, Juan José Santamaría Flecha wrote:\n> > About the broken links in win32_port.h, they are all referring to\n> > ntstatus. 
As for first case that shows the code groups, there is an up\n> > to date alternative. There is also an alternative for second case that\n> > points to their codes and descriptions. On the other hand, the last\n> > case is quoting a document that is no longer available, I would\n> > suggest to rephrase the comment, thus eliminating the quote.\n> >\n> > Please find attached a patch with the proposed alternatives.\n>\n> Thanks Juan for the patch. I have checked your suggestions and it\n> looks good to me, so committed. Good idea to tell about\n> WIN32_NO_STATUS. I have noticed one typo on the way.\nAbout pg_crc.h, I have made the changes with the correct links.\nThe patch for the same is attached.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 14 Oct 2019 21:48:12 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Non-Active links being referred in our source code" }, { "msg_contents": "On Mon, Oct 14, 2019 at 09:48:12PM +0530, vignesh C wrote:\n> About pg_crc.h, I have made the changes with the correct links.\n> The patch for the same is attached.\n\nConfirmed, so applied. Thanks, Vignesh.\n--\nMichael", "msg_date": "Wed, 16 Oct 2019 15:11:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Non-Active links being referred in our source code" } ]
[ { "msg_contents": "As per docs [1] (see maintenance_work_mem), the maximum amount of memory\nused by the Vacuum command must be no more than maintenance_work_mem.\nHowever, during the review/discussion of the \"parallel vacuum\" patch [2],\nwe observed that it is not true. Basically, if there is a gin index\ndefined on a table, then the vacuum on that table can consume up to 2\n* maintenance_work_mem memory space. The vacuum can use\nmaintenance_work_mem memory space to keep track of dead tuples and\nanother maintenance_work_mem memory space to move tuples from pending pages\ninto regular GIN structure (see ginInsertCleanup). The behavior related\nto Gin index consuming extra maintenance_work_mem memory is introduced by\ncommit e2c79e14d998cd31f860854bc9210b37b457bb01. It is not clear to me if\nthis is acceptable behavior and if so, shouldn't we document it?\n\nWe wanted to decide how a parallel vacuum should use memory? Can each\nworker consume maintenance_work_mem to clean up the gin Index or all\nworkers should use no more than maintenance_work_mem? We were thinking of\nlater but before we decide what is the right behavior for parallel vacuum,\nI thought it is better to once discuss if the current memory usage model is\nright.\n\n\n[1] - https://www.postgresql.org/docs/devel/runtime-config-resource.html\n[2] -\nhttps://www.postgresql.org/message-id/CAD21AoARj%3De%3D6_KOZnaR66jRkDmGaVdLcrt33Ua-zMUugKU3mQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 6 Oct 2019 16:24:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Sun, Oct 6, 2019 at 6:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> As per docs [1] (see maintenance_work_mem), the maximum amount of memory used by the Vacuum command must be no more than maintenance_work_mem. However, during the review/discussion of the \"parallel vacuum\" patch [2], we observed that it is not true. Basically, if there is a gin index defined on a table, then the vacuum on that table can consume up to 2 * maintenance_work_mem memory space. The vacuum can use maintenance_work_mem memory space to keep track of dead tuples and another maintenance_work_mem memory space to move tuples from pending pages into regular GIN structure (see ginInsertCleanup). 
The behavior related to Gin index consuming extra maintenance_work_mem memory is introduced by commit e2c79e14d998cd31f860854bc9210b37b457bb01. It is not clear to me if this is acceptable behavior and if so, shouldn't we document it?\n\nI would say that sucks, because it makes it harder to set\nmaintenance_work_mem correctly. Not sure how hard it would be to fix,\nthough.\n\n> We wanted to decide how a parallel vacuum should use memory? Can each worker consume maintenance_work_mem to clean up the gin Index or all workers should use no more than maintenance_work_mem? We were thinking of later but before we decide what is the right behavior for parallel vacuum, I thought it is better to once discuss if the current memory usage model is right.\n\nWell, I had the idea when we were developing parallel query that we\nshould just ignore the problem of work_mem: every node can use X\namount of work_mem, and if there are multiple copies of the node in\nmultiple processes, then you probably end up using more memory. I\nhave been informed by Thomas Munro -- in very polite terminology --\nthat this was a terrible decision which is causing all kinds of\nproblems for users. I haven't actually encountered that situation\nmyself, but I don't doubt that it's an issue.\n\nI think it's a lot easier to do better when we're talking about\nmaintenance commands rather than queries. Maintenance operations\ntypically don't have the problem that queries do with an unknown\nnumber of nodes using memory; you typically know all of your memory\nneeds up front. So it's easier to budget that out across workers or\nwhatever. It's a little harder in this case, because you could have\nany number of GIN indexes (1 to infinity) and the amount of memory you\ncan use depends on not only on how many of them there are but,\npresumably also, the number of those that are going to be vacuumed at\nthe same time. So you might have 8 indexes, 3 workers, and 2 of the\nindexes are GIN.
In that case, you know that you can't have more than\n2 GIN indexes being processed at the same time, but it's likely to be\nonly one, and maybe with proper scheduling you could make it sure it's\nonly one. On the other hand, if you dole out the memory assuming it's\nonly 1, what happens if you start that one, then process all 6 of the\nnon-GIN indexes, and that one isn't done yet. I guess you could wait\nto start cleanup on the other GIN indexes until the previous index\ncleanup finishes, but that kinda sucks too. So I'm not really sure how\nto handle this particular case. I think the principle of dividing up\nthe memory rather than just using more is probably a good one, but\nfiguring out exactly how that should work seems tricky.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Oct 2019 15:27:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Mon, Oct 7, 2019 at 12:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I would say that sucks, because it makes it harder to set\n> maintenance_work_mem correctly. Not sure how hard it would be to fix,\n> though.\n\nginInsertCleanup() may now be the worst piece of code in the entire\ntree, so no surprised that it gets this wrong too.\n\n2016's commit e2c79e14d99 ripped out the following comment about the\nuse of maintenance_work_mem by ginInsertCleanup():\n\n@@ -821,13 +847,10 @@ ginInsertCleanup(GinState *ginstate,\n * Is it time to flush memory to disk?
Flush if we are at the end of\n * the pending list, or if we have a full row and memory is getting\n * full.\n- *\n- * XXX using up maintenance_work_mem here is probably unreasonably\n- * much, since vacuum might already be using that much.\n */\n\nISTM that the use of maintenance_work_mem wasn't given that much\nthought originally.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 7 Oct 2019 13:17:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Tue, Oct 8, 2019 at 1:48 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Oct 7, 2019 at 12:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I would say that sucks, because it makes it harder to set\n> > maintenance_work_mem correctly. Not sure how hard it would be to fix,\n> > though.\n>\n> ginInsertCleanup() may now be the worst piece of code in the entire\n> tree, so no surprised that it gets this wrong too.\n>\n> 2016's commit e2c79e14d99 ripped out the following comment about the\n> use of maintenance_work_mem by ginInsertCleanup():\n>\n> @@ -821,13 +847,10 @@ ginInsertCleanup(GinState *ginstate,\n> * Is it time to flush memory to disk?
Flush if we are at the end of\n> * the pending list, or if we have a full row and memory is getting\n> * full.\n> - *\n> - * XXX using up maintenance_work_mem here is probably unreasonably\n> - * much, since vacuum might already be using that much.\n> */\n>\n> ISTM that the use of maintenance_work_mem wasn't given that much\n> thought originally.\n>\n\nOne idea to something better could be to check, if there is a GIN\nindex on a table, then use 1/4 (25% or whatever) of\nmaintenance_work_mem for GIN indexes and 3/4 (75%) of\nmaintenance_work_mem for collection dead tuples.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Oct 2019 11:14:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Tue, Oct 8, 2019 at 12:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sun, Oct 6, 2019 at 6:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > We wanted to decide how a parallel vacuum should use memory? Can each worker consume maintenance_work_mem to clean up the gin Index or all workers should use no more than maintenance_work_mem? We were thinking of later but before we decide what is the right behavior for parallel vacuum, I thought it is better to once discuss if the current memory usage model is right.\n>\n> Well, I had the idea when we were developing parallel query that we\n> should just ignore the problem of work_mem: every node can use X\n> amount of work_mem, and if there are multiple copies of the node in\n> multiple processes, then you probably end up using more memory. I\n> have been informed by Thomas Munro -- in very polite terminology --\n> that this was a terrible decision which is causing all kinds of\n> problems for users.
I haven't actually encountered that situation\n> myself, but I don't doubt that it's an issue.\n>\n> I think it's a lot easier to do better when we're talking about\n> maintenance commands rather than queries. Maintenance operations\n> typically don't have the problem that queries do with an unknown\n> number of nodes using memory; you typically know all of your memory\n> needs up front. So it's easier to budget that out across workers or\n> whatever. It's a little harder in this case, because you could have\n> any number of GIN indexes (1 to infinity) and the amount of memory you\n> can use depends on not only on how many of them there are but,\n> presumably also, the number of those that are going to be vacuumed at\n> the same time. So you might have 8 indexes, 3 workers, and 2 of the\n> indexes are GIN. In that case, you know that you can't have more than\n> 2 GIN indexes being processed at the same time, but it's likely to be\n> only one, and maybe with proper scheduling you could make it sure it's\n> only one. On the other hand, if you dole out the memory assuming it's\n> only 1, what happens if you start that one, then process all 6 of the\n> non-GIN indexes, and that one isn't done yet. I guess you could wait\n> to start cleanup on the other GIN indexes until the previous index\n> cleanup finishes, but that kinda sucks too. So I'm not really sure how\n> to handle this particular case. I think the principle of dividing up\n> the memory rather than just using more is probably a good one, but\n> figuring out exactly how that should work seems tricky.\n>\n\nYeah and what if we have workers equal to indexes, so doing the clean\nup of Gin indexes serially (wait for the prior index to finish before\nstarting the clean up of next Gin index) in that case would be bad\ntoo.
I think we can do something simple like choose minimum among\n'number of Gin Indexes', 'number of workers requested for parallel\nvacuum' and 'number of max_parallel_maintenance_workers' and then\ndivide the maintenance_work_mem by that to get the memory used by each\nof the Gin indexes. I think it has some caveats like we might not be\nable to launch the number of workers we decided and in that case we\nprobably could have computed bigger value of work_mem that can be used\nby Gin indexes. I think whatever we pick here can be good for some\ncases and not-so-good for others, so why not pick something general\nand simple.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Oct 2019 11:29:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Tue, Oct 8, 2019 at 2:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 8, 2019 at 1:48 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Mon, Oct 7, 2019 at 12:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > I would say that sucks, because it makes it harder to set\n> > > maintenance_work_mem correctly. Not sure how hard it would be to fix,\n> > > though.\n> >\n> > ginInsertCleanup() may now be the worst piece of code in the entire\n> > tree, so no surprised that it gets this wrong too.\n> >\n> > 2016's commit e2c79e14d99 ripped out the following comment about the\n> > use of maintenance_work_mem by ginInsertCleanup():\n> >\n> > @@ -821,13 +847,10 @@ ginInsertCleanup(GinState *ginstate,\n> > * Is it time to flush memory to disk?
Flush if we are at the end of\n> > * the pending list, or if we have a full row and memory is getting\n> > * full.\n> > - *\n> > - * XXX using up maintenance_work_mem here is probably unreasonably\n> > - * much, since vacuum might already be using that much.\n> > */\n> >\n> > ISTM that the use of maintenance_work_mem wasn't given that much\n> > thought originally.\n> >\n>\n> One idea to something better could be to check, if there is a GIN\n> index on a table, then use 1/4 (25% or whatever) of\n> maintenance_work_mem for GIN indexes and 3/4 (75%) of\n> maintenance_work_mem for collection dead tuples.\n>\n\nI felt that it would not be easy for users to tune\nmaintenance_work_mem which controls more than one things. If this is\nan index AM(GIN) specific issue we might rather want to control the\nmemory limit of pending list cleanup by a separate GUC parameter like\ngin_pending_list_limit, say gin_pending_list_work_mem. And we can\neither set the (the memory for GIN pending list cleanup / # of GIN\nindexes) to the parallel workers.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Wed, 9 Oct 2019 13:51:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Wed, Oct 9, 2019 at 10:22 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Oct 8, 2019 at 2:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 8, 2019 at 1:48 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > >\n> > > On Mon, Oct 7, 2019 at 12:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > I would say that sucks, because it makes it harder to set\n> > > > maintenance_work_mem correctly.
Not sure how hard it would be to fix,\n> > > > though.\n> > >\n> > > ginInsertCleanup() may now be the worst piece of code in the entire\n> > > tree, so no surprised that it gets this wrong too.\n> > >\n> > > 2016's commit e2c79e14d99 ripped out the following comment about the\n> > > use of maintenance_work_mem by ginInsertCleanup():\n> > >\n> > > @@ -821,13 +847,10 @@ ginInsertCleanup(GinState *ginstate,\n> > > * Is it time to flush memory to disk? Flush if we are at the end of\n> > > * the pending list, or if we have a full row and memory is getting\n> > > * full.\n> > > - *\n> > > - * XXX using up maintenance_work_mem here is probably unreasonably\n> > > - * much, since vacuum might already be using that much.\n> > > */\n> > >\n> > > ISTM that the use of maintenance_work_mem wasn't given that much\n> > > thought originally.\n> > >\n> >\n> > One idea to something better could be to check, if there is a GIN\n> > index on a table, then use 1/4 (25% or whatever) of\n> > maintenance_work_mem for GIN indexes and 3/4 (75%) of\n> > maintenance_work_mem for collection dead tuples.\n> >\n>\n> I felt that it would not be easy for users to tune\n> maintenance_work_mem which controls more than one things. If this is\n> an index AM(GIN) specific issue we might rather want to control the\n> memory limit of pending list cleanup by a separate GUC parameter like\n> gin_pending_list_limit, say gin_pending_list_work_mem. And we can\n> either set the (the memory for GIN pending list cleanup / # of GIN\n> indexes) to the parallel workers.\n>\nIMHO if we do that then we will loose the meaning of having\nmaintenance_work_mem right?
Then user can not control that how much\nmemory the autovacuum worker will use.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Oct 2019 13:59:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Wed, Oct 9, 2019 at 2:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Oct 9, 2019 at 10:22 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Oct 8, 2019 at 2:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 8, 2019 at 1:48 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > >\n> > > > ISTM that the use of maintenance_work_mem wasn't given that much\n> > > > thought originally.\n> > > >\n> > >\n> > > One idea to something better could be to check, if there is a GIN\n> > > index on a table, then use 1/4 (25% or whatever) of\n> > > maintenance_work_mem for GIN indexes and 3/4 (75%) of\n> > > maintenance_work_mem for collection dead tuples.\n> > >\n> >\n> > I felt that it would not be easy for users to tune\n> > maintenance_work_mem which controls more than one things. If this is\n> > an index AM(GIN) specific issue we might rather want to control the\n> > memory limit of pending list cleanup by a separate GUC parameter like\n> > gin_pending_list_limit, say gin_pending_list_work_mem.\n\nSure, by having another work_mem parameter for the Gin indexes which\ncontrols when we need to flush the pending list will make life easier\nas a programmer. I think if we have a specific parameter for this\npurpose, then we can even think of using the same for a clean up\nduring insert operation as well. However, I am not sure how easy it\nwould be for users? Basically, now they need to remember another\nparameter and for which there is no easy way to know what should be\nthe value.
I think one has to check\ngin_metapage_info->n_pending_pages and then based on that they can\nconfigure the value for this parameter to get the maximum benefit\npossible.\n\nCan we think of using work_mem for this? Basically, we use work_mem\nduring insert operation, so why not use it during vacuum operation for\nthis purpose?\n\nAnother idea could be to try to divide the maintenance_work_mem\nsmartly if we know the value of pending_pages for each Gin index, but\nI think for that we need to either read the metapage of maybe use some\nsort of stats which can be used by vacuum. We need to somehow divide\nit based on the amount of memory required for a number of dead tuples\nin heap and memory required by tuples in the pending list. I am not\nsure how feasible is this approach.\n\nAbout difficulty for users tuning one or two parameters for vacuum, I\nthink if they can compute what could be the values for Guc's\nseparately, then why can't they add up and set it as one value.\nHaving said that, I am not denying that having a separate parameter\ngives better control, and for this specific case using separate\nparameter can allow us to use it both during vacuum and insert\noperations.\n\n> > And we can\n> > either set the (the memory for GIN pending list cleanup / # of GIN\n> > indexes) to the parallel workers.\n> >\n> IMHO if we do that then we will loose the meaning of having\n> maintenance_work_mem right? Then user can not control that how much\n> memory the autovacuum worker will use.\n>\n\nI am not sure how different it is from the current situation?\nBasically, now it can use up to 2 * maintenance_work_mem memory and if\nwe do what Sawada-San is proposing, then it will be\nmaintenance_work_mem + gin_*_work_mem.
Do you have some other\nalternative idea in mind or you think the current situation is better\nthan anything else we can do in this area?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Oct 2019 14:40:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Wed, Oct 9, 2019 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 9, 2019 at 2:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Oct 9, 2019 at 10:22 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 8, 2019 at 2:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Oct 8, 2019 at 1:48 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > > >\n> > > > > ISTM that the use of maintenance_work_mem wasn't given that much\n> > > > > thought originally.\n> > > > >\n> > > >\n> > > > One idea to something better could be to check, if there is a GIN\n> > > > index on a table, then use 1/4 (25% or whatever) of\n> > > > maintenance_work_mem for GIN indexes and 3/4 (75%) of\n> > > > maintenance_work_mem for collection dead tuples.\n> > > >\n> > >\n> > > I felt that it would not be easy for users to tune\n> > > maintenance_work_mem which controls more than one things. If this is\n> > > an index AM(GIN) specific issue we might rather want to control the\n> > > memory limit of pending list cleanup by a separate GUC parameter like\n> > > gin_pending_list_limit, say gin_pending_list_work_mem.\n>\n> Sure, by having another work_mem parameter for the Gin indexes which\n> controls when we need to flush the pending list will make life easier\n> as a programmer. I think if we have a specific parameter for this\n> purpose, then we can even think of using the same for a clean up\n> during insert operation as well. However, I am not sure how easy it\n> would be for users?
Basically, now they need to remember another\n> parameter and for which there is no easy way to know what should be\n> the value. I think one has to check\n> gin_metapage_info->n_pending_pages and then based on that they can\n> configure the value for this parameter to get the maximum benefit\n> possible.\n>\n> Can we think of using work_mem for this? Basically, we use work_mem\n> during insert operation, so why not use it during vacuum operation for\n> this purpose?\n>\n> Another idea could be to try to divide the maintenance_work_mem\n> smartly if we know the value of pending_pages for each Gin index, but\n> I think for that we need to either read the metapage of maybe use some\n> sort of stats which can be used by vacuum. We need to somehow divide\n> it based on the amount of memory required for a number of dead tuples\n> in heap and memory required by tuples in the pending list. I am not\n> sure how feasible is this approach.\n>\n> About difficulty for users tuning one or two parameters for vacuum, I\n> think if they can compute what could be the values for Guc's\n> separately, then why can't they add up and set it as one value.\n> Having said that, I am not denying that having a separate parameter\n> gives better control, and for this specific case using separate\n> parameter can allow us to use it both during vacuum and insert\n> operations.\n>\n> > > And we can\n> > > either set the (the memory for GIN pending list cleanup / # of GIN\n> > > indexes) to the parallel workers.\n> > >\n> > IMHO if we do that then we will loose the meaning of having\n> > maintenance_work_mem right? Then user can not control that how much\n> > memory the autovacuum worker will use.\n> >\n>\n> I am not sure how different it is from the current situation?\n> Basically, now it can use up to 2 * maintenance_work_mem memory and if\n> we do what Sawada-San is proposing, then it will be\n> maintenance_work_mem + gin_*_work_mem.
Do you have some other\n> alternative idea in mind or you think the current situation is better\n> than anything else we can do in this area?\n\nI think the current situation is not good but if we try to cap it to\nmaintenance_work_mem + gin_*_work_mem then also I don't think it will\nmake the situation much better. However, I think the idea you\nproposed up-thread[1] is better. At least the maintenance_work_mem\nwill be the top limit what the auto vacuum worker can use.\n\n[1] https://www.postgresql.org/message-id/CAA4eK1JhY88BXC%3DZK%3D89MALm%2BLyMkMhi6WG6AZfE4%2BKij6mebg%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Oct 2019 15:42:18 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Wed, Oct 9, 2019 at 7:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Oct 9, 2019 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Oct 9, 2019 at 2:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 9, 2019 at 10:22 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Oct 8, 2019 at 2:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Oct 8, 2019 at 1:48 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > > > >\n> > > > > > ISTM that the use of maintenance_work_mem wasn't given that much\n> > > > > > thought originally.\n> > > > > >\n> > > > >\n> > > > > One idea to something better could be to check, if there is a GIN\n> > > > > index on a table, then use 1/4 (25% or whatever) of\n> > > > > maintenance_work_mem for GIN indexes and 3/4 (75%) of\n> > > > > maintenance_work_mem for collection dead tuples.\n> > > > >\n> > > >\n> > > > I felt that it would not be easy for users to tune\n> > > > maintenance_work_mem which controls more than one things.
If this is\n> > > > an index AM(GIN) specific issue we might rather want to control the\n> > > > memory limit of pending list cleanup by a separate GUC parameter like\n> > > > gin_pending_list_limit, say gin_pending_list_work_mem.\n> >\n> > Sure, by having another work_mem parameter for the Gin indexes which\n> > controls when we need to flush the pending list will make life easier\n> > as a programmer. I think if we have a specific parameter for this\n> > purpose, then we can even think of using the same for a clean up\n> > during insert operation as well. However, I am not sure how easy it\n> > would be for users? Basically, now they need to remember another\n> > parameter and for which there is no easy way to know what should be\n> > the value. I think one has to check\n> > gin_metapage_info->n_pending_pages and then based on that they can\n> > configure the value for this parameter to get the maximum benefit\n> > possible.\n> >\n> > Can we think of using work_mem for this? Basically, we use work_mem\n> > during insert operation, so why not use it during vacuum operation for\n> > this purpose?\n> >\n> > Another idea could be to try to divide the maintenance_work_mem\n> > smartly if we know the value of pending_pages for each Gin index, but\n> > I think for that we need to either read the metapage of maybe use some\n> > sort of stats which can be used by vacuum. We need to somehow divide\n> > it based on the amount of memory required for a number of dead tuples\n> > in heap and memory required by tuples in the pending list.
I am not\n> > sure how feasible is this approach.\n> >\n> > About difficulty for users tuning one or two parameters for vacuum, I\n> > think if they can compute what could be the values for Guc's\n> > separately, then why can't they add up and set it as one value.\n> > Having said that, I am not denying that having a separate parameter\n> > gives better control, and for this specific case using separate\n> > parameter can allow us to use it both during vacuum and insert\n> > operations.\n> >\n> > > > And we can\n> > > > either set the (the memory for GIN pending list cleanup / # of GIN\n> > > > indexes) to the parallel workers.\n> > > >\n> > > IMHO if we do that then we will loose the meaning of having\n> > > maintenance_work_mem right? Then user can not control that how much\n> > > memory the autovacuum worker will use.\n> > >\n> >\n> > I am not sure how different it is from the current situation?\n> > Basically, now it can use up to 2 * maintenance_work_mem memory and if\n> > we do what Sawada-San is proposing, then it will be\n> > maintenance_work_mem + gin_*_work_mem. Do you have some other\n> > alternative idea in mind or you think the current situation is better\n> > than anything else we can do in this area?\n>\n> I think the current situation is not good but if we try to cap it to\n> maintenance_work_mem + gin_*_work_mem then also I don't think it will\n> make the situation much better. However, I think the idea you\n> proposed up-thread[1] is better. At least the maintenance_work_mem\n> will be the top limit what the auto vacuum worker can use.\n>\n\nI'm concerned that there are other index AMs that could consume more\nmemory like GIN. In principle we can vacuum third party index AMs and\nwill be able to even parallel vacuum them. I expect that\nmaintenance_work_mem is the top limit of memory usage of maintenance\ncommand but actually it's hard to set the limit to memory usage of\nbulkdelete and cleanup by the core.
So I thought that since GIN is the\none of the index AM it can have a new parameter to make its job\nfaster. If we have that parameter it might not make the current\nsituation much better but user will be able to set a lower value to\nthat parameter to not use the memory much while keeping the number of\nindex vacuums.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Thu, 10 Oct 2019 13:28:17 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Thu, Oct 10, 2019 at 9:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 9, 2019 at 7:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I think the current situation is not good but if we try to cap it to\n> > maintenance_work_mem + gin_*_work_mem then also I don't think it will\n> > make the situation much better. However, I think the idea you\n> > proposed up-thread[1] is better. At least the maintenance_work_mem\n> > will be the top limit what the auto vacuum worker can use.\n> >\n>\n> I'm concerned that there are other index AMs that could consume more\n> memory like GIN. In principle we can vacuum third party index AMs and\n> will be able to even parallel vacuum them. I expect that\n> maintenance_work_mem is the top limit of memory usage of maintenance\n> command but actually it's hard to set the limit to memory usage of\n> bulkdelete and cleanup by the core. So I thought that since GIN is the\n> one of the index AM it can have a new parameter to make its job\n> faster.
If we have that parameter it might not make the current\n> situation much better but user will be able to set a lower value to\n> that parameter to not use the memory much while keeping the number of\n> index vacuums.\n>\n\nI can understand your concern why dividing maintenance_work_mem for\nvacuuming heap and cleaning up the index might be tricky especially\nbecause of third party indexes, but introducing new Guc isn't free\neither. I think that should be the last resort and we need buy-in\nfrom more people for that. Did you consider using work_mem for this?\nAnd even if we want to go with a new guc, maybe it is better to have\nsome generic name like maintenance_index_work_mem or something along\nthose lines so that it can be used for other index cleanups as well if\nrequired.\n\nTom, Teodor, do you have any opinion on this matter? This has been\nintroduced by commit:\n\ncommit ff301d6e690bb5581502ea3d8591a1600fd87acc\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue Mar 24 20:17:18 2009 +0000\n\nImplement \"fastupdate\" support for GIN indexes, in which we try to\naccumulate multiple index entries in a holding area before adding them\nto the main index structure.
This helps because bulk insert is\n(usually) significantly faster than retail insert for GIN.\n..\n..\nTeodor Sigaev\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Oct 2019 12:06:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Thu, Oct 10, 2019 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 10, 2019 at 9:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 9, 2019 at 7:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I think the current situation is not good but if we try to cap it to\n> > > maintenance_work_mem + gin_*_work_mem then also I don't think it will\n> > > make the situation much better. However, I think the idea you\n> > > proposed up-thread[1] is better. At least the maintenance_work_mem\n> > > will be the top limit what the auto vacuum worker can use.\n> > >\n> >\n> > I'm concerned that there are other index AMs that could consume more\n> > memory like GIN. In principle we can vacuum third party index AMs and\n> > will be able to even parallel vacuum them. I expect that\n> > maintenance_work_mem is the top limit of memory usage of maintenance\n> > command but actually it's hard to set the limit to memory usage of\n> > bulkdelete and cleanup by the core. So I thought that since GIN is the\n> > one of the index AM it can have a new parameter to make its job\n> > faster.
If we have that parameter it might not make the current\n> > situation much better but user will be able to set a lower value to\n> > that parameter to not use the memory much while keeping the number of\n> > index vacuums.\n> >\n>\n> I can understand your concern why dividing maintenance_work_mem for\n> vacuuming heap and cleaning up the index might be tricky especially\n> because of third party indexes, but introducing new Guc isn't free\n> either. I think that should be the last resort and we need buy-in\n> from more people for that. Did you consider using work_mem for this?\n\nYeah that seems work too. But I wonder if it could be the similar\nstory to gin_pending_list_limit. I mean that previously we used to use\n work_mem as the maximum size of GIN pending list. But we concluded\nthat it was not appropriate to control both by one GUC so we\nintroduced gin_penidng_list_limit and the storage parameter at commit\n263865a4 (originally it's pending_list_cleanup_size but rename to\ngin_pending_list_limit at commit c291503b1). I feel that that story is\nquite similar to this issue.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Thu, 10 Oct 2019 17:39:28 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Thu, Oct 10, 2019 at 2:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 10, 2019 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Oct 10, 2019 at 9:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 9, 2019 at 7:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > I think the current situation is not good but if we try to cap it to\n> > > > maintenance_work_mem + gin_*_work_mem then also I don't think it will\n> > > > make the situation much better. However, I think the idea you\n> > > > proposed up-thread[1] is better. 
At least the maintenance_work_mem\n> > > > will be the top limit what the auto vacuum worker can use.\n> > > >\n> > >\n> > > I'm concerned that there are other index AMs that could consume more\n> > > memory like GIN. In principle we can vacuum third party index AMs and\n> > > will be able to even parallel vacuum them. I expect that\n> > > maintenance_work_mem is the top limit of memory usage of maintenance\n> > > command but actually it's hard to set the limit to memory usage of\n> > > bulkdelete and cleanup by the core. So I thought that since GIN is the\n> > > one of the index AM it can have a new parameter to make its job\n> > > faster. If we have that parameter it might not make the current\n> > > situation much better but user will be able to set a lower value to\n> > > that parameter to not use the memory much while keeping the number of\n> > > index vacuums.\n> > >\n> >\n> > I can understand your concern why dividing maintenance_work_mem for\n> > vacuuming heap and cleaning up the index might be tricky especially\n> > because of third party indexes, but introducing new Guc isn't free\n> > either. I think that should be the last resort and we need buy-in\n> > from more people for that. Did you consider using work_mem for this?\n>\n> Yeah that seems work too. But I wonder if it could be the similar\n> story to gin_pending_list_limit. I mean that previously we used to use\n> work_mem as the maximum size of GIN pending list. But we concluded\n> that it was not appropriate to control both by one GUC so we\n> introduced gin_penidng_list_limit and the storage parameter at commit\n> 263865a4\n>\n\nIt seems you want to say about commit id\na1b395b6a26ae80cde17fdfd2def8d351872f399. I wonder why they have not\nchanged it to gin_penidng_list_limit (at that time\npending_list_cleanup_size) in that commit itself? 
I think if we want\nto use gin_pending_list_limit in this context then we can replace both\nwork_mem and maintenance_work_mem with gin_penidng_list_limit.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Oct 2019 15:07:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Thu, Oct 10, 2019 at 6:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 10, 2019 at 2:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Oct 10, 2019 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Oct 10, 2019 at 9:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Oct 9, 2019 at 7:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > I think the current situation is not good but if we try to cap it to\n> > > > > maintenance_work_mem + gin_*_work_mem then also I don't think it will\n> > > > > make the situation much better. However, I think the idea you\n> > > > > proposed up-thread[1] is better. At least the maintenance_work_mem\n> > > > > will be the top limit what the auto vacuum worker can use.\n> > > > >\n> > > >\n> > > > I'm concerned that there are other index AMs that could consume more\n> > > > memory like GIN. In principle we can vacuum third party index AMs and\n> > > > will be able to even parallel vacuum them. I expect that\n> > > > maintenance_work_mem is the top limit of memory usage of maintenance\n> > > > command but actually it's hard to set the limit to memory usage of\n> > > > bulkdelete and cleanup by the core. So I thought that since GIN is the\n> > > > one of the index AM it can have a new parameter to make its job\n> > > > faster. 
If we have that parameter it might not make the current\n> > > > situation much better but user will be able to set a lower value to\n> > > > that parameter to not use the memory much while keeping the number of\n> > > > index vacuums.\n> > > >\n> > >\n> > > I can understand your concern why dividing maintenance_work_mem for\n> > > vacuuming heap and cleaning up the index might be tricky especially\n> > > because of third party indexes, but introducing new Guc isn't free\n> > > either. I think that should be the last resort and we need buy-in\n> > > from more people for that. Did you consider using work_mem for this?\n> >\n> > Yeah that seems work too. But I wonder if it could be the similar\n> > story to gin_pending_list_limit. I mean that previously we used to use\n> > work_mem as the maximum size of GIN pending list. But we concluded\n> > that it was not appropriate to control both by one GUC so we\n> > introduced gin_penidng_list_limit and the storage parameter at commit\n> > 263865a4\n> >\n>\n> It seems you want to say about commit id\n> a1b395b6a26ae80cde17fdfd2def8d351872f399.\n\nYeah thanks.\n\n> I wonder why they have not\n> changed it to gin_penidng_list_limit (at that time\n> pending_list_cleanup_size) in that commit itself? I think if we want\n> to use gin_pending_list_limit in this context then we can replace both\n> work_mem and maintenance_work_mem with gin_penidng_list_limit.\n\nHmm as far as I can see the discussion, no one mentioned about\nmaintenance_work_mem. Perhaps we just oversighted? I also didn't know\nthat.\n\nI also think we can replace at least the work_mem for cleanup of\npending list with gin_pending_list_limit. In the following comment in\nginfast.c,\n\n/*\n * Force pending list cleanup when it becomes too long. And,\n * ginInsertCleanup could take significant amount of time, so we prefer to\n * call it when it can do all the work in a single collection cycle. 
In\n * non-vacuum mode, it shouldn't require maintenance_work_mem, so fire it\n * while pending list is still small enough to fit into\n * gin_pending_list_limit.\n *\n * ginInsertCleanup() should not be called inside our CRIT_SECTION.\n */\ncleanupSize = GinGetPendingListCleanupSize(index);\nif (metadata->nPendingPages * GIN_PAGE_FREESIZE > cleanupSize * 1024L)\n needCleanup = true;\n\nISTM the gin_pending_list_limit in the above comment corresponds to\nthe following code in ginfast.c,\n\n/*\n * We are called from regular insert and if we see concurrent cleanup\n * just exit in hope that concurrent process will clean up pending\n * list.\n */\nif (!ConditionalLockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock))\n return;\nworkMemory = work_mem;\n\nIf work_mem is smaller than gin_pending_list_limit the pending list\ncleanup would behave against the intention of the above comment that\nprefers to do all the work in a single collection cycle while pending\nlist is still small enough to fit into gin_pending_list_limit.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Fri, 11 Oct 2019 11:05:26 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Fri, Oct 11, 2019 at 7:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 10, 2019 at 6:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > It seems you want to say about commit id\n> > a1b395b6a26ae80cde17fdfd2def8d351872f399.\n>\n> Yeah thanks.\n>\n> > I wonder why they have not\n> > changed it to gin_penidng_list_limit (at that time\n> > pending_list_cleanup_size) in that commit itself? I think if we want\n> > to use gin_pending_list_limit in this context then we can replace both\n> > work_mem and maintenance_work_mem with gin_penidng_list_limit.\n>\n> Hmm as far as I can see the discussion, no one mentioned about\n> maintenance_work_mem. 
Perhaps we just oversighted?\n>\n\nIt is possible, but we can't be certain until those people confirm the same.\n\n> I also didn't know\n> that.\n>\n> I also think we can replace at least the work_mem for cleanup of\n> pending list with gin_pending_list_limit. In the following comment in\n> ginfast.c,\n>\n\nAgreed, but that won't solve the original purpose for which we started\nthis thread.\n\n> /*\n> * Force pending list cleanup when it becomes too long. And,\n> * ginInsertCleanup could take significant amount of time, so we prefer to\n> * call it when it can do all the work in a single collection cycle. In\n> * non-vacuum mode, it shouldn't require maintenance_work_mem, so fire it\n> * while pending list is still small enough to fit into\n> * gin_pending_list_limit.\n> *\n> * ginInsertCleanup() should not be called inside our CRIT_SECTION.\n> */\n> cleanupSize = GinGetPendingListCleanupSize(index);\n> if (metadata->nPendingPages * GIN_PAGE_FREESIZE > cleanupSize * 1024L)\n> needCleanup = true;\n>\n> ISTM the gin_pending_list_limit in the above comment corresponds to\n> the following code in ginfast.c,\n>\n> /*\n> * We are called from regular insert and if we see concurrent cleanup\n> * just exit in hope that concurrent process will clean up pending\n> * list.\n> */\n> if (!ConditionalLockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock))\n> return;\n> workMemory = work_mem;\n>\n> If work_mem is smaller than gin_pending_list_limit the pending list\n> cleanup would behave against the intention of the above comment that\n> prefers to do all the work in a single collection cycle while pending\n> list is still small enough to fit into gin_pending_list_limit.\n>\n\nThat's right, but OTOH, if the user specifies gin_pending_list_limit\nas an option during Create Index with a value greater than GUC\ngin_pending_list_limit, then also we will face the same problem. 
It\nseems to me that we haven't thought enough on memory usage during Gin\npending list cleanup and I don't want to independently make a decision\nto change it. So unless the original author/committer or some other\npeople who have worked in this area share their opinion, we can leave\nit as it is and make a parallel vacuum patch adapt to it.\n\nThe suggestion from others is welcome.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Oct 2019 13:43:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Fri, Oct 11, 2019 at 5:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 11, 2019 at 7:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Oct 10, 2019 at 6:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > It seems you want to say about commit id\n> > > a1b395b6a26ae80cde17fdfd2def8d351872f399.\n> >\n> > Yeah thanks.\n> >\n> > > I wonder why they have not\n> > > changed it to gin_penidng_list_limit (at that time\n> > > pending_list_cleanup_size) in that commit itself? I think if we want\n> > > to use gin_pending_list_limit in this context then we can replace both\n> > > work_mem and maintenance_work_mem with gin_penidng_list_limit.\n> >\n> > Hmm as far as I can see the discussion, no one mentioned about\n> > maintenance_work_mem. Perhaps we just oversighted?\n> >\n>\n> It is possible, but we can't be certain until those people confirm the same.\n>\n> > I also didn't know\n> > that.\n> >\n> > I also think we can replace at least the work_mem for cleanup of\n> > pending list with gin_pending_list_limit. In the following comment in\n> > ginfast.c,\n> >\n>\n> Agreed, but that won't solve the original purpose for which we started\n> this thread.\n>\n> > /*\n> > * Force pending list cleanup when it becomes too long. 
And,\n> > * ginInsertCleanup could take significant amount of time, so we prefer to\n> > * call it when it can do all the work in a single collection cycle. In\n> > * non-vacuum mode, it shouldn't require maintenance_work_mem, so fire it\n> > * while pending list is still small enough to fit into\n> > * gin_pending_list_limit.\n> > *\n> > * ginInsertCleanup() should not be called inside our CRIT_SECTION.\n> > */\n> > cleanupSize = GinGetPendingListCleanupSize(index);\n> > if (metadata->nPendingPages * GIN_PAGE_FREESIZE > cleanupSize * 1024L)\n> > needCleanup = true;\n> >\n> > ISTM the gin_pending_list_limit in the above comment corresponds to\n> > the following code in ginfast.c,\n> >\n> > /*\n> > * We are called from regular insert and if we see concurrent cleanup\n> > * just exit in hope that concurrent process will clean up pending\n> > * list.\n> > */\n> > if (!ConditionalLockPage(index, GIN_METAPAGE_BLKNO, ExclusiveLock))\n> > return;\n> > workMemory = work_mem;\n> >\n> > If work_mem is smaller than gin_pending_list_limit the pending list\n> > cleanup would behave against the intention of the above comment that\n> > prefers to do all the work in a single collection cycle while pending\n> > list is still small enough to fit into gin_pending_list_limit.\n> >\n>\n> That's right, but OTOH, if the user specifies gin_pending_list_limit\n> as an option during Create Index with a value greater than GUC\n> gin_pending_list_limit, then also we will face the same problem. It\n> seems to me that we haven't thought enough on memory usage during Gin\n> pending list cleanup and I don't want to independently make a decision\n> to change it. 
So unless the original author/committer or some other\n> people who have worked in this area share their opinion, we can leave\n> it as it is and make a parallel vacuum patch adapt to it.\n\nYeah I totally agreed.\n\nApart from the GIN problem can we discuss whether need to change the\ncurrent memory usage policy of parallel utility command described in\nthe doc? We cannot control the memory usage in index AM after all but\nwe need to generically consider how much memory is used during\nparallel vacuum.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Sat, 12 Oct 2019 14:18:43 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Sat, Oct 12, 2019 at 10:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Oct 11, 2019 at 5:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > That's right, but OTOH, if the user specifies gin_pending_list_limit\n> > as an option during Create Index with a value greater than GUC\n> > gin_pending_list_limit, then also we will face the same problem. It\n> > seems to me that we haven't thought enough on memory usage during Gin\n> > pending list cleanup and I don't want to independently make a decision\n> > to change it. So unless the original author/committer or some other\n> > people who have worked in this area share their opinion, we can leave\n> > it as it is and make a parallel vacuum patch adapt to it.\n>\n> Yeah I totally agreed.\n>\n> Apart from the GIN problem can we discuss whether need to change the\n> current memory usage policy of parallel utility command described in\n> the doc? We cannot control the memory usage in index AM after all but\n> we need to generically consider how much memory is used during\n> parallel vacuum.\n>\n\nDo you mean to say change the docs for a parallel vacuum patch in this\nregard? 
If so, I think we might want to do something for\nmaintenance_work_mem for parallel vacuum as described in one of the\nemails above [1] and then change the docs accordingly.\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JhpNsTiHj%2BJOy3N8uCGyTBMH8xDhUEtBw8ZeCAPRGp6Q%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 12 Oct 2019 17:14:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Sat, Oct 12, 2019 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Oct 12, 2019 at 10:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Oct 11, 2019 at 5:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > That's right, but OTOH, if the user specifies gin_pending_list_limit\n> > > as an option during Create Index with a value greater than GUC\n> > > gin_pending_list_limit, then also we will face the same problem. It\n> > > seems to me that we haven't thought enough on memory usage during Gin\n> > > pending list cleanup and I don't want to independently make a decision\n> > > to change it. So unless the original author/committer or some other\n> > > people who have worked in this area share their opinion, we can leave\n> > > it as it is and make a parallel vacuum patch adapt to it.\n> >\n> > Yeah I totally agreed.\n> >\n> > Apart from the GIN problem can we discuss whether need to change the\n> > current memory usage policy of parallel utility command described in\n> > the doc? We cannot control the memory usage in index AM after all but\n> > we need to generically consider how much memory is used during\n> > parallel vacuum.\n> >\n>\n> Do you mean to say change the docs for a parallel vacuum patch in this\n> regard? 
If so, I think we might want to do something for\n> maintenance_work_mem for parallel vacuum as described in one of the\n> emails above [1] and then change the docs accordingly.\n>\n\nYes agreed. I thought that we can discuss that while waiting for other\nopinion on the memory usage of gin index's pending list cleanup. For\nexample one of your suggestions[1] is simple and maybe acceptable but\nI guess that it can deal with only gin indexes but not other index AMs\nthat might consume more memory.\n\n[1] https://www.postgresql.org/message-id/CAA4eK1JhpNsTiHj%2BJOy3N8uCGyTBMH8xDhUEtBw8ZeCAPRGp6Q%40mail.gmail.com\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Wed, 16 Oct 2019 10:49:51 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Wed, Oct 16, 2019 at 7:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Oct 12, 2019 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Oct 12, 2019 at 10:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Oct 11, 2019 at 5:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > That's right, but OTOH, if the user specifies gin_pending_list_limit\n> > > > as an option during Create Index with a value greater than GUC\n> > > > gin_pending_list_limit, then also we will face the same problem. It\n> > > > seems to me that we haven't thought enough on memory usage during Gin\n> > > > pending list cleanup and I don't want to independently make a decision\n> > > > to change it. 
So unless the original author/committer or some other\n> > > > people who have worked in this area share their opinion, we can leave\n> > > > it as it is and make a parallel vacuum patch adapt to it.\n> > >\n> > > Yeah I totally agreed.\n> > >\n> > > Apart from the GIN problem can we discuss whether need to change the\n> > > current memory usage policy of parallel utility command described in\n> > > the doc? We cannot control the memory usage in index AM after all but\n> > > we need to generically consider how much memory is used during\n> > > parallel vacuum.\n> > >\n> >\n> > Do you mean to say change the docs for a parallel vacuum patch in this\n> > regard? If so, I think we might want to do something for\n> > maintenance_work_mem for parallel vacuum as described in one of the\n> > emails above [1] and then change the docs accordingly.\n> >\n>\n> Yes agreed. I thought that we can discuss that while waiting for other\n> opinion on the memory usage of gin index's pending list cleanup. For\n> example one of your suggestions[1] is simple and maybe acceptable but\n> I guess that it can deal with only gin indexes but not other index AMs\n> that might consume more memory.\n>\n\nIt is not that currently, other indexes don't use any additional\nmemory (except for maintainence_work_mem). For example, Gist index\ncan use memory for collecting empty leaf pages and internal pages. I\nam not sure if we can do anything for such cases. The case for Gin\nindex seems to be clear and it seems to be having the risk of using\nmuch more memory, so why not try to do something for it? 
We can\nprobably write the code such that it can be extended for other index\nmethods in future if required.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Oct 2019 12:18:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Wed, Oct 16, 2019 at 3:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 16, 2019 at 7:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Oct 12, 2019 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Oct 12, 2019 at 10:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Fri, Oct 11, 2019 at 5:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > That's right, but OTOH, if the user specifies gin_pending_list_limit\n> > > > > as an option during Create Index with a value greater than GUC\n> > > > > gin_pending_list_limit, then also we will face the same problem. It\n> > > > > seems to me that we haven't thought enough on memory usage during Gin\n> > > > > pending list cleanup and I don't want to independently make a decision\n> > > > > to change it. So unless the original author/committer or some other\n> > > > > people who have worked in this area share their opinion, we can leave\n> > > > > it as it is and make a parallel vacuum patch adapt to it.\n> > > >\n> > > > Yeah I totally agreed.\n> > > >\n> > > > Apart from the GIN problem can we discuss whether need to change the\n> > > > current memory usage policy of parallel utility command described in\n> > > > the doc? We cannot control the memory usage in index AM after all but\n> > > > we need to generically consider how much memory is used during\n> > > > parallel vacuum.\n> > > >\n> > >\n> > > Do you mean to say change the docs for a parallel vacuum patch in this\n> > > regard? 
If so, I think we might want to do something for\n> > > maintenance_work_mem for parallel vacuum as described in one of the\n> > > emails above [1] and then change the docs accordingly.\n> > >\n> >\n> > Yes agreed. I thought that we can discuss that while waiting for other\n> > opinion on the memory usage of gin index's pending list cleanup. For\n> > example one of your suggestions[1] is simple and maybe acceptable but\n> > I guess that it can deal with only gin indexes but not other index AMs\n> > that might consume more memory.\n> >\n>\n> It is not that currently, other indexes don't use any additional\n> memory (except for maintainence_work_mem). For example, Gist index\n> can use memory for collecting empty leaf pages and internal pages. I\n> am not sure if we can do anything for such cases. The case for Gin\n> index seems to be clear and it seems to be having the risk of using\n> much more memory, so why not try to do something for it?\n\nYeah gin indexes is clear now and I agree that we need to do something\nfor it. But I'm also concerned third party index AMs. Similar to the\nproblem related to IndexBulkDeleteResult structure that we're\ndiscussing on another thread I thought that we have the same problem\non this.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Wed, 16 Oct 2019 21:04:40 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "It's a bit unfortunate that we're doing the pending list flush while the\nvacuum memory is allocated at all. 
Is there any reason other than the way\nthe callbacks are defined that gin doesn't do the pending list flush before\nvacuum does the heap scan and before it allocates any memory using\nmaintenance_work_mem?\n\n(I'm guessing doing it after vacuum is finished would have different\nproblems with tuples in the pending queue not getting vacuumed?)", "msg_date": "Thu, 17 Oct 2019 02:37:17 +0200", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Thu, Oct 17, 2019 at 6:07 AM Greg Stark <stark@mit.edu> wrote:\n>\n> It's a bit unfortunate that we're doing the pending list flush while the vacuum memory is allocated at all. Is there any reason other than the way the callbacks are defined that gin doesn't do the pending list flush before vacuum does the heap scan and before it allocates any memory using maintenance_work_mem?\n>\n\nI can't think of any other reason. Can we think of doing it as a\nseparate phase for indexes? That can help in containing the memory\nusage to a maximum of maintenance_work_mem for Gin indexes, but I am\nnot sure how useful it is for other index AM's. 
Another idea could be\nthat we do something special (cleanup of pending list) just for Gin\nindexes before heap scan in Vacuum.\n\n> (I'm guessing doing it after vacuum is finished would have different problems with tuples in the pending queue not getting vacuumed?)\n\nYeah, I also think so.\n\nBTW, good point!\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Oct 2019 10:40:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Wed, Oct 16, 2019 at 5:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 16, 2019 at 3:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > It is not that currently, other indexes don't use any additional\n> > memory (except for maintainence_work_mem). For example, Gist index\n> > can use memory for collecting empty leaf pages and internal pages. I\n> > am not sure if we can do anything for such cases. The case for Gin\n> > index seems to be clear and it seems to be having the risk of using\n> > much more memory, so why not try to do something for it?\n>\n> Yeah gin indexes is clear now and I agree that we need to do something\n> for it. But I'm also concerned third party index AMs. Similar to the\n> problem related to IndexBulkDeleteResult structure that we're\n> discussing on another thread I thought that we have the same problem\n> on this.\n>\n\nI understand your concern, but I am not sure what is a good way to\ndeal with it. 
I think we can do something generic like divide the\nmaintainence_work_mem equally among workers, but then the indexes that\nuse maintainence_work_mem will suffer if the number of such indexes is\nmuch less than the indexes that don't use maintainence_work_mem.\nAnother idea could be each index AM tell whether it uses\nmaintainence_work_mem or not and based on that we can do the\ncomputation (divide the maintainence_work_mem by the number of such\nindexes during parallel vacuum). Do you have any other ideas for\nthis?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Oct 2019 14:43:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Another idea could be each index AM tell whether it uses\n> maintainence_work_mem or not and based on that we can do the\n> computation (divide the maintainence_work_mem by the number of such\n> indexes during parallel vacuum).\n\nFWIW, that seems like a perfectly reasonable API addition to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Oct 2019 11:40:26 +0200", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Thu, Oct 17, 2019 at 6:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 16, 2019 at 5:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 16, 2019 at 3:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > It is not that currently, other indexes don't use any additional\n> > > memory (except for maintainence_work_mem). For example, Gist index\n> > > can use memory for collecting empty leaf pages and internal pages. I\n> > > am not sure if we can do anything for such cases. 
The case for Gin\n> > > index seems to be clear and it seems to be having the risk of using\n> > > much more memory, so why not try to do something for it?\n> >\n> > Yeah gin indexes is clear now and I agree that we need to do something\n> > for it. But I'm also concerned third party index AMs. Similar to the\n> > problem related to IndexBulkDeleteResult structure that we're\n> > discussing on another thread I thought that we have the same problem\n> > on this.\n> >\n>\n> I understand your concern, but I am not sure what is a good way to\n> deal with it. I think we can do something generic like divide the\n> maintainence_work_mem equally among workers, but then the indexes that\n> use maintainence_work_mem will suffer if the number of such indexes is\n> much less than the indexes that don't use maintainence_work_mem.\n> Another idea could be each index AM tell whether it uses\n> maintainence_work_mem or not and based on that we can do the\n> computation (divide the maintainence_work_mem by the number of such\n> indexes during parallel vacuum). 
Do you have any other ideas for\n> this?\n>\n\nI was thinking the similar idea to the latter idea: ask index AM the\namount of memory (that should be part of maintenance_work_mem) it will\nconsume and then compute the new limit for both heap scan and index\nvacuuming based on that.\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Thu, 17 Oct 2019 21:34:41 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Thu, Oct 17, 2019 at 6:05 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 17, 2019 at 6:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Oct 16, 2019 at 5:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 16, 2019 at 3:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > It is not that currently, other indexes don't use any additional\n> > > > memory (except for maintainence_work_mem). For example, Gist index\n> > > > can use memory for collecting empty leaf pages and internal pages. I\n> > > > am not sure if we can do anything for such cases. The case for Gin\n> > > > index seems to be clear and it seems to be having the risk of using\n> > > > much more memory, so why not try to do something for it?\n> > >\n> > > Yeah gin indexes is clear now and I agree that we need to do something\n> > > for it. But I'm also concerned third party index AMs. Similar to the\n> > > problem related to IndexBulkDeleteResult structure that we're\n> > > discussing on another thread I thought that we have the same problem\n> > > on this.\n> > >\n> >\n> > I understand your concern, but I am not sure what is a good way to\n> > deal with it. 
I think we can do something generic like divide the\n> > maintainence_work_mem equally among workers, but then the indexes that\n> > use maintainence_work_mem will suffer if the number of such indexes is\n> > much less than the indexes that don't use maintainence_work_mem.\n> > Another idea could be each index AM tell whether it uses\n> > maintainence_work_mem or not and based on that we can do the\n> > computation (divide the maintainence_work_mem by the number of such\n> > indexes during parallel vacuum). Do you have any other ideas for\n> > this?\n> >\n>\n> I was thinking the similar idea to the latter idea: ask index AM the\n> amount of memory (that should be part of maintenance_work_mem) it will\n> consume and then compute the new limit for both heap scan and index\n> vacuuming based on that.\n>\n\nOh, that would be tricky as compared to what I am proposing because it\nmight not be easy for indexAM to tell upfront the amount of memory it\nneeds. I think we can keep it simple for now.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Oct 2019 08:11:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Thu, Oct 17, 2019 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Another idea could be each index AM tell whether it uses\n> > maintainence_work_mem or not and based on that we can do the\n> > computation (divide the maintainence_work_mem by the number of such\n> > indexes during parallel vacuum).\n>\n> FWIW, that seems like a perfectly reasonable API addition to me.\n>\n\nThanks, Sawada-san, if you also think this API makes sense, then we\ncan try to write a patch and see how it turns out?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Oct 2019 08:13:10 +0530", "msg_from": "Amit 
Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: maintenance_work_mem used by Vacuum" }, { "msg_contents": "On Fri, Oct 18, 2019, 11:43 Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Oct 17, 2019 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > Another idea could be each index AM tell whether it uses\n> > > maintainence_work_mem or not and based on that we can do the\n> > > computation (divide the maintainence_work_mem by the number of such\n> > > indexes during parallel vacuum).\n> >\n> > FWIW, that seems like a perfectly reasonable API addition to me.\n> >\n>\n> Thanks, Sawada-san, if you also think this API makes sense, then we\n> can try to write a patch and see how it turns out?\n>\n>\n Yeah agreed. I can write this patch next week and will\nshare it.\n\nRegards,", "msg_date": "Fri, 18 Oct 2019 12:27:43 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: maintenance_work_mem used by Vacuum" } ]
[ { "msg_contents": "Hi!\n\nThis message is follow up to the \"Get rid of the StdRdOptions\" patch thread:\nhttps://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m\n\nI've split patch into even smaller parts and commitfest want each patch in \nseparate thread. So it is new thread.\n\nThe idea of this patch is following: If you read the code, partitioned tables \ndo not have any options (you will not find RELOPT_KIND_PARTITIONED in \nboolRelOpts, intRelOpts, realRelOpts, stringRelOpts and enumRelOpts in \nreloption.c), but it uses StdRdOptions to store them (these no options).\n\nIf partitioned table is to have it's own option set (even if it is empty now) \nit would be better to save it into separate structure, like it is done for \nViews, not adding them to StdRdOptions, like current code hints to do.\n\nSo in this patch I am creating a structure that would store partitioned table \noptions (it is empty for now as there are no options for this relation kind), \nand all other code that would use this structure as soon as the first option \ncomes.\n\nI think it is bad idea to suggest option adder to ad it to StdRdOption, we \nalready have a big mess there. Better if he add it to an new empty structure.\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Sun, 06 Oct 2019 15:47:46 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "[PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "On Sun, Oct 06, 2019 at 03:47:46PM +0300, Nikolay Shaplov wrote:\n> This message is follow up to the \"Get rid of the StdRdOptions\" patch thread:\n> https://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m\n> \n> I've split patch into even smaller parts and commitfest want each patch in \n> separate thread. 
So it is new thread.\n\nSplitting concepts into different threads may be fine, and usually\nrecommended. Splitting a set of patches into multiple entries to ease\nreview and your goal to get a patch integrated and posted all these\ninto the same thread is usually recommended. Now posting a full set\nof patches across multiple threads, in way so as they have\ndependencies with each other, is what I would call a confusing\nsituation. That's hard to follow.\n\n> The idea of this patch is following: If you read the code, partitioned tables \n> do not have any options (you will not find RELOPT_KIND_PARTITIONED in \n> boolRelOpts, intRelOpts, realRelOpts, stringRelOpts and enumRelOpts in \n> reloption.c), but it uses StdRdOptions to store them (these no options).\n\nI am not even sure that we actually need that. What kind of reloption\nyou would think would suit into this subset?\n--\nMichael", "msg_date": "Mon, 7 Oct 2019 14:57:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "В письме от понедельник, 7 октября 2019 г. 14:57:14 MSK пользователь Michael \nPaquier написал:\n> On Sun, Oct 06, 2019 at 03:47:46PM +0300, Nikolay Shaplov wrote:\n> > This message is follow up to the \"Get rid of the StdRdOptions\" patch\n> > thread: https://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m\n> > \n> > I've split patch into even smaller parts and commitfest want each patch in\n> > separate thread. So it is new thread.\n> \n> Splitting concepts into different threads may be fine, and usually\n> recommended. Splitting a set of patches into multiple entries to ease\n> review and your goal to get a patch integrated and posted all these\n> into the same thread is usually recommended. 
Now posting a full set\n> of patches across multiple threads, in way so as they have\n> dependencies with each other, is what I would call a confusing\n> situation. That's hard to follow.\nI understand that. I've tried to add new patches to original thread, but \ncommitfest did not accept that for some reason. You can try to add patch from \nthis letter https://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m \njust to see how it works.\n\nSince discussion actually did not started yet, it can be moved anywhere you \nsuggest, but please tell how exactly it should be done, because I do not \nunderstand what is the better way.\n\n> > The idea of this patch is following: If you read the code, partitioned\n> > tables do not have any options (you will not find RELOPT_KIND_PARTITIONED\n> > in boolRelOpts, intRelOpts, realRelOpts, stringRelOpts and enumRelOpts in\n> > reloption.c), but it uses StdRdOptions to store them (these no options).\n> I am not even sure that we actually need that. What kind of reloption\n> you would think would suit into this subset?\n\nActually I do not know. But the author of partitioned patch, added a stub for \npartitioned tables to have some reloptions in future. But this stub is \ndesigned to use StdRdOptions. Which is not correct, as I presume. So here I am \ncorrecting the stub.\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Mon, 07 Oct 2019 12:42:39 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "Hello,\n\nOn Mon, Oct 7, 2019 at 6:43 PM Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> В письме от понедельник, 7 октября 2019 г. 
14:57:14 MSK пользователь Michael\n> Paquier написал:\n> > On Sun, Oct 06, 2019 at 03:47:46PM +0300, Nikolay Shaplov wrote:\n> > > The idea of this patch is following: If you read the code, partitioned\n> > > tables do not have any options (you will not find RELOPT_KIND_PARTITIONED\n> > > in boolRelOpts, intRelOpts, realRelOpts, stringRelOpts and enumRelOpts in\n> > > reloption.c), but it uses StdRdOptions to store them (these no options).\n> > I am not even sure that we actually need that. What kind of reloption\n> > you would think would suit into this subset?\n>\n> Actually I do not know. But the author of partitioned patch, added a stub for\n> partitioned tables to have some reloptions in future. But this stub is\n> designed to use StdRdOptions. Which is not correct, as I presume. So here I am\n> correcting the stub.\n\nI wrote the patch that introduced RELOPT_KIND_PARTITIONED. Yes, it\nwas added as a stub relopt_kind to eventually apply to reloptions that\ncould be sensibly applied to partitioned tables. We never got around\nto working on making the existing reloptions relevant to partitioned\ntables, nor did we add any new partitioned table-specific reloptions,\nso it remained an unused relopt_kind.\n\nIIUC, this patch invents PartitionedRelOptions as the binary\nrepresentation for future RELOPT_KIND_PARTITIONED parameters. As long\nas others are on board with using different *Options structs for\ndifferent object kinds, I see no problem with this idea.\n\nThanks,\nAmit\n\n\n", "msg_date": "Tue, 8 Oct 2019 16:00:49 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "В письме от вторник, 8 октября 2019 г. 
16:00:49 MSK пользователь Amit Langote \nнаписал:\n\n> > > > The idea of this patch is following: If you read the code, partitioned\n> > > > tables do not have any options (you will not find\n> > > > RELOPT_KIND_PARTITIONED\n> > > > in boolRelOpts, intRelOpts, realRelOpts, stringRelOpts and enumRelOpts\n> > > > in\n> > > > reloption.c), but it uses StdRdOptions to store them (these no\n> > > > options).\n> > > \n> > > I am not even sure that we actually need that. What kind of reloption\n> > > you would think would suit into this subset?\n> > \n> > Actually I do not know. But the author of partitioned patch, added a stub\n> > for partitioned tables to have some reloptions in future. But this stub\n> > is designed to use StdRdOptions. Which is not correct, as I presume. So\n> > here I am correcting the stub.\n> \n> I wrote the patch that introduced RELOPT_KIND_PARTITIONED. Yes, it\n> was added as a stub relopt_kind to eventually apply to reloptions that\n> could be sensibly applied to partitioned tables. We never got around\n> to working on making the existing reloptions relevant to partitioned\n> tables, nor did we add any new partitioned table-specific reloptions,\n> so it remained an unused relopt_kind.\nThank you for clarifying thing.\n \n> IIUC, this patch invents PartitionedRelOptions as the binary\n> representation for future RELOPT_KIND_PARTITIONED parameters. As long\n> as others are on board with using different *Options structs for\n> different object kinds, I see no problem with this idea.\nYes, this is correct. Some Access Methods already use it's own Options \nstructure. As far as I can guess StdRdOptions still exists only for historical \nreasons, and became quite a mess because of adding all kind of stuff there. 
\nBetter to separate it.\n\nBTW, as far as you are familiar with this part of the code, may be you will \njoin the review if this particular patch?\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)\n\n\n", "msg_date": "Tue, 08 Oct 2019 13:50:23 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "Hello,\n\nOn Tue, Oct 8, 2019 at 7:50 PM Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> В письме от вторник, 8 октября 2019 г. 16:00:49 MSK пользователь Amit Langote\n> написал:\n> > IIUC, this patch invents PartitionedRelOptions as the binary\n> > representation for future RELOPT_KIND_PARTITIONED parameters. As long\n> > as others are on board with using different *Options structs for\n> > different object kinds, I see no problem with this idea.\n> Yes, this is correct. Some Access Methods already use it's own Options\n> structure. 
As far as I can guess StdRdOptions still exists only for historical\n> reasons, and became quite a mess because of adding all kind of stuff there.\n> Better to separate it.\n>\n> BTW, as far as you are familiar with this part of the code, may be you will\n> join the review if this particular patch?\n\nSure, I will try to check your patch.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 10 Oct 2019 14:58:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "Hello,\n\nOn Sun, Oct 6, 2019 at 9:48 PM Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> This message is follow up to the \"Get rid of the StdRdOptions\" patch thread:\n> https://www.postgresql.org/message-id/2620882.s52SJui4ql@x200m\n>\n> I've split patch into even smaller parts and commitfest want each patch in\n> separate thread. So it is new thread.\n>\n> The idea of this patch is following: If you read the code, partitioned tables\n> do not have any options (you will not find RELOPT_KIND_PARTITIONED in\n> boolRelOpts, intRelOpts, realRelOpts, stringRelOpts and enumRelOpts in\n> reloption.c), but it uses StdRdOptions to store them (these no options).\n>\n> If partitioned table is to have it's own option set (even if it is empty now)\n> it would be better to save it into separate structure, like it is done for\n> Views, not adding them to StdRdOptions, like current code hints to do.\n>\n> So in this patch I am creating a structure that would store partitioned table\n> options (it is empty for now as there are no options for this relation kind),\n> and all other code that would use this structure as soon as the first option\n> comes.\n>\n> I think it is bad idea to suggest option adder to ad it to StdRdOption, we\n> already have a big mess there. 
Better if he add it to an new empty structure.\n\nI tend to agree that this improves readability of the reloptions code a bit.\n\nSome comments on the patch:\n\nHow about naming the function partitioned_table_reloptions() instead\nof partitioned_reloptions()?\n\n+ * Option parser for partitioned relations\n\nSince partitioned_reloptions only caters to partitioned \"tables\",\nmaybe use \"tables\" here instead of \"relations\".\n\n+ /*\n+ * Since there is no options for patitioned table for now, we just do\n+ * validation to report incorrect option error and leave.\n+ */\n\nFix typo and minor rewording:\n\n\"Since there are no options for partitioned tables...\"\n\n+ switch ((int)relkind)\n\nNeeds a space between (int) and relkind, but I don't think the (int)\ncast is really necessary. I don't see it in other places.\n\n+ * Binary representation of relation options for Partitioned relations.\n\n\"for partitioned tables\".\n\nSpeaking of partitioned relations vs tables, I see that you didn't\ntouch partitioned indexes (RELKIND_PARTITIONED_INDEX relations). Is\nthat because we leave option handling to the index AMs?\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 10 Oct 2019 15:50:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "В Thu, 10 Oct 2019 15:50:05 +0900\nAmit Langote <amitlangote09@gmail.com> пишет:\n\n> > I think it is bad idea to suggest option adder to ad it to\n> > StdRdOption, we already have a big mess there. Better if he add it\n> > to an new empty structure.\n> \n> I tend to agree that this improves readability of the reloptions code\n> a bit.\n> \n> Some comments on the patch:\n> \n> How about naming the function partitioned_table_reloptions() instead\n> of partitioned_reloptions()?\n\nI have my doubts about using word table here... 
In relational model\nthere are no such concept as \"table\", it uses concept \"relation\". And\nin postgres there were no tables, there were only relations. Heap\nrelation, toast relation, all kind of index relations... I do not\nunderstand when and why word \"table\" appeared when we speak about some\nvirtual relation that is made of several real heap relation. That is\nwhy I am not using the word table here...\n\nBut since you are the author of partition table code, and this code is\nalready accepted in the core, you should know better. So I will change\nit the way you say.\n \n> + switch ((int)relkind)\n> \n> Needs a space between (int) and relkind, but I don't think the (int)\n> cast is really necessary. I don't see it in other places.\nOh. Yeh. This is my mistake... I had some strange compilation\nproblems, and this is a remnant of my efforts to find the cause of\nit, I've forgot to clean...\nThanks!\n \n> Speaking of partitioned relations vs tables, I see that you didn't\n> touch partitioned indexes (RELKIND_PARTITIONED_INDEX relations). Is\n> that because we leave option handling to the index AMs?\n\nBecause for partitioned indexes the code says \"Use same options\noriginal index does\"\n\nbytea * \nextractRelOptions(HeapTuple tuple, TupleDesc tupdesc, \n amoptions_function amoptions) \n{ \n............ \n switch (classForm->relkind) \n { \n case RELKIND_RELATION: \n case RELKIND_TOASTVALUE: \n case RELKIND_MATVIEW: \n options = heap_reloptions(classForm->relkind, datum, false); \n break; \n case RELKIND_PARTITIONED_TABLE: \n options = partitioned_table_reloptions(datum, false); \n break; \n case RELKIND_VIEW: \n options = view_reloptions(datum, false); \n break; \n case RELKIND_INDEX: \n case RELKIND_PARTITIONED_INDEX: \n options = index_reloptions(amoptions, datum, false); \n break; \n\n\nHere you see the function accepts amoptions method that know how to\nparse options for a particular index, and passes it to index_reloptions\nfunctions. 
For both indexes and partitioned indexes it is taken from AM\n\"method\" amoptions. So options uses exactly the same code and same\noptions both for indexes and for partitioned indexes.\n\nI do not know if it is correct from global point of view, but from the\npoint of view of reloptions engine, it does not require any attention:\nchange index options and get partitioned_index options for free...\n\nActually I would expect some problems there, because sooner or later,\nsomebody would want to set custom fillfactor to partitioned table, or\nset some custom autovacuum options for it. But I would prefer to think\nabout it when I am done with reloption engine rewriting... Working in\nboth direction will cause more trouble then get benefits...", "msg_date": "Fri, 11 Oct 2019 13:37:59 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "Hi Nikolay,\n\nSorry for the late reply.\n\nOn Fri, Oct 11, 2019 at 7:38 PM Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> В Thu, 10 Oct 2019 15:50:05 +0900\n> Amit Langote <amitlangote09@gmail.com> пишет:\n> > > I think it is bad idea to suggest option adder to ad it to\n> > > StdRdOption, we already have a big mess there. Better if he add it\n> > > to an new empty structure.\n> >\n> > I tend to agree that this improves readability of the reloptions code\n> > a bit.\n> >\n> > Some comments on the patch:\n> >\n> > How about naming the function partitioned_table_reloptions() instead\n> > of partitioned_reloptions()?\n>\n> I have my doubts about using word table here... In relational model\n> there are no such concept as \"table\", it uses concept \"relation\". And\n> in postgres there were no tables, there were only relations. Heap\n> relation, toast relation, all kind of index relations... I do not\n> understand when and why word \"table\" appeared when we speak about some\n> virtual relation that is made of several real heap relation. That is\n> why I am not using the word table here...\n>\n> But since you are the author of partition table code, and this code is\n> already accepted in the core, you should know better. 
So I will change\n> it the way you say.\n\nSure, they're all relations in the abstract, but we've got to\ndistinguish different kinds in the code somehow.\n\nAnyway, I just want the code to be consistent with what we've already\ngot, especially, considering that we might need similar function for\npartitioned \"indexes\" in the future.\n\n> > Speaking of partitioned relations vs tables, I see that you didn't\n> > touch partitioned indexes (RELKIND_PARTITIONED_INDEX relations). Is\n> > that because we leave option handling to the index AMs?\n>\n> Because for partitioned indexes the code says \"Use same options\n> original index does\"\n>\n> bytea *\n> extractRelOptions(HeapTuple tuple, TupleDesc tupdesc,\n> amoptions_function amoptions)\n> {\n> ............\n> switch (classForm->relkind)\n> {\n> case RELKIND_RELATION:\n> case RELKIND_TOASTVALUE:\n> case RELKIND_MATVIEW:\n> options = heap_reloptions(classForm->relkind, datum, false);\n> break;\n> case RELKIND_PARTITIONED_TABLE:\n> options = partitioned_table_reloptions(datum, false);\n> break;\n> case RELKIND_VIEW:\n> options = view_reloptions(datum, false);\n> break;\n> case RELKIND_INDEX:\n> case RELKIND_PARTITIONED_INDEX:\n> options = index_reloptions(amoptions, datum, false);\n> break;\n>\n>\n> Here you see the function accepts amoptions method that know how to\n> parse options for a particular index, and passes it to index_reloptions\n> functions. For both indexes and partitioned indexes it is taken from AM\n> \"method\" amoptions. 
So options uses exactly the same code and same\n> options both for indexes and for partitioned indexes.\n>\n> I do not know if it is correct from global point of view, but from the\n> point of view of reloptions engine, it does not require any attention:\n> change index options and get partitioned_index options for free...\n>\n> Actually I would expect some problems there, because sooner or later,\n> somebody would want to set custom fillfactor to partitioned table, or\n> set some custom autovacuum options for it.\n\nYeah, I have never run into this code before, but this might need\nrevisiting, if only to be consistent with the table counterpart.\n\n> But I would prefer to think\n> about it when I am done with reloption engine rewriting... Working in\n> both direction will cause more trouble then get benefits...\n\nSure, this seems like a topic for another thread.\n\nI looked atthe v2 patch and noticed a typo:\n\n+ * Binary representation of relation options for rtitioned tables.\n\ns/rtitioned/partitioned/g\n\nOther than that, looks good.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 23 Oct 2019 11:59:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "В письме от среда, 23 октября 2019 г. 11:59:45 MSK пользователь Amit Langote \nнаписал:\n\n> Sorry for the late reply.\nSame apologies from my side. 
Get decent time-slot for postgres dev only now.\n\n\n\n> I looked atthe v2 patch and noticed a typo:\n> \n> + * Binary representation of relation options for rtitioned tables.\n> \n> s/rtitioned/partitioned/g\n> \n> Other than that, looks good.\nHere goes v3 patch with the typo fixed\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Mon, 11 Nov 2019 17:22:32 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "On Mon, Nov 11, 2019 at 05:22:32PM +0300, Nikolay Shaplov wrote:\n> Here goes v3 patch with the typo fixed\n\nStill one here in v3 of the patch:\n+ * Since there are no options for patitioned tables for now, we just do\n+ * validation to report incorrect option error and leave.\nIt looks like you are having a hard time with partitioned tables.\n\n+ if (validate)\n+ parseRelOptions(reloptions, validate, RELOPT_KIND_PARTITIONED,\n+ &numoptions);\nWe have been through great length to have build_reloptions, so\nwouldn't it be better to also have this code path do so? Sure you\nneed to pass NULL for the parsing table.. But there is a point to\nreduce the code paths using directly parseRelOptions() and the\nfollow-up, expected calls to allocate and to fill in the set of\nreloptions.\n--\nMichael", "msg_date": "Tue, 12 Nov 2019 13:50:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "On Tue, Nov 12, 2019 at 01:50:03PM +0900, Michael Paquier wrote:\n> We have been through great length to have build_reloptions, so\n> wouldn't it be better to also have this code path do so? Sure you\n> need to pass NULL for the parsing table.. 
But there is a point to\n> reduce the code paths using directly parseRelOptions() and the\n> follow-up, expected calls to allocate and to fill in the set of\n> reloptions.\n\nSo I have been looking at this one, and finished with the attached.\nIt looks much better to use build_reloptions() IMO, taking advantage\nof the same sanity checks present for the other relkinds.\n--\nMichael", "msg_date": "Wed, 13 Nov 2019 16:30:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "В письме от среда, 13 ноября 2019 г. 16:30:29 MSK пользователь Michael Paquier \nнаписал:\n> On Tue, Nov 12, 2019 at 01:50:03PM +0900, Michael Paquier wrote:\n> > We have been through great length to have build_reloptions, so\n> > wouldn't it be better to also have this code path do so? Sure you\n> > need to pass NULL for the parsing table.. But there is a point to\n> > reduce the code paths using directly parseRelOptions() and the\n> > follow-up, expected calls to allocate and to fill in the set of\n> > reloptions.\n> \n> So I have been looking at this one, and finished with the attached.\n> It looks much better to use build_reloptions() IMO, taking advantage\n> of the same sanity checks present for the other relkinds.\nThanks!\n\nI did not read that thread yet, when I sent v3 patch.\nbuild_reloptions is a good stuff and we should use it for sure.\n\nI've looked at yours v4 version of the patch, it is exactly what we need here. 
\nCan we commit it as it is?\n\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)\n\n\n", "msg_date": "Wed, 13 Nov 2019 17:02:24 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" }, { "msg_contents": "On Wed, Nov 13, 2019 at 05:02:24PM +0300, Nikolay Shaplov wrote:\n> I did not read that thread yet, when I sent v3 patch.\n> build_reloptions is a good stuff and we should use it for sure.\n> \n> I've looked at yours v4 version of the patch, it is exactly what we need here. \n> Can we commit it as it is?\n\nI have done an extra lookup, removing PartitionedRelOptions because we\nhave no need for it yet, and committed the split. \n--\nMichael", "msg_date": "Thu, 14 Nov 2019 12:38:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] use separate PartitionedRelOptions structure to store\n partitioned table options" } ]
[ { "msg_contents": "Hi! I am starting a new thread as commitfest wants new thread for new patch.\n\nThis new thread is a follow up to an https://www.postgresql.org/message-id/\n2620882.s52SJui4ql%40x200m thread, where I've been trying to get rid of \nStdRdOpions structure, and now I've splitted the patch into smaller parts.\n\nRead the quote below, to get what this patch is about\n\n> I've been thinking about this patch and came to a conclusion that it\n> would be better to split it to even smaller parts, so they can be\n> easily reviewed, one by one. May be even leaving most complex\n> Heap/Toast part for later.\n> \n> This is first part of this patch. Here we give each Access Method it's\n> own option binary representation instead of StdRdOptions.\n> \n> I think this change is obvious. Because, first, Access Methods do not\n> use most of the values defined in StdRdOptions.\n> \n> Second this patch make Options structure code unified for all core\n> Access Methods. Before some AM used StdRdOptions, some AM used it's own\n> structure, now all AM uses own structures and code is written in the\n> same style, so it would be more easy to update it in future.\n> \n> John Dent, would you join reviewing this part of the patch? I hope it\n> will be more easy now...\n\nAnd now I have a newer version of the patch, as I forgot to remove \nvacuum_cleanup_index_scale_factor from StdRdOptions as it was used only in \nBtree index and now do not used at all. New version fixes it.\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Sun, 06 Oct 2019 16:45:27 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "Hi Nikolay,\n\nI like the new approach. 
And I agree with the ambition — to split out the representation from StdRdOptions.\n\nHowever, with that change, in the AM’s *options() function, it looks as if you could simply add new fields to the relopt_parse_elt list. That’s still not true, because parseRelOptions() will fail to find a matching entry, causing numoptions to be left zero, and an early exit made. (At least, that’s if I correctly understand how things work.)\n\nI think that is fine as an interim limitation, because your change has not yet fully broken the connection to the boolRelOpts, intRelOpts, realRelOpts and stringRelOpts structures. But perhaps a comment would help to make it clear. Perhaps add something like this above the tab[]: \"When adding or changing a relopt in the relopt_parse_elt tab[], ensure the corresponding change is made to boolRelOpts, intRelOpts, realRelOpts or stringRelOpts.\"\n\ndenty.\n\n> On 6 Oct 2019, at 14:45, Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> \n> Hi! I am starting a new thread as commitfest wants new thread for new patch.\n> \n> This new thread is a follow up to an https://www.postgresql.org/message-id/\n> 2620882.s52SJui4ql%40x200m thread, where I've been trying to get rid of \n> StdRdOpions structure, and now I've splitted the patch into smaller parts.\n> \n> Read the quote below, to get what this patch is about\n> \n>> I've been thinking about this patch and came to a conclusion that it\n>> would be better to split it to even smaller parts, so they can be\n>> easily reviewed, one by one. May be even leaving most complex\n>> Heap/Toast part for later.\n>> \n>> This is first part of this patch. Here we give each Access Method it's\n>> own option binary representation instead of StdRdOptions.\n>> \n>> I think this change is obvious. Because, first, Access Methods do not\n>> use most of the values defined in StdRdOptions.\n>> \n>> Second this patch make Options structure code unified for all core\n>> Access Methods. 
Before some AM used StdRdOptions, some AM used it's own\n>> structure, now all AM uses own structures and code is written in the\n>> same style, so it would be more easy to update it in future.\n>> \n>> John Dent, would you join reviewing this part of the patch? I hope it\n>> will be more easy now...\n> \n> And now I have a newer version of the patch, as I forgot to remove \n> vacuum_cleanup_index_scale_factor from StdRdOptions as it was used only in \n> Btree index and now do not used at all. New version fixes it.\n> \n> -- \n> Software Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\n> Body-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)<do-not-use-StdRdOptions-in-AM_2.diff>\n\n\n\n", "msg_date": "Mon, 7 Oct 2019 18:55:20 +0100", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "In a message of Monday, October 7, 2019 18:55:20 MSK, Dent John \nwrote:\n\n> I like the new approach. And I agree with the ambition — to split out the\n> representation from StdRdOptions.\nThanks.\n \n> However, with that change, in the AM’s *options() function, it looks as if\n> you could simply add new fields to the relopt_parse_elt list. That’s still\n> not true, because parseRelOptions() will fail to find a matching entry,\n> causing numoptions to be left zero, and an early exit made. (At least,\n> that’s if I correctly understand how things work.)\n> \n> I think that is fine as an interim limitation, because your change has not\n> yet fully broken the connection to the boolRelOpts, intRelOpts, realRelOpts\n> and stringRelOpts structures. But perhaps a comment would help to make it\n> clear. 
Perhaps add something like this above the tab[]: \"When adding or\n> changing a relopt in the relopt_parse_elt tab[], ensure the corresponding\n> change is made to boolRelOpts, intRelOpts, realRelOpts or stringRelOpts.\"\nWhoa-whoa!\n\nI am not inventing something new here. Same code is already used in brin \n(brin.c:820), gin (ginutils.c:602) and gist (gistutils.c:909) indexes. I've \njust copied the idea, to do all index code uniform.\n\nThis does not mean that these code can't be improved, but as far as I can \nguess it is better to do it in small steps, first make option code uniform, and \nthen improve all of it this way or another...\n\nSo here I would suggest discussing whether I copied this code correctly, without \ngoing very deeply into discussion of how we can improve the code I've used as a \nsource for cloning.\n\nAnd then I have ideas how to do it better. But this will come later...\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Tue, 08 Oct 2019 13:33:34 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "> On 8 Oct 2019, at 11:33, Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> \n> Whoa-whoa!\n> \n> I am not inventing something new here. Same code is already used in brin \n> (brin.c:820), gin (ginutils.c:602) and gist (gistutils.c:909) indexes. 
I've \n> just copied the idea, to do all index code uniform.\n> \n> This does not mean that these code can't be improved, but as far as I can \n> guess it is better to do it in small steps, first make option code uniform, and \n> then improve all of it this way or another...\n\nI didn’t spot it was an existing pattern.\n\nAnd I agree — making the code uniform will make it easier to evolve in future.\n\nGets my vote.\n\ndenty.\n\n", "msg_date": "Wed, 9 Oct 2019 20:26:14 +0100", "msg_from": "Dent John <denty@QQdd.eu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "In a message of Wednesday, October 9, 2019 20:26:14 MSK, Dent John \nwrote:\n\n> I didn’t spot it was an existing pattern.\nSorry, this might be my mistake; I should have explicitly mentioned it in the first \nplace. \n \n> And I agree — making the code uniform will make it easier to evolve in\n> future.\n> \n> Gets my vote.\nThanks!\n\nCan you please check that I did not make any mistakes while copying and adapting the \ncode.\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Thu, 10 Oct 2019 00:37:44 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "Hello Nikolay,\n\nI read comments that Tomas left at:\nhttps://www.postgresql.org/message-id/20190727173841.7ypzo4xuzizvijge%40development\n\nI'd like to join Michael in reiterating one point from Tomas' review.\nI think the patch can go further in trying to make the code in this\narea more maintainable.\n\nFor example, even without this patch, the following stanza is repeated\nin many places:\n\n options = parseRelOptions(reloptions, validate, foo_relopt_kind,\n&numoptions);\n rdopts = allocateReloptStruct(sizeof(FooOptions), options, numoptions);\n 
fillRelOptions((void *) rdopts, sizeof(FooOptions), options, numoptions,\n validate, foo_relopt_tab, lengthof(foo_relopt_tab));\n return (bytea *) rdopts;\n\nand this patch adds few more instances as it's adding more Options structs.\n\nI think it wouldn't be hard to encapsulate the above stanza in a new\npublic function in reloptions.c and call it from the various places\nthat now have the above code.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 10 Oct 2019 17:17:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "В письме от четверг, 10 октября 2019 г. 17:17:30 MSK пользователь Amit Langote \nнаписал:\n\n> I read comments that Tomas left at:\n> https://www.postgresql.org/message-id/20190727173841.7ypzo4xuzizvijge%40deve\n> lopment\n> \n> I'd like to join Michael in reiterating one point from Tomas' review.\n> I think the patch can go further in trying to make the code in this\n> area more maintainable.\n> \n> For example, even without this patch, the following stanza is repeated\n> in many places:\n> \n> options = parseRelOptions(reloptions, validate, foo_relopt_kind,\n> &numoptions);\n> rdopts = allocateReloptStruct(sizeof(FooOptions), options, numoptions);\n> fillRelOptions((void *) rdopts, sizeof(FooOptions), options, numoptions,\n> validate, foo_relopt_tab, lengthof(foo_relopt_tab)); return (bytea *)\n> rdopts;\n> \n> and this patch adds few more instances as it's adding more Options structs.\n> \n> I think it wouldn't be hard to encapsulate the above stanza in a new\n> public function in reloptions.c and call it from the various places\n> that now have the above code.\nThe code of reloptions is very historical and illogical. I also noticed that \nthese lines are repeated several time. And may be it would be better to put \nthem into reloptions.c. But could anybody clearly explain what are they doing? 
\nJust to give function a proper name. I understand what they are doing, but I \nam unable to give short and clear explanation.\n\nI am planning to rewrite this part completely. So we have none of this lines \nrepeated. I had a proposal you can see it here https://\ncommitfest.postgresql.org/15/992/ but people on the list told be that patch is \ntoo complex and I should commit it part by part.\n\nSo I am doing it now. I am almost done. But to provide clear and logical patch \nthat introduces my concept, I need StdRdOption to be divided into separated \nstructures. At least for AM. And I need at to be done as simply as possible \nbecause the rest of the code is going to be rewritten anyway.\n\nThat is why I want to follow the steps: make the code uniform, and then \nimprove it. I have improvement in the pocket, but I need a uniform code before \nrevealing it.\n\nIf you think it is absolutely necessary to put these line into one function, I \nwill do it. It will not make code more clear, I guess. I See no benefits, but \nI can do it, but I would avoid doing it, if possible. At least at this step.\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)\n\n\n", "msg_date": "Fri, 11 Oct 2019 13:54:25 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "Hi Nikolay,\n\nSorry for the late reply.\n\nOn Fri, Oct 11, 2019 at 7:54 PM Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> В письме от четверг, 10 октября 2019 г. 
17:17:30 MSK пользователь Amit Langote\n> написал:\n> > I read comments that Tomas left at:\n> > https://www.postgresql.org/message-id/20190727173841.7ypzo4xuzizvijge%40deve\n> > lopment\n> >\n> > I'd like to join Michael in reiterating one point from Tomas' review.\n> > I think the patch can go further in trying to make the code in this\n> > area more maintainable.\n> >\n> > For example, even without this patch, the following stanza is repeated\n> > in many places:\n> >\n> > options = parseRelOptions(reloptions, validate, foo_relopt_kind,\n> > &numoptions);\n> > rdopts = allocateReloptStruct(sizeof(FooOptions), options, numoptions);\n> > fillRelOptions((void *) rdopts, sizeof(FooOptions), options, numoptions,\n> > validate, foo_relopt_tab, lengthof(foo_relopt_tab)); return (bytea *)\n> > rdopts;\n> >\n> > and this patch adds few more instances as it's adding more Options structs.\n> >\n> > I think it wouldn't be hard to encapsulate the above stanza in a new\n> > public function in reloptions.c and call it from the various places\n> > that now have the above code.\n>\n> The code of reloptions is very historical and illogical. I also noticed that\n> these lines are repeated several time. And may be it would be better to put\n> them into reloptions.c. But could anybody clearly explain what are they doing?\n> Just to give function a proper name. I understand what they are doing, but I\n> am unable to give short and clear explanation.\n\nMaybe call it BuildRelOptions(), which takes in reloptions in the text\narray format and returns a struct whose size is specified by the\ncaller. See the attached patch.\n\n> I am planning to rewrite this part completely. So we have none of this lines\n> repeated. I had a proposal you can see it here https://\n> commitfest.postgresql.org/15/992/ but people on the list told be that patch is\n> too complex and I should commit it part by part.\n>\n> So I am doing it now. I am almost done. 
But to provide clear and logical patch\n> that introduces my concept, I need StdRdOption to be divided into separated\n> structures. At least for AM. And I need at to be done as simply as possible\n> because the rest of the code is going to be rewritten anyway.\n>\n> That is why I want to follow the steps: make the code uniform, and then\n> improve it. I have improvement in the pocket, but I need a uniform code before\n> revealing it.\n>\n> If you think it is absolutely necessary to put these line into one function, I\n> will do it. It will not make code more clear, I guess. I See no benefits, but\n> I can do it, but I would avoid doing it, if possible. At least at this step.\n\nIMO, parts of the patch that only refactors the existing code should\nbe first in the list as it is easier to review, especially if it adds\nno new concepts. In this case, your patch to break StdRdOptions into\nmore manageable chunks would be easier to understand if it built upon\na simplified framework of parsing reloptions text arrays.\n\nThanks,\nAmit", "msg_date": "Wed, 23 Oct 2019 11:16:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Wed, Oct 23, 2019 at 11:16:25AM +0900, Amit Langote wrote:\n> IMO, parts of the patch that only refactors the existing code should\n> be first in the list as it is easier to review, especially if it adds\n> no new concepts. In this case, your patch to break StdRdOptions into\n> more manageable chunks would be easier to understand if it built upon\n> a simplified framework of parsing reloptions text arrays.\n\nThanks for doing a split. 
This helps in proving the point that this\nportion has independent value.\n\ns/BuildRelOptions/buildRelOptions/ for consistency with the other\nroutines (see first character's case-ing)?\n\n+/*\n+ * Using parseRelOptions(), allocateReloptStruct(), and fillRelOptions()\n+ * directly is Deprecated; use BuildRelOptions() instead.\n+ */\n extern relopt_value *parseRelOptions(Datum options, bool validate, \nCompatibility is surely a concern for existing extensions, but that's\nnot the first interface related to reloptions that we'd break in this\nrelease (/me whistles). So my take would be to move all the past\nroutines to be static and only within reloptions.c, and just publish\nthe new one. That's by far not the most popular API we provide.\n\n+ /*\n+ * Allocate and fill the struct. Caller-specified struct size and the\n+ * relopt_parse_elt table (relopt_elems + num_relopt_elems) must match.\n+ */\nThe comment should be about a multiplication, no? It seems to me that\nan assertion would be appropriate here then to insist on the\nrelationship between all that, and also it would be nice to document\nbetter what's expected from the caller of the new routine regarding\nall the arguments needed. In short, what's expected of\nrelopt_struct_size, relopt_elems and num_relopt_elems?\n--\nMichael", "msg_date": "Wed, 23 Oct 2019 12:51:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "Hi Michael,\n\nThanks for taking a look at this.\n\nOn Wed, Oct 23, 2019 at 12:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Oct 23, 2019 at 11:16:25AM +0900, Amit Langote wrote:\n> > IMO, parts of the patch that only refactors the existing code should\n> > be first in the list as it is easier to review, especially if it adds\n> > no new concepts. 
In this case, your patch to break StdRdOptions into\n> > more manageable chunks would be easier to understand if it built upon\n> > a simplified framework of parsing reloptions text arrays.\n>\n> Thanks for doing a split. This helps in proving the point that this\n> portion has independent value.\n>\n> s/BuildRelOptions/buildRelOptions/ for consistency with the other\n> routines (see first character's case-ing)?\n\nHmm, if we're inventing a new API to replace the old one, why not use\nthat opportunity to be consistent with our general style, which\npredominantly seems to be either words_separated_by_underscore() or\nUpperCamelCase(). Thoughts?\n\n> +/*\n> + * Using parseRelOptions(), allocateReloptStruct(), and fillRelOptions()\n> + * directly is Deprecated; use BuildRelOptions() instead.\n> + */\n> extern relopt_value *parseRelOptions(Datum options, bool validate,\n> Compatibility is surely a concern for existing extensions, but that's\n> not the first interface related to reloptions that we'd break in this\n> release (/me whistles). So my take would be to move all the past\n> routines to be static and only within reloptions.c, and just publish\n> the new one. That's by far not the most popular API we provide.\n\nOK, done.\n\n> + /*\n> + * Allocate and fill the struct. Caller-specified struct size and the\n> + * relopt_parse_elt table (relopt_elems + num_relopt_elems) must match.\n> + */\n> The comment should be about a multiplication, no?\n\nI didn't really mean to specify any mathematical operation by the \"+\"\nin that comment, but I can see how it's confusing. :)\n\n> It seems to me that\n> an assertion would be appropriate here then to insist on the\n> relationship between all that, and also it would be nice to document\n> better what's expected from the caller of the new routine regarding\n> all the arguments needed. 
In short, what's expected of\n> relopt_struct_size, relopt_elems and num_relopt_elems?\n\nYou might know already, but in short, the values in the passed-in\nrelopt_parse_elts array (relopt_elems) must fit within\nrelopt_struct_size. Writing an Assert turned out to be tricky given\nthat alignment must be considered, but I have tried to add one. Pleas\ncheck, it very well might be wrong. ;)\n\nAttached updated patch. It would be nice to hear whether this patch\nis really what Nikolay intended to eventually do with this code.\n\nThanks,\nAmit", "msg_date": "Fri, 25 Oct 2019 16:42:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Fri, Oct 25, 2019 at 04:42:24PM +0900, Amit Langote wrote:\n> Hmm, if we're inventing a new API to replace the old one, why not use\n> that opportunity to be consistent with our general style, which\n> predominantly seems to be either words_separated_by_underscore() or\n> UpperCamelCase(). Thoughts?\n\nNot wrong. Using small-case characters separated with underscores\nwould be more consistent with the rest perhaps? We use that for the\ninitialization of custom variables and for all the relkind-related\ninterfaces.\n\n> You might know already, but in short, the values in the passed-in\n> relopt_parse_elts array (relopt_elems) must fit within\n> relopt_struct_size. Writing an Assert turned out to be tricky given\n> that alignment must be considered, but I have tried to add one. Pleas\n> check, it very well might be wrong. ;)\n\nHmm. I didn't expect it to be this confusing with relopt_type_size[].\nI'll try to think about something :(\n\n+ * Parses reloptions provided by the caller in text array format and\n+ * fills and returns a struct containing the parsed option values\nThe sentence structure is weird, perhaps:\nThis routine parses \"reloptions\" provided by the caller in text-array\nformat. 
The parsing is done with a table describing the allowed\noptions, defined by \"relopt_elems\" of length \"num_relopt_elems\". The\nreturned result is a structure containing all the parsed option\nvalues.\n\n> Attached updated patch. It would be nice to hear whether this patch\n> is really what Nikolay intended to eventually do with this code.\n\nOkay, let's check if Nikolay likes this idea.\n--\nMichael", "msg_date": "Sat, 26 Oct 2019 11:45:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Sat, Oct 26, 2019 at 11:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Oct 25, 2019 at 04:42:24PM +0900, Amit Langote wrote:\n> > Hmm, if we're inventing a new API to replace the old one, why not use\n> > that opportunity to be consistent with our general style, which\n> > predominantly seems to be either words_separated_by_underscore() or\n> > UpperCamelCase(). Thoughts?\n>\n> Not wrong. Using small-case characters separated with underscores\n> would be more consistent with the rest perhaps? We use that for the\n> initialization of custom variables and for all the relkind-related\n> interfaces.\n\nOK, I went with build_reloptions(), which looks very similar to nearby\nexported functions.\n\n> + * Parses reloptions provided by the caller in text array format and\n> + * fills and returns a struct containing the parsed option values\n> The sentence structure is weird, perhaps:\n> This routine parses \"reloptions\" provided by the caller in text-array\n> format. The parsing is done with a table describing the allowed\n> options, defined by \"relopt_elems\" of length \"num_relopt_elems\". The\n> returned result is a structure containing all the parsed option\n> values.\n\nThanks, I have expanded the header comment based on your text.\n\n> > Attached updated patch. 
It would be nice to hear whether this patch\n> > is really what Nikolay intended to eventually do with this code.\n>\n> Okay, let's check if Nikolay likes this idea.\n\nAttached updated patch.\n\nThanks,\nAmit", "msg_date": "Mon, 28 Oct 2019 17:16:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On 2019-Oct-23, Michael Paquier wrote:\n\n> On Wed, Oct 23, 2019 at 11:16:25AM +0900, Amit Langote wrote:\n> > IMO, parts of the patch that only refactors the existing code should\n> > be first in the list as it is easier to review, especially if it adds\n> > no new concepts. In this case, your patch to break StdRdOptions into\n> > more manageable chunks would be easier to understand if it built upon\n> > a simplified framework of parsing reloptions text arrays.\n> \n> Thanks for doing a split. This helps in proving the point that this\n> portion has independent value.\n\nNot a split, yes? AFAICS this code is nowhere in Nikolay's proposed\npatchset -- it seems completely new development by Amit. Am I wrong?\n\nI also think that this has value -- let's go for it. I think I'll be\nback on Wednesday to review it, if you would prefer to wait.\n\nThanks\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 28 Oct 2019 12:02:20 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Mon, Oct 28, 2019 at 12:02:20PM -0300, Alvaro Herrera wrote:\n> I also think that this has value -- let's go for it. 
I think I'll be\n> back on Wednesday to review it, if you would prefer to wait.\n\nNo worries, thanks for looking it.\n--\nMichael", "msg_date": "Tue, 29 Oct 2019 09:15:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "Hi Alvaro,\n\nOn Tue, Oct 29, 2019 at 12:02 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2019-Oct-23, Michael Paquier wrote:\n> > On Wed, Oct 23, 2019 at 11:16:25AM +0900, Amit Langote wrote:\n> > > IMO, parts of the patch that only refactors the existing code should\n> > > be first in the list as it is easier to review, especially if it adds\n> > > no new concepts. In this case, your patch to break StdRdOptions into\n> > > more manageable chunks would be easier to understand if it built upon\n> > > a simplified framework of parsing reloptions text arrays.\n> >\n> > Thanks for doing a split. This helps in proving the point that this\n> > portion has independent value.\n>\n> Not a split, yes? AFAICS this code is nowhere in Nikolay's proposed\n> patchset -- it seems completely new development by Amit. Am I wrong?\n\nIIUC, Nikolay intended to write such a patch but only after getting\nsome consensus on breaking up StdRdOptions. 
I didn't look closely but\nan idea similar to the patch I posted (really as a PoC) might have\nbeen discussed couple of years ago, as Nikolay mentioned upthread:\n\nhttps://commitfest.postgresql.org/15/992/\n\nThanks,\nAmit\n\n\n", "msg_date": "Tue, 29 Oct 2019 13:23:15 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Mon, Oct 28, 2019 at 05:16:54PM +0900, Amit Langote wrote:\n> On Sat, Oct 26, 2019 at 11:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Fri, Oct 25, 2019 at 04:42:24PM +0900, Amit Langote wrote:\n>> > Hmm, if we're inventing a new API to replace the old one, why not use\n>> > that opportunity to be consistent with our general style, which\n>> > predominantly seems to be either words_separated_by_underscore() or\n>> > UpperCamelCase(). Thoughts?\n>>\n>> Not wrong. Using small-case characters separated with underscores\n>> would be more consistent with the rest perhaps? We use that for the\n>> initialization of custom variables and for all the relkind-related\n>> interfaces.\n>\n> OK, I went with build_reloptions(), which looks very similar to nearby\n> exported functions.\n\nThanks.\n\n>> + * Parses reloptions provided by the caller in text array format and\n>> + * fills and returns a struct containing the parsed option values\n>> The sentence structure is weird, perhaps:\n>> This routine parses \"reloptions\" provided by the caller in text-array\n>> format. The parsing is done with a table describing the allowed\n>> options, defined by \"relopt_elems\" of length \"num_relopt_elems\". The\n>> returned result is a structure containing all the parsed option\n>> values.\n>\n> Thanks, I have expanded the header comment based on your text.\n\nLooks fine. I have done some refinements as per the attached.\n\nRunning the regression tests of dummy_index_am has proved to be able\nto break the assertion you have added. 
I don't have a good idea of\nhow to make that more simple and reliable, but there is one thing\noutstanding here: the number of options parsed by parseRelOptions\nshould never be higher than num_relopt_elems. So let's at least be\nsafer about that.\n\nAlso if some options are parsed options will never be NULL, so there\nis no need to check for it before pfree()-ing it, no?\n\nAny comments from others? Alvaro perhaps?\n--\nMichael", "msg_date": "Wed, 30 Oct 2019 12:11:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "Hi Michael,\n\nOn Wed, Oct 30, 2019 at 12:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Looks fine. I have done some refinements as per the attached.\n\nThanks. This stood out to me:\n\n+ * The result is a structure containing all the parsed option values in\n+ * text-array format.\n\nThis sentence sounds wrong, because the result structure doesn't\ncontain values in text-array format. 
Individual values in the struct\nwould be in their native format (C bool for RELOPT_TYPE_BOOL, options,\netc.).\n\nMaybe we don't need this sentence, because the first line already says\nwe return a struct.\n\n> Running the regression tests of dummy_index_am has proved to be able\n> to break the assertion you have added.\n\nThis breakage seems to have to do with the fact that the definition of\nDummyIndexOptions struct and the entries of relopt_parse_elt table\ndon't agree?\n\nThese are the last two members of DummyIndexOptions struct:\n\n int option_string_val_offset;\n int option_string_null_offset;\n} DummyIndexOptions;\n\nbut di_relopt_tab's last two entries are these:\n\n add_string_reloption(di_relopt_kind, \"option_string_val\",\n \"String option for dummy_index_am with\nnon-NULL default\",\n \"DefaultValue\", &validate_string_option,\n AccessExclusiveLock);\n di_relopt_tab[4].optname = \"option_string_val\";\n di_relopt_tab[4].opttype = RELOPT_TYPE_STRING;\n di_relopt_tab[4].offset = offsetof(DummyIndexOptions,\n option_string_val_offset);\n\n /*\n * String option for dummy_index_am with NULL default, and without\n * description.\n */\n add_string_reloption(di_relopt_kind, \"option_string_null\",\n NULL, /* description */\n NULL, &validate_string_option,\n AccessExclusiveLock);\n di_relopt_tab[5].optname = \"option_string_null\";\n di_relopt_tab[5].opttype = RELOPT_TYPE_STRING;\n di_relopt_tab[5].offset = offsetof(DummyIndexOptions,\n option_string_null_offset);\n\nIf I fix the above code like this:\n\n@@ -113,7 +113,7 @@ create_reloptions_table(void)\n \"DefaultValue\", &validate_string_option,\n AccessExclusiveLock);\n di_relopt_tab[4].optname = \"option_string_val\";\n- di_relopt_tab[4].opttype = RELOPT_TYPE_STRING;\n+ di_relopt_tab[4].opttype = RELOPT_TYPE_INT;\n di_relopt_tab[4].offset = offsetof(DummyIndexOptions,\n option_string_val_offset);\n\n@@ -126,7 +126,7 @@ create_reloptions_table(void)\n NULL, &validate_string_option,\n 
AccessExclusiveLock);\n di_relopt_tab[5].optname = \"option_string_null\";\n di_relopt_tab[5].opttype = RELOPT_TYPE_STRING;\n di_relopt_tab[5].offset = offsetof(DummyIndexOptions,\n option_string_null_offset);\n\nIf I fix the above code like this:\n\n@@ -113,7 +113,7 @@ create_reloptions_table(void)\n \"DefaultValue\", &validate_string_option,\n AccessExclusiveLock);\n di_relopt_tab[4].optname = \"option_string_val\";\n- di_relopt_tab[4].opttype = RELOPT_TYPE_STRING;\n+ di_relopt_tab[4].opttype = RELOPT_TYPE_INT;\n di_relopt_tab[4].offset = offsetof(DummyIndexOptions,\n option_string_val_offset);\n\n@@ -126,7 +126,7 @@ create_reloptions_table(void)\n NULL, &validate_string_option,\n AccessExclusiveLock);\n di_relopt_tab[5].optname = \"option_string_null\";\n- di_relopt_tab[5].opttype = RELOPT_TYPE_STRING;\n+ di_relopt_tab[5].opttype = RELOPT_TYPE_INT;\n di_relopt_tab[5].offset = offsetof(DummyIndexOptions,\n option_string_null_offset);\n }\n\ntest passes.\n\nBut maybe this Assert isn't all that robust, so I'm happy to take it out.\n\n> Also if some options are parsed options will never be NULL, so there\n> is no need to check for it before pfree()-ing it, no?\n\nI haven't fully read parseRelOptions(), but I will trust you. :)\n\nThanks,\nAmit", "msg_date": "Thu, 31 Oct 2019 16:38:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Thu, Oct 31, 2019 at 04:38:55PM +0900, Amit Langote wrote:\n> On Wed, Oct 30, 2019 at 12:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> This sentence sounds wrong, because the result structure doesn't\n> contain values in text-array format.
Individual values in the struct\n> would be in their native format (C bool for RELOPT_TYPE_BOOL, options,\n> etc.).\n> \n> Maybe we don't need this sentence, because the first line already says\n> we return a struct.\n\nLet's remove it then.\n\n> This breakage seems to have to do with the fact that the definition of\n> DummyIndexOptions struct and the entries of relopt_parse_elt table\n> don't agree?\n> \n> But maybe this Assert isn't all that robust, so I'm happy to take it out.\n\nThis one should remain a string reloption, and what's stored in the\nstructure is an offset to get the string. See for example around\nRelationHasCascadedCheckOption before it got switched to an enum in\n773df88 regarding the way to get the value.\n--\nMichael", "msg_date": "Thu, 31 Oct 2019 16:49:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Thu, Oct 31, 2019 at 4:49 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Oct 31, 2019 at 04:38:55PM +0900, Amit Langote wrote:\n> > On Wed, Oct 30, 2019 at 12:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > This sentence sounds wrong, because the result structure doesn't\n> > contain values in text-array format.
Individual values in the struct\n> > would be in their native format (C bool for RELOPT_TYPE_BOOL, options,\n> > etc.).\n> >\n> > Maybe we don't need this sentence, because the first line already says\n> > we return a struct.\n>\n> Let's remove it then.\n\nRemoved in the attached.\n\n> This one should remain a string reloption, and what's stored in the\n> structure is an offset to get the string. See for example around\n> RelationHasCascadedCheckOption before it got switched to an enum in\n> 773df88 regarding the way to get the value.\n\nI see. I didn't know that about STRING options when writing that\nAssert, so it was indeed broken to begin with.\n\nv5 attached.\n\nThanks,\nAmit", "msg_date": "Thu, 31 Oct 2019 17:18:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Thu, Oct 31, 2019 at 05:18:46PM +0900, Amit Langote wrote:\n> On Thu, Oct 31, 2019 at 4:49 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Let's remove it then.\n> \n> Removed in the attached.\n\nThanks.
I exactly did the same thing on my local branch.\n--\nMichael", "msg_date": "Thu, 31 Oct 2019 17:55:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Thu, Oct 31, 2019 at 05:55:55PM +0900, Michael Paquier wrote:\n> Thanks. I exactly did the same thing on my local branch.\n\nHearing nothing more, I have done some adjustments to the patch and\ncommitted it. Please note that I have not switched the old interface\nto be static to reloptions.c as if you look closely at reloptions.h we\nallow much more flexibility around HANDLE_INT_RELOPTION to fill and\nparse the reloptions in custom AM. AM maintainers had better use the\nnew interface, but there could be a point for more customized error\nmessages.\n--\nMichael", "msg_date": "Tue, 5 Nov 2019 09:22:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Tue, Nov 5, 2019 at 9:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Oct 31, 2019 at 05:55:55PM +0900, Michael Paquier wrote:\n> > Thanks. I exactly did the same thing on my local branch.\n>\n> Hearing nothing more, I have done some adjustments to the patch and\n> committed it.\n\nThank you.\n\n> Please note that I have not switched the old interface\n> to be static to reloptions.c as if you look closely at reloptions.h we\n> allow much more flexibility around HANDLE_INT_RELOPTION to fill and\n> parse the reloptions in custom AM. AM maintainers had better use the\n> new interface, but there could be a point for more customized error\n> messages.\n\nI looked around but don't understand why these macros need to be\nexposed.
I read this comment:\n\n * Note that this is more or less the same that fillRelOptions does, so only\n * use this if you need to do something non-standard within some option's\n * code block.\n\nbut don't see how an AM author might be able to do something\nnon-standard with this interface.\n\nMaybe Alvaro knows this better.\n\nThanks,\nAmit", "msg_date": "Thu, 7 Nov 2019 10:49:38 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Thu, Nov 07, 2019 at 10:49:38AM +0900, Amit Langote wrote:\n> I looked around but don't understand why these macros need to be\n> exposed.
I read this comment:\n> \n> * Note that this is more or less the same that fillRelOptions does, so only\n> * use this if you need to do something non-standard within some option's\n> * code block.\n> \n> but don't see how an AM author might be able to do something\n> non-standard with this interface.\n> \n> Maybe Alvaro knows this better.\n\nPerhaps there is a point in cleaning up all that more, but I am not\nsure that it is worth potentially breaking other people's code.\n--\nMichael", "msg_date": "Thu, 7 Nov 2019 10:54:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Thu, Nov 7, 2019 at 10:54 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Nov 07, 2019 at 10:49:38AM +0900, Amit Langote wrote:\n> > I looked around but don't understand why these macros need to be\n> > exposed.
I read this comment:\n> >\n> > * Note that this is more or less the same that fillRelOptions does, so only\n> > * use this if you need to do something non-standard within some option's\n> > * code block.\n> >\n> > but don't see how an AM author might be able to do something\n> > non-standard with this interface.\n> >\n> > Maybe Alvaro knows this better.\n>\n> Perhaps there is a point in cleaning up all that more, but I am not\n> sure that it is worth potentially breaking other people's code.\n\nSure. Maybe, we could add a deprecation note for these more\nfine-grained APIs like my first patch did.\n\n+/*\n+ * Using parseRelOptions(), allocateReloptStruct(), and fillRelOptions()\n+ * directly is Deprecated; use build_reloptions() instead.\n+ */\n\nThanks,\nAmit", "msg_date": "Thu, 7 Nov 2019 10:58:51 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On 2019-Nov-07, Amit Langote wrote:\n\n> On Tue, Nov 5, 2019 at 9:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> > Please note that I have not switched the old interface\n> > to be static to reloptions.c as if you look closely at reloptions.h we\n> > allow much more flexibility around HANDLE_INT_RELOPTION to fill and\n> > parse the reloptions in custom AM. AM maintainers had better use the\n> > new interface, but there could be a point for more customized error\n> > messages.\n> \n> I looked around but don't understand why these macros need to be\n> exposed.
I read this comment:\n> \n> * Note that this is more or less the same that fillRelOptions does, so only\n> * use this if you need to do something non-standard within some option's\n> * code block.\n> \n> but don't see how an AM author might be able to do something\n> non-standard with this interface.\n> \n> Maybe Alvaro knows this better.\n\nI think the idea was that you could have external code doing things in a\ndifferent way for some reason, ie. not use a bytea varlena struct that\ncould be filled by fillRelOptions but instead ... do something else.\nThat's why those macros are exposed. Now, this idea doesn't seem to be\naged very well. Maybe exposing that stuff is unnecessary.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 12 Nov 2019 18:55:46 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "Hi Alvaro,\n\nOn Wed, Nov 13, 2019 at 6:55 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Nov-07, Amit Langote wrote:\n> > On Tue, Nov 5, 2019 at 9:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > Please note that I have not switched the old interface\n> > > to be static to reloptions.c as if you look closely at reloptions.h we\n> > > allow much more flexibility around HANDLE_INT_RELOPTION to fill and\n> > > parse the reloptions in custom AM. AM maintainers had better use the\n> > > new interface, but there could be a point for more customized error\n> > > messages.\n> >\n> > I looked around but don't understand why these macros need to be\n> > exposed. I read this comment:\n> >\n> > * Note that this is more or less the same that fillRelOptions does, so only\n> > * use this if you need to do something non-standard within some option's\n> > * code block.\n> >\n> > but don't see how an AM author might be able to do something\n> > non-standard with this interface.\n> >\n> > Maybe Alvaro knows this better.\n>\n> I think the idea was that you could have external code doing things in a\n> different way for some reason, ie. not use a bytea varlena struct that\n> could be filled by fillRelOptions but instead ... do something else.\n> That's why those macros are exposed. Now, this idea doesn't seem to be\n> aged very well. Maybe exposing that stuff is unnecessary.\n\nThanks for chiming in about that. I guess that means that we don't\nneed those macros, except GET_STRING_RELOPTION_LEN() that's used in\nallocateReloptStruct(), which can be moved to reloptions.c. Is that\ncorrect?\n\nThanks,\nAmit", "msg_date": "Wed, 13 Nov 2019 10:52:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Wed, Nov 13, 2019 at 10:52:52AM +0900, Amit Langote wrote:\n> Thanks for chiming in about that. I guess that means that we don't\n> need those macros, except GET_STRING_RELOPTION_LEN() that's used in\n> allocateReloptStruct(), which can be moved to reloptions.c. Is that\n> correct?\n\nI have been looking on the net to see if there are any traces of code\nusing those macros, but could not find any. The last trace of a macro\nuse is in 8ebe1e3, which just relies on GET_STRING_RELOPTION_LEN. So\nit looks rather convincing now to just remove this code. Attached is\na patch for that. There could be an argument for keeping\nGET_STRING_RELOPTION actually which is still useful to get a string\nvalue in an option set using the stored offset, and we have\nthe recently-added dummy_index_am in this category.
Any thoughts?\n--\nMichael", "msg_date": "Wed, 13 Nov 2019 14:18:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Wed, Nov 13, 2019 at 2:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Nov 13, 2019 at 10:52:52AM +0900, Amit Langote wrote:\n> > Thanks for chiming in about that. I guess that means that we don't\n> > need those macros, except GET_STRING_RELOPTION_LEN() that's used in\n> > allocateReloptStruct(), which can be moved to reloptions.c. Is that\n> > correct?\n>\n> I have been looking on the net to see if there are any traces of code\n> using those macros, but could not find any. The last trace of a macro\n> use is in 8ebe1e3, which just relies on GET_STRING_RELOPTION_LEN. So\n> it looks rather convincing now to just remove this code. Attached is\n> a patch for that.\n\nThank you.\n\n> There could be an argument for keeping\n> GET_STRING_RELOPTION actually which is still useful to get a string\n> value in an option set using the stored offset, and we have\n> the recently-added dummy_index_am in this category. Any thoughts?\n\nNot sure, maybe +0.5 on keeping GET_STRING_RELOPTION.\n\nThanks,\nAmit", "msg_date": "Wed, 13 Nov 2019 14:29:49 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Wed, Nov 13, 2019 at 02:29:49PM +0900, Amit Langote wrote:\n> On Wed, Nov 13, 2019 at 2:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> There could be an argument for keeping\n> >> GET_STRING_RELOPTION actually which is still useful to get a string\n> >> value in an option set using the stored offset, and we have\n> >> the recently-added dummy_index_am in this category.
Any thoughts?\n> \n> Not sure, maybe +0.5 on keeping GET_STRING_RELOPTION.\n\nThinking more about it, I would tend to keep this one around. I'll\nwait a couple of days before coming back to it.\n--\nMichael", "msg_date": "Wed, 13 Nov 2019 16:05:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "In a message of Wednesday, 13 November 2019 16:05:20 MSK, Michael Paquier \nwrote:\n\nGuys! Sorry for being away for so long. I did not have much opportunities to \npay my attention to postgres recently.\n\nThank you for introducing build_reloptions function. It is approximately the \ndirection I wanted to move afterwards myself. \n\nBut nevertheless, I am steady on my way, and I want to get rid of StdRdOptions \nbefore doing anything else myself. This structure is long outdated and is not \nsuitable for access method's options at all.\n\nI've changed the patch to use build_reloptions function and again propose it \nto the commitfest.\n\n\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Wed, 13 Nov 2019 16:26:53 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Wed, Nov 13, 2019 at 04:05:20PM +0900, Michael Paquier wrote:\n> On Wed, Nov 13, 2019 at 02:29:49PM +0900, Amit Langote wrote:\n> > On Wed, Nov 13, 2019 at 2:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> There could be an argument for keeping\n> >> GET_STRING_RELOPTION actually which is still useful to get a string\n> >> value in an option set using the stored offset, and we have\n> >> the recently-added dummy_index_am in this category.
Any thoughts?\n> > \n> > Not sure, maybe +0.5 on keeping GET_STRING_RELOPTION.\n> \n> Thinking more about it, I would tend to keep this one around. I'll\n> wait a couple of days before coming back to it.\n\nCommitted this one and kept GET_STRING_RELOPTION(). With the removal\nof those macros, it is possible to actually move a portion of the\nparsing definitions to reloptions.c for each type, but as we expose\nthe validation function string and the enum element definition that\nwould be more confusing IMO, so I have left that out.\n--\nMichael", "msg_date": "Thu, 14 Nov 2019 14:09:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Wed, Nov 13, 2019 at 04:26:53PM +0300, Nikolay Shaplov wrote:\n> I've changed the patch to use build_reloptions function and again propose it \n> to the commitfest.\n\nThanks for the new patch. I have not reviewed the patch in details,\nbut I have a small comment.\n\n> +#define SpGistGetFillFactor(relation) \\\n> +\t((relation)->rd_options ? \\\n> +\t\t((SpGistOptions *) (relation)->rd_options)->fillfactor : \\\n> +\t\tSPGIST_DEFAULT_FILLFACTOR)\n> +\nWouldn't it make sense to add assertions here to make sure that the\nrelkind is an index? You basically did that in commit 3967737.\n--\nMichael", "msg_date": "Thu, 14 Nov 2019 16:50:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "In a message of Thursday, 14 November 2019 16:50:18 MSK, Michael \nPaquier wrote:\n\n> > I've changed the patch to use build_reloptions function and again propose\n> > it to the commitfest.\n> \n> Thanks for the new patch. I have not reviewed the patch in details,\n> but I have a small comment.\n> \n> > +#define SpGistGetFillFactor(relation) \\\n> > +\t((relation)->rd_options ?
\\n> > +\t\t((SpGistOptions *) (relation)->rd_options)->fillfactor : \\\n> > +\t\tSPGIST_DEFAULT_FILLFACTOR)\n> > +\n> \n> Wouldn't it make sense to add assertions here to make sure that the\n> relkind is an index? You basically did that in commit 3967737.\n\nFor me there is no mush sense in it, as it does not prevent us from wrong type \ncasting. Indexes can use all kinds of structures for reloptions, and checking \nthat this is index, will not do much.\n\nDo you know way how to distinguish one index from another? If we can check in \nassertion this is index, and this index is spgist, then assertion will make \nsense for 100%. I just have no idea how to do it. As far as I can see it is \nnot possible now.\n\n\nAnother issue here is, that we should to it to all indexes, not only that have \nbeen using StdRdOptions, but to all indexes we have. (Damn code \ninconsistency). So I guess this should go as another patch to keep it step by \nstep improvements.\n\n\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Thu, 14 Nov 2019 11:20:25 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Thu, Nov 14, 2019 at 11:20:25AM +0300, Nikolay Shaplov wrote:\n> For me there is no mush sense in it, as it does not prevent us from wrong type \n> casting. Indexes can use all kinds of structures for reloptions, and checking \n> that this is index, will not do much.\n\nIt seems to me that if the plan is to have one option structure for\neach index AM, which has actually the advantage to reduce the bloat of\neach relcache entry currently relying on StdRdOptions, then we could\nhave those extra assertion checks in the same patch, because the new\nmacros are introduced.\n\n> Do you know way how to distinguish one index from another?
If we can check in \n> assertion this is index, and this index is spgist, then assertion will make \n> sense for 100%. I just have no idea how to do it. As far as I can see it is \n> not possible now.\n\nThere is rd_rel->relam. You can for example refer to pgstatindex.c\nwhich has AM-related checks to make sure that the correct index AM is\nbeing used.\n--\nMichael", "msg_date": "Fri, 15 Nov 2019 10:34:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Fri, Nov 15, 2019 at 10:34:55AM +0900, Michael Paquier wrote:\n> It seems to me that if the plan is to have one option structure for\n> each index AM, which has actually the advantage to reduce the bloat of\n> each relcache entry currently relying on StdRdOptions, then we could\n> have those extra assertion checks in the same patch, because the new\n> macros are introduced.\n\nI have looked at this patch, and did not like much having the\ncalculations of the page free space around, so I have moved that into\neach AM's dedicated header.\n\n> There is rd_rel->relam. You can for example refer to pgstatindex.c\n> which has AM-related checks to make sure that the correct index AM is\n> being used.\n\nWe can do something similar for GIN and BRIN on top of the rest, so\nupdated the patch with that. Nikolay, I would be fine to commit this\npatch as-is. Thanks for your patience on this stuff.\n--\nMichael", "msg_date": "Wed, 20 Nov 2019 16:44:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Wed, Nov 20, 2019 at 04:44:18PM +0900, Michael Paquier wrote:\n> We can do something similar for GIN and BRIN on top of the rest, so\n> updated the patch with that. Nikolay, I would be fine to commit this\n> patch as-is.
Thanks for your patience on this stuff.\n\nSo, I have reviewed the full thread, and this patch presents a couple\nof advantages:\n1) Making the code more uniform in terms of reloption build and\nhandling for index AMs by using more build_reloptions() with custom\nparsing tables.\n2) Saving a couple of bytes for each relcache entries when rd_options\nare built for Btree, Hash and SpGist (StdRdOptions gets a small cut).\nThe results of this shaving are not much but my take is that it is\nalways good to shave things if this does not cause extra code\nreadability churns (even if we have more places which waste more).\n\nAndres, Tomas, I can see that upthread you voiced concerns about the\nmemory part but not the consistency part. The patch has become much\nsmaller after the initial refactoring steps and it is easier to\nfollow. Any opinions or objections to share regarding the recent\nprogress done?\n--\nMichael", "msg_date": "Thu, 21 Nov 2019 14:22:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "In a message of Wednesday, 20 November 2019 16:44:18 MSK, Michael Paquier \nwrote:\n\n> > It seems to me that if the plan is to have one option structure for\n> > each index AM, which has actually the advantage to reduce the bloat of\n> > each relcache entry currently relying on StdRdOptions, then we could\n> > have those extra assertion checks in the same patch, because the new\n> > macros are introduced.\n> I have looked at this patch, and did not like much having the\n> calculations of the page free space around, so I have moved that into\n> each AM's dedicated header.\nSounds as a good idea. I try not touch such things following the rule \"is not \nbroken do not fix\" but this way it is definitely better. Thanks!\n\n> > There is rd_rel->relam.
You can for example refer to pgstatindex.c\n> > which has AM-related checks to make sure that the correct index AM is\n> > being used.\n> \n> We can do something similar for GIN and BRIN on top of the rest, so\n> updated the patch with that.\nThat is what I've been trying to tell speaking about code consistency. But ok, \nthis way is also good.\n\n> Nikolay, I would be fine to commit this patch as-is.\nYeah. I've read the patch. I like it, actually I started doing same thing \nmyself but you were faster. I have opportunity to pay attention to postgres \nonce a week these days...\n\nI like the patch, and also agree that it should be commited as is.\n\nThough I have a notion to think about.\n\nBRIN_AM_OID and friends are defined in catalog/pg_am_d.h so for core indexes \nwe can do relation->rd_rel->relam == BRIN_AM_OID check. But for contrib \nindexes we can't do such a thing.\nBloom index does not need such check as it uses options only when index is \ncreated. At that point you can not choose wrong relation. But if in future we \nwill have some contrib index that uses options when it some data is inserted \n(as it is done with fillfactor in core indexes) then index author will not be \nable to do such relam check. I would not call it a big problem, but it is \nsomething to think about, for sure...\n\n> Thanks for your patience on this stuff.\nThaks for joining this work, and sorry for late replies.
Now I quite rarely \nhave time for postgres :-(\n\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)", "msg_date": "Thu, 21 Nov 2019 21:39:53 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "Hello,\n\nOn Thu, Nov 21, 2019 at 2:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Any opinions or objections to share regarding the recent\n> progress done?\n\nThe latest patch looks good to me, except, maybe the comment of\nStdRdOptions should be updated:\n\n * StdRdOptions\n * Standard contents of rd_options for heaps and generic indexes.\n\nIIUC, StdRdOptions no longer applies to indexes, right?\n\nThanks,\nAmit", "msg_date": "Fri, 22 Nov 2019 11:01:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Thu, Nov 21, 2019 at 09:39:53PM +0300, Nikolay Shaplov wrote:\n> BRIN_AM_OID and friends are defined in catalog/pg_am_d.h so for core indexes \n> we can do relation->rd_rel->relam == BRIN_AM_OID check. But for contrib \n> indexes we can't do such a thing.\n> Bloom index does not need such check as it uses options only when index is \n> created. At that point you can not choose wrong relation. But if in future we \n> will have some contrib index that uses options when it some data is inserted \n> (as it is done with fillfactor in core indexes) then index author will not be \n> able to do such relam check.
I would not call it a big problem, but it is \n> something to think about, for sure...\n\nI don't think that you actually need that for custom index AMs anyway,\nas all code paths leading to the lookup of their reloption values is\nwithin the module they are defined in.\n\n> Thaks for joining this work, and sorry for late replies. Now I quite rarely \n> have time for postgres :-(\n\nWe all have a life, don't worry. I am glad to see you around.\n--\nMichael", "msg_date": "Fri, 22 Nov 2019 16:34:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Fri, Nov 22, 2019 at 11:01:54AM +0900, Amit Langote wrote:\n> The latest patch looks good to me, except, maybe the comment of\n> StdRdOptions should be updated:\n> \n> * StdRdOptions\n> * Standard contents of rd_options for heaps and generic indexes.\n> \n> IIUC, StdRdOptions no longer applies to indexes, right?\n\nNoted, thanks. I'll sit on this thing for a couple of days, and will\nlikely look at it again on Monday in order to commit it.\n--\nMichael", "msg_date": "Fri, 22 Nov 2019 16:44:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" }, { "msg_contents": "On Fri, Nov 22, 2019 at 04:44:50PM +0900, Michael Paquier wrote:\n> Noted, thanks. I'll sit on this thing for a couple of days, and will\n> likely look at it again on Monday in order to commit it.\n\nAnd done.\n--\nMichael", "msg_date": "Mon, 25 Nov 2019 09:43:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Do not use StdRdOptions in Access Methods" } ]
[ { "msg_contents": "Over in [1] we have a report of a postmaster shutdown that seems to\nhave occurred because some client logic was overaggressively spawning\nconnection requests, causing the postmaster's child-process arrays to\nbe temporarily full, and then some parallel query tried to launch a\nnew bgworker process. The postmaster's bgworker-spawning logic lacks\nany check for the arrays being full, so when AssignPostmasterChildSlot\nfailed to find a free slot, kaboom!\n\nThe attached proposed patch fixes this by making bgworker spawning\ninclude a canAcceptConnections() test. That's perhaps overkill, since\nwe really just need to check the CountChildren() total; but I judged\nthat going through that function and having it decide what to test or\nnot test was a better design than duplicating the CountChildren() test\nelsewhere.\n\nI'd first imagined also replacing the one-size-fits-all check\n\n if (CountChildren(BACKEND_TYPE_ALL) >= MaxLivePostmasterChildren())\n result = CAC_TOOMANY;\n\nwith something like\n\n switch (backend_type)\n {\n case BACKEND_TYPE_NORMAL:\n if (CountChildren(backend_type) >= 2 * MaxConnections)\n result = CAC_TOOMANY;\n break;\n case BACKEND_TYPE_AUTOVAC:\n if (CountChildren(backend_type) >= 2 * autovacuum_max_workers)\n result = CAC_TOOMANY;\n break;\n ...\n }\n\nso as to subdivide the pool of child-process slots and prevent client\nrequests from consuming slots meant for background processes.
But on\ncloser examination that's not really worth the trouble, because this\npool is already considerably bigger than MaxBackends; so even if we\nprevented a failure here we could still have bgworker startup failure\nlater on when it tries to acquire a PGPROC.\n\nBarring objections, I'll apply and back-patch this soon.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CADCf-WNZk_9680Q0YjfBzuiR0Oe8LzvDs2Ts3_tq6Tv1e8raQQ%40mail.gmail.com", "msg_date": "Sun, 06 Oct 2019 13:17:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "On Sun, Oct 6, 2019 at 1:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Over in [1] we have a report of a postmaster shutdown that seems to\n> have occurred because some client logic was overaggressively spawning\n> connection requests, causing the postmaster's child-process arrays to\n> be temporarily full, and then some parallel query tried to launch a\n> new bgworker process. The postmaster's bgworker-spawning logic lacks\n> any check for the arrays being full, so when AssignPostmasterChildSlot\n> failed to find a free slot, kaboom!\n>\n> The attached proposed patch fixes this by making bgworker spawning\n> include a canAcceptConnections() test.
That's perhaps overkill, since\n> we really just need to check the CountChildren() total; but I judged\n> that going through that function and having it decide what to test or\n> not test was a better design than duplicating the CountChildren() test\n> elsewhere.\n>\n> I'd first imagined also replacing the one-size-fits-all check\n>\n> if (CountChildren(BACKEND_TYPE_ALL) >= MaxLivePostmasterChildren())\n> result = CAC_TOOMANY;\n>\n> with something like\n>\n> switch (backend_type)\n> {\n> case BACKEND_TYPE_NORMAL:\n> if (CountChildren(backend_type) >= 2 * MaxConnections)\n> result = CAC_TOOMANY;\n> break;\n> case BACKEND_TYPE_AUTOVAC:\n> if (CountChildren(backend_type) >= 2 * autovacuum_max_workers)\n> result = CAC_TOOMANY;\n> break;\n> ...\n> }\n>\n> so as to subdivide the pool of child-process slots and prevent client\n> requests from consuming slots meant for background processes. But on\n> closer examination that's not really worth the trouble, because this\n> pool is already considerably bigger than MaxBackends; so even if we\n> prevented a failure here we could still have bgworker startup failure\n> later on when it tries to acquire a PGPROC.\n>\n> Barring objections, I'll apply and back-patch this soon.\n\nI think it used to work this way -- not sure if it was ever committed\nthis way, but it at least did during development -- and we ripped it\nout because somebody (Magnus?) pointed out that if you got close to\nthe connection limit, you could see parallel queries start failing,\nand that would suck. 
Falling back to non-parallel seems more OK in\nthat situation than actually failing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 7 Oct 2019 15:55:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, Oct 6, 2019 at 1:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The attached proposed patch fixes this by making bgworker spawning\n>> include a canAcceptConnections() test.\n\n> I think it used to work this way -- not sure if it was ever committed\n> this way, but it at least did during development -- and we ripped it\n> out because somebody (Magnus?) pointed out that if you got close to\n> the connection limit, you could see parallel queries start failing,\n> and that would suck. Falling back to non-parallel seems more OK in\n> that situation than actually failing.\n\nI'm not following your point? Whatever you might think the appropriate\nresponse is, I'm pretty sure \"elog(FATAL) out of the postmaster\" is not\nit. Moreover, we have to --- and already do, I trust --- deal with\nother resource-exhaustion errors in exactly the same code path, notably\nfork(2) failure which we simply can't predict or prevent. Doesn't the\nparallel query logic already deal sanely with failure to obtain as many\nworkers as it wanted?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Oct 2019 16:03:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "On Mon, Oct 7, 2019 at 4:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm not following your point? Whatever you might think the appropriate\n> response is, I'm pretty sure \"elog(FATAL) out of the postmaster\" is not\n> it. 
Moreover, we have to --- and already do, I trust --- deal with\n> other resource-exhaustion errors in exactly the same code path, notably\n> fork(2) failure which we simply can't predict or prevent. Doesn't the\n> parallel query logic already deal sanely with failure to obtain as many\n> workers as it wanted?\n\nIf we fail to obtain workers because there are not adequate workers\nslots available, parallel query deals with that smoothly. But, once\nwe have a slot, any further failure will trigger the parallel query to\nERROR out. For the case where we get a slot but can't start the\nworker process, see WaitForParallelWorkersToFinish and/or\nWaitForParallelWorkersToAttach and comments therein. Once we're\nattached, any error messages thrown by the worker are propagated back\nto the master; see HandleParallelMessages and pq_redirect_to_shm_mq.\n\nNow you could argue that the master ought to selectively ignore\ncertain kinds of errors and just continue on, while rethrowing others,\nsay based on the errcode(). Such design ideas have been roundly panned\nin other contexts, though, so I'm not sure it would be a great idea to\ndo it here either. But in any case, it's not how the current system\nbehaves, or was designed to behave.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 9 Oct 2019 10:10:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Oct 7, 2019 at 4:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... Moreover, we have to --- and already do, I trust --- deal with\n>> other resource-exhaustion errors in exactly the same code path, notably\n>> fork(2) failure which we simply can't predict or prevent. 
Doesn't the\n> parallel query logic already deal sanely with failure to obtain as many\n> workers as it wanted?\n\nIf we fail to obtain workers because there are not adequate worker\nslots available, parallel query deals with that smoothly. But, once\nwe have a slot, any further failure will trigger the parallel query to\nERROR out. For the case where we get a slot but can't start the\nworker process, see WaitForParallelWorkersToFinish and/or\nWaitForParallelWorkersToAttach and comments therein. Once we're\nattached, any error messages thrown by the worker are propagated back\nto the master; see HandleParallelMessages and pq_redirect_to_shm_mq.\n\nNow you could argue that the master ought to selectively ignore\ncertain kinds of errors and just continue on, while rethrowing others,\nsay based on the errcode(). Such design ideas have been roundly panned\nin other contexts, though, so I'm not sure it would be a great idea to\ndo it here either. But in any case, it's not how the current system\nbehaves, or was designed to behave.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 9 Oct 2019 10:10:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Oct 7, 2019 at 4:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... Moreover, we have to --- and already do, I trust --- deal with\n>> other resource-exhaustion errors in exactly the same code path, notably\n>> fork(2) failure which we simply can't predict or prevent. 
I expect many people would rather have the\nquery fail and free up slots in the system process table than consume\nprecisely all of them and then try to execute the query at a\nslower-than-expected rate.\n\nAnyway, here's some previous discussion on this topic for your consideration:\n\nhttps://www.postgresql.org/message-id/flat/CAKJS1f_6H2Gh3QyORyRP%2BG3YB3gZiNms_8QdtO5gvitfY5N9ig%40mail.gmail.com\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 9 Oct 2019 12:29:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Oct 9, 2019 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We could improve on matters so far as the postmaster's child-process\n>> arrays are concerned, by defining separate slot \"pools\" for the different\n>> types of child processes. But I don't see much point if the code is\n>> not prepared to recover from a fork() failure --- and if it is, that\n>> would a fortiori deal with out-of-child-slots as well.\n\n> I would say rather that if fork() is failing on your system, you have\n> a not very stable system. The fact that parallel query is going to\n> fail is sad, but not as sad as the fact that connecting to the\n> database is also going to fail, and that logging into the system to\n> try to fix the problem may well fail as well.\n\nTrue, it's not a situation you especially want to be in. However,\nI've lost count of the number of times that I've heard someone talk\nabout how their system was overstressed to the point that everything\nelse was failing, but Postgres kept chugging along. 
That's a good\nreputation to have and we shouldn't just walk away from it.\n\n> Code that tries to make\n> parallel query cope with this situation without an error wouldn't\n> often be tested, so it might be buggy, and it wouldn't necessarily be\n> a benefit if it did work. I expect many people would rather have the\n> query fail and free up slots in the system process table than consume\n> precisely all of them and then try to execute the query at a\n> slower-than-expected rate.\n\nI find that argument to be utter bunkum. The parallel query code is\n*already* designed to silently degrade performance when its primary\nresource limit (shared bgworker slots) is exhausted. How can it be\nall right to do that but not all right to cope with fork failure\nsimilarly? If we think running up against the kernel limits is a\ncase that we can roll over and die on, why don't we rip out the\nvirtual-FD stuff in fd.c?\n\nAs for \"might be buggy\", if we ripped out every part of Postgres\nthat's under-tested, I'm afraid there might not be much left.\nIn any case, a sane design for this would make as much as possible\nof the code handle \"out of shared bgworker slots\" just the same as\nresource failures later on, so that there wouldn't be that big a gap\nin coverage.\n\nHaving said all that, I made a patch that causes the postmaster\nto reserve separate child-process-array slots for autovac workers\nand bgworkers, as per attached, so that excessive connection\nrequests can't DOS those subsystems. 
But I'm not sure that it's\nworth the complication; it wouldn't be necessary if the parallel\nquery launch code were more robust.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 09 Oct 2019 18:26:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "On 2019-Oct-09, Tom Lane wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Oct 9, 2019 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> We could improve on matters so far as the postmaster's child-process\n> >> arrays are concerned, by defining separate slot \"pools\" for the different\n> >> types of child processes. But I don't see much point if the code is\n> >> not prepared to recover from a fork() failure --- and if it is, that\n> >> would a fortiori deal with out-of-child-slots as well.\n> \n> > I would say rather that if fork() is failing on your system, you have\n> > a not very stable system. The fact that parallel query is going to\n> > fail is sad, but not as sad as the fact that connecting to the\n> > database is also going to fail, and that logging into the system to\n> > try to fix the problem may well fail as well.\n> \n> True, it's not a situation you especially want to be in. However,\n> I've lost count of the number of times that I've heard someone talk\n> about how their system was overstressed to the point that everything\n> else was failing, but Postgres kept chugging along. That's a good\n> reputation to have and we shouldn't just walk away from it.\n\nI agree with this point in principle. Everything else (queries,\ncheckpointing) can fail, but it's critical that postmaster continues to\nrun -- that way, once the high load episode is over, connections can be\nre-established as needed, auxiliary processes can be re-launched, and\nthe system can be again working normally. If postmaster dies, all bets\nare off. 
Also: an idle postmaster is not using any resources; on its\nown, killing it or it dying would not free any useful resources for the\nsystem load to be back to low again.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 4 Nov 2019 12:42:19 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "On Mon, Nov 4, 2019 at 10:42 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > True, it's not a situation you especially want to be in. However,\n> > I've lost count of the number of times that I've heard someone talk\n> > about how their system was overstressed to the point that everything\n> > else was failing, but Postgres kept chugging along. That's a good\n> > reputation to have and we shouldn't just walk away from it.\n>\n> I agree with this point in principle. Everything else (queries,\n> checkpointing) can fail, but it's critical that postmaster continues to\n> run -- that way, once the high load episode is over, connections can be\n> re-established as needed, auxiliary processes can be re-launched, and\n> the system can be again working normally. If postmaster dies, all bets\n> are off. Also: an idle postmaster is not using any resources; on its\n> own, killing it or it dying would not free any useful resources for the\n> system load to be back to low again.\n\nSure, I'm not arguing that the postmaster should blow up and die.\n\nI was, however, arguing that if the postmaster fails to launch workers\nfor a parallel query due to process table exhaustion, it's OK for\n*that query* to error out.\n\nTom finds that argument to be \"utter bunkum,\" but I don't agree. I\nthink there might also be some implementation complexity there that is\nmore than meets the eye. 
If a process trying to register workers finds\nout that no worker slots are available, it discovers this at the time\nit tries to perform the registration. But fork() failure happens later\nand in a different process. The original process just finds out that\nthe worker is \"stopped,\" not whether or not it ever got started in the\nfirst place. We certainly can't ignore a worker that managed to start\nand then bombed out, because it might've already, for example, claimed\na block from a Parallel Seq Scan and not yet sent back the\ncorresponding tuples. We could ignore a worker that never started at\nall, due to EAGAIN or whatever else, but the original process that\nregistered the worker has no way of finding this out.\n\nNow you might think we could just fix that by having the postmaster\nrecord something in the slot, but that doesn't work either, because\nthe slot could get reused before the original process checks the\nstatus information. The fact that the slot has been reused is\nsufficient evidence that the worker was unregistered, which means it\neither stopped or we gave up on starting it, but it doesn't tell us\nwhich one. To be able to tell that, we'd have to have a mechanism to\nprevent slots from getting reused until any necessary exit status\ninformation had been read, sort of like the OS-level zombie process\nmechanism (which we all love, I guess, and therefore definitely want\nto reinvent...?). The postmaster logic would need to be made more\ncomplicated, so that zombies couldn't accumulate: if a process asked\nfor status notifications, but then died, any zombies waiting for it\nwould need to be cleared. 
And you'd also have to make sure that a\nprocess which didn't die was guaranteed to read the status from the\nzombie to clear it, and that it did so in a reasonably timely fashion,\nwhich is currently in no way guaranteed and does not appear at all\nstraightforward to guarantee.\n\nAnd even if you solved for all of that, I think you might still find\nthat it breaks some parallel query (or parallel create index) code\nthat expects the number of workers to change at registration time, but\nnot afterwards. So, that code would all need to be adjusted.\n\nIn short, I think Tom wants a pony. But that does not mean we should\nnot fix this bug.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 4 Nov 2019 12:14:53 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "On 2019-Nov-04, Robert Haas wrote:\n\n> On Mon, Nov 4, 2019 at 10:42 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > True, it's not a situation you especially want to be in. However,\n> > > I've lost count of the number of times that I've heard someone talk\n> > > about how their system was overstressed to the point that everything\n> > > else was failing, but Postgres kept chugging along. That's a good\n> > > reputation to have and we shouldn't just walk away from it.\n> >\n> > I agree with this point in principle. Everything else (queries,\n> > checkpointing) can fail, but it's critical that postmaster continues to\n> > run [...]\n> \n> Sure, I'm not arguing that the postmaster should blow up and die.\n\nI must have misinterpreted you, then. 
But then I also misinterpreted\nTom, because I thought it was this stability problem that was \"utter\nbunkum\".\n\n> I was, however, arguing that if the postmaster fails to launch workers\n> for a parallel query due to process table exhaustion, it's OK for\n> *that query* to error out.\n\nThat position makes sense to me. It would be nice [..ponies..] for the\nquery to run regardless, but if it doesn't, it's not such a big deal;\nthe query could have equally failed to run in a single process anyway.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 4 Nov 2019 14:29:50 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Nov-04, Robert Haas wrote:\n>> On Mon, Nov 4, 2019 at 10:42 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>> I agree with this point in principle. Everything else (queries,\n>>> checkpointing) can fail, but it's critical that postmaster continues to\n>>> run [...]\n\n>> Sure, I'm not arguing that the postmaster should blow up and die.\n\n> I must have misinterpreted you, then. But then I also misinterpreted\n> Tom, because I thought it was this stability problem that was \"utter\n> bunkum\".\n\nI fixed the postmaster crash problem in commit 3887e9455. The residual\nissue that I think is entirely bogus is that the parallel query start\ncode will silently continue without workers if it hits our internal\nresource limit of how many bgworker ProcArray slots there are, but\nnot do the same when it hits the external resource limit of the\nkernel refusing to fork(). I grant that there might be implementation\nreasons for that being difficult, but I reject Robert's apparent\nopinion that it's somehow desirable to behave that way. 
As things\nstand, we have all of the disadvantages that you can't predict how\nmany workers you'll get, and none of the advantages of robustness\nin the face of system resource exhaustion.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Nov 2019 13:07:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "Hi,\n\nOn 2019-10-09 12:29:18 -0400, Robert Haas wrote:\n> I would say rather that if fork() is failing on your system, you have\n> a not very stable system.\n\nI don't think that's really true, fwiw. It's often a good idea to turn\non strict memory overcommit accounting, and with that set, it's actually\nfairly common to see fork() fail with ENOMEM, even if there's\npractically a reasonable amount of resources. Especially with larger\nshared buffers and without huge pages, the amount of memory needed for a\npostmaster child in the worst case is not insubstantial.\n\n\n> The fact that parallel query is going to fail is sad, but not as sad\n> as the fact that connecting to the database is also going to fail, and\n> that logging into the system to try to fix the problem may well fail\n> as well.\n\nWell, but parallel query also has the potential to much more quickly\nlead to a lot of new backends being started than you'd get new\nconnections on an analytics DB.\n\n\n> Code that tries to make parallel query cope with this situation\n> without an error wouldn't often be tested, so it might be buggy, and\n> it wouldn't necessarily be a benefit if it did work. 
I expect many\n> people would rather have the query fail and free up slots in the\n> system process table than consume precisely all of them and then try\n> to execute the query at a\n> slower-than-expected rate.\n\nI concede that you have a point here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Nov 2019 10:53:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "Hi,\n\nOn 2019-11-04 12:14:53 -0500, Robert Haas wrote:\n> If a process trying to register workers finds out that no worker slots\n> are available, it discovers this at the time it tries to perform the\n> registration. But fork() failure happens later and in a different\n> process. The original process just finds out that the worker is\n> \"stopped,\" not whether or not it ever got started in the first\n> place.\n\nIs that really true? In the case where it started and failed we expect\nthe error queue to have been attached to, and there to be either an\nerror 'E' or an 'X' response (cf HandleParallelMessage()). It doesn't\nstrike me as very complicated to keep track of whether any worker has\nsent an 'E' or not, no? I don't think we really need the\n\nFunny (?) anecdote: I learned about this part of the system recently,\nafter I had installed some crash handler inside postgres. Turns out that\nthat diverted, as a side-effect, SIGUSR1 to its own signal handler. All\ntests in the main regression tests passed, except for ones getting stuck\nwaiting for WaitForParallelWorkersToFinish(), which could be fixed by\ndisabling parallelism aggressively. Took me like two hours to\ndebug... 
Also, a bit sad that parallel query is the only visible\nfailure (in the main tests) of breaking the sigusr1 infrastructure...\n\n\n> We certainly can't ignore a worker that managed to start and\n> then bombed out, because it might've already, for example, claimed a\n> block from a Parallel Seq Scan and not yet sent back the corresponding\n> tuples. We could ignore a worker that never started at all, due to\n> EAGAIN or whatever else, but the original process that registered the\n> worker has no way of finding this out.\n\nSure, but in that case we'd have gotten either an error back from the\nworker, or postmaster would have PANIC restarted everyone due to an\nunhandled error in the worker, no?\n\n\n> And even if you solved for all of that, I think you might still find\n> that it breaks some parallel query (or parallel create index) code\n> that expects the number of workers to change at registration time, but\n> not afterwards. So, that could would all need to be adjusted.\n\nFair enough. Although I think practically nearly everything has to be\nready to handle workers just being slow to start up anyway, no? There's\nplenty of cases where we just finish before all workers are getting\naround to do work.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Nov 2019 11:04:40 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-10-09 12:29:18 -0400, Robert Haas wrote:\n> > I would say rather that if fork() is failing on your system, you have\n> > a not very stable system.\n> \n> I don't think that's really true, fwiw. It's often a good idea to turn\n> on strict memory overcommit accounting, and with that set, it's actually\n> fairly common to see fork() fail with ENOMEM, even if there's\n> practically a reasonable amount of resources. 
Especially with larger\n> shared buffers and without huge pages, the amount of memory needed for a\n> postmaster child in the worst case is not insubstantial.\n\nI've not followed this thread very closely, but I agree with Andres here\nwrt fork() failing with ENOMEM in the field and not because the system\nisn't stable.\n\nThanks,\n\nStephen", "msg_date": "Mon, 4 Nov 2019 14:09:45 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "On Mon, Nov 4, 2019 at 2:04 PM Andres Freund <andres@anarazel.de> wrote:\n> Is that really true? In the case where it started and failed we except\n> the error queue to have been attached to, and there to be either an\n> error 'E' or a 'X' response (cf HandleParallelMessage()). It doesn't\n> strike me as very complicated to keep track of whether any worker has\n> sent an 'E' or not, no? I don't think we really need the\n\nOne of us is confused here, because I don't think that helps. Consider\nthree background workers Alice, Bob, and Charlie. Alice fails to\nlaunch because fork() fails. Bob launches but then exits unexpectedly.\nCharlie has no difficulties and carries out his assigned duties.\n\nNow, the system you are proposing will say that Charlie is OK but\nAlice and Bob are a problem. However, that's the way it already works.\nWhat Tom wants is to distinguish Alice from Bob, and your proposal is\nof no help at all with that problem, so far as I can see.\n\n> > We certainly can't ignore a worker that managed to start and\n> > then bombed out, because it might've already, for example, claimed a\n> > block from a Parallel Seq Scan and not yet sent back the corresponding\n> > tuples. 
We could ignore a worker that never started at all, due to\n> > EAGAIN or whatever else, but the original process that registered the\n> > worker has no way of finding this out.\n>\n> Sure, but in that case we'd have gotten either an error back from the\n> worker, or postmaster wouldhave PANIC restarted everyone due to an\n> unhandled error in the worker, no?\n\nAn unhandled ERROR in the worker is not a PANIC. I think it's just an\nERROR that ends up being fatal in effect, but even if it's actually\npromoted to FATAL, it's not a PANIC.\n\nIt is *generally* true that if a worker hits an ERROR, the error will\nbe propagated back to the leader, but it is not an invariable rule.\nOne pretty common way that it fails to happen - common in the sense\nthat it comes up during development, not common on production systems\nI hope - is if a worker dies before reaching the call to\npq_redirect_to_shm_mq(). Before that, there's no possibility of\ncommunicating anything. Granted, at that point we shouldn't yet have\ndone any work that might mess up the query results. Similarly, once\nwe reach that point, we are dependent on a certain amount of good\nbehavior for things to work as expected; yeah, any code that calls\nproc_exit() is supposed to signal an ERROR or FATAL first, but what if\nit doesn't? Granted, in that case we'd probably fail to send an 'X'\nmessage, too, so the leader would still have a chance to realize\nsomething is wrong.\n\nI guess I agree with you to this extent: I made a policy decision that\nif a worker that was successfully registered fails to show up, that's\nan ERROR. It\nwould be possible to adopt the opposite policy, namely that if a\nworker doesn't show up, that's an \"oh well.\" You'd have to be very\ncertain that the worker wasn't going to show up later, though. 
For\ninstance, suppose you check all of the shared memory queues used for\nreturning tuples and find that every queue is either in a state where\n(1) nobody's ever attached to it or (2) somebody attached and then\ndetached. This is not good enough, because it's possible that after\nyou checked queue #1, and found it in the former state, someone\nattached and read a block, which caused queue #2 to enter the latter\nstate before you got around to checking it. If you decide that it's OK\nto conclude that we're done at this point, you'll never return the\ntuples that are pushed through queue #1.\n\nBut, assuming you nailed the door shut so that such problems could not\noccur, I think we could make a decision to ignore workers that failed\nbefore doing anything interesting. Whether that would be a good policy\ndecision is pretty questionable in my mind. 
This decision was made consciously by me because I thought it\ngave us the best chance of having a system that would be reliable and\nhave satisfying behavior for users. Sounds like not everybody agrees,\nand that's fine, but I just want to get it out there that this wasn't\naccidental on my part.\n\n> > And even if you solved for all of that, I think you might still find\n> > that it breaks some parallel query (or parallel create index) code\n> > that expects the number of workers to change at registration time, but\n> > not afterwards. So, that could would all need to be adjusted.\n>\n> Fair enough. Although I think practically nearly everything has to be\n> ready to handle workers just being slow to start up anyway, no? There's\n> plenty cases where we just finish before all workers are getting around\n> to do work.\n\nBecause of the shutdown race mentioned above, we generally have to\nwait for workers to exit before we can shut down parallelism. See\ncommit\n2badb5afb89cd569500ef7c3b23c7a9d11718f2f (whose commit message also\ndocuments some of the behaviors now in question). So we tolerate slow\nstartup in that it doesn't prevent us from getting started on query\nexecution, but not to the extent that we can finish query execution\nwithout knowing definitively that every worker is either already gone\nor will never be showing up.\n\n(Is it possible to do better there? Perhaps. 
If we could somehow throw\nup a brick wall to prevent new workers from doing anything that would\ncause problems, then verify that every worker which got past the brick\nwall has exited cleanly, then we could ignore the risk of more workers\nshowing up later, because they'd hit the brick wall before causing any\ntrouble.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 4 Nov 2019 14:58:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" }, { "msg_contents": "Hi,\n\nOn 2019-11-04 14:58:20 -0500, Robert Haas wrote:\n> On Mon, Nov 4, 2019 at 2:04 PM Andres Freund <andres@anarazel.de> wrote:\n> > Is that really true? In the case where it started and failed we expect\n> > the error queue to have been attached to, and there to be either an\n> > error 'E' or a 'X' response (cf HandleParallelMessage()). It doesn't\n> > strike me as very complicated to keep track of whether any worker has\n> > sent an 'E' or not, no? I don't think we really need the\n> \n> One of us is confused here, because I don't think that helps. Consider\n> three background workers Alice, Bob, and Charlie. Alice fails to\n> launch because fork() fails. Bob launches but then exits unexpectedly.\n> Charlie has no difficulties and carries out his assigned duties.\n> \n> Now, the system you are proposing will say that Charlie is OK but\n> Alice and Bob are a problem. However, that's the way it already works.\n> What Tom wants is to distinguish Alice from Bob, and your proposal is\n> of no help at all with that problem, so far as I can see.\n\nI don't see how what I'm saying treats Alice and Bob the same. 
What I'm\nsaying is that if a worker has been started and shut down, without\nsignalling an error via the error queue, and without an exit code that\ncauses postmaster to worry, then we can just ignore the worker for the\npurpose of determining whether the query succeeded. Without a meaningful\nloss in reliability. And we can detect such cases easily, we already do,\nwe just have to remove an ereport(), and document things.\n\n\n> > > We certainly can't ignore a worker that managed to start and\n> > > then bombed out, because it might've already, for example, claimed a\n> > > block from a Parallel Seq Scan and not yet sent back the corresponding\n> > > tuples. We could ignore a worker that never started at all, due to\n> > > EAGAIN or whatever else, but the original process that registered the\n> > > worker has no way of finding this out.\n> >\n> > Sure, but in that case we'd have gotten either an error back from the\n> > worker, or postmaster would have PANIC restarted everyone due to an\n> > unhandled error in the worker, no?\n> \n> An unhandled ERROR in the worker is not a PANIC. I think it's just an\n> ERROR that ends up being fatal in effect, but even if it's actually\n> promoted to FATAL, it's not a PANIC.\n\nIf it's an _exit without going through the PG machinery, it'll\neventually be PANIC, albeit with a slight delay. And once we're actually\nexecuting the parallel query, we better have error reporting set up for\nparallel queries.\n\n\n> It is *generally* true that if a worker hits an ERROR, the error will\n> be propagated back to the leader, but it is not an invariable rule.\n> One pretty common way that it fails to happen - common in the sense\n> that it comes up during development, not common on production systems\n> I hope - is if a worker dies before reaching the call to\n> pq_redirect_to_shm_mq(). Before that, there's no possibility of\n> communicating anything. 
Granted, at that point we shouldn't yet have\n> done any work that might mess up the query results.\n\nRight.\n\n\n> Similarly, once we reach that point, we are dependent on a certain amount of good\n> behavior for things to work as expected; yeah, any code that calls\n> proc_exit() is supposed to signal an ERROR or FATAL first, but what if\n> it doesn't? Granted, in that case we'd probably fail to send an 'X'\n> message, too, so the leader would still have a chance to realize\n> something is wrong.\n\nI mean, in that case so many more things are screwed up, I don't buy\nthat it's worth pessimizing ENOMEM handling for this. And if you're\nreally concerned, we could add a before_shmem_exit hook or such that makes\nextra double sure that we've signalled something.\n\n\n> I guess I agree with you to this extent: I made a policy decision that\n> if a worker that is successfully registered fails to show up, that's an ERROR. It\n> would be possible to adopt the opposite policy, namely that if a\n> worker doesn't show up, that's an \"oh well.\" You'd have to be very\n> certain that the worker wasn't going to show up later, though. For\n> instance, suppose you check all of the shared memory queues used for\n> returning tuples and find that every queue is either in a state where\n> (1) nobody's ever attached to it or (2) somebody attached and then\n> detached. This is not good enough, because it's possible that after\n> you checked queue #1, and found it in the former state, someone\n> attached and read a block, which caused queue #2 to enter the latter\n> state before you got around to checking it. If you decide that it's OK\n> to decide that we're done at this point, you'll never return the\n> tuples that are pushed through queue #1.\n\nThat's why the code *already* waits for workers to attach, or for the\nslot to be marked unused/invalid/reused. 
I don't see how that applies to\nnot explicitly erroring out when we know that the worker *failed* to\nstart:\n\nvoid\nWaitForParallelWorkersToFinish(ParallelContext *pcxt)\n...\n\n\n\t\t\t/*\n\t\t\t * We didn't detect any living workers, but not all workers are\n\t\t\t * known to have exited cleanly. Either not all workers have\n\t\t\t * launched yet, or maybe some of them failed to start or\n\t\t\t * terminated abnormally.\n\t\t\t */\n\t\t\tfor (i = 0; i < pcxt->nworkers_launched; ++i)\n\t\t\t{\n\t\t\t\tpid_t\t\tpid;\n\t\t\t\tshm_mq\t *mq;\n\n\t\t\t\t/*\n\t\t\t\t * If the worker is BGWH_NOT_YET_STARTED or BGWH_STARTED, we\n\t\t\t\t * should just keep waiting. If it is BGWH_STOPPED, then\n\t\t\t\t * further investigation is needed.\n\t\t\t\t */\n\t\t\t\tif (pcxt->worker[i].error_mqh == NULL ||\n\t\t\t\t\tpcxt->worker[i].bgwhandle == NULL ||\n\t\t\t\t\tGetBackgroundWorkerPid(pcxt->worker[i].bgwhandle,\n\t\t\t\t\t\t\t\t\t\t &pid) != BGWH_STOPPED)\n\t\t\t\t\tcontinue;\n\n\t\t\t\t/*\n\t\t\t\t * Check whether the worker ended up stopped without ever\n\t\t\t\t * attaching to the error queue. If so, the postmaster was\n\t\t\t\t * unable to fork the worker or it exited without initializing\n\t\t\t\t * properly. 
We must throw an error, since the caller may\n\t\t\t\t * have been expecting the worker to do some work before\n\t\t\t\t * exiting.\n\t\t\t\t */\n\t\t\t\tmq = shm_mq_get_queue(pcxt->worker[i].error_mqh);\n\t\t\t\tif (shm_mq_get_sender(mq) == NULL)\n\t\t\t\t\tereport(ERROR,\n\t\t\t\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n\t\t\t\t\t\t\t errmsg(\"parallel worker failed to initialize\"),\n\t\t\t\t\t\t\t errhint(\"More details may be available in the server log.\")));\n\n\t\t\t\t/*\n\t\t\t\t * The worker is stopped, but is attached to the error queue.\n\t\t\t\t * Unless there's a bug somewhere, this will only happen when\n\t\t\t\t * the worker writes messages and terminates after the\n\t\t\t\t * CHECK_FOR_INTERRUPTS() near the top of this function and\n\t\t\t\t * before the call to GetBackgroundWorkerPid(). In that case,\n\t\t\t\t * or latch should have been set as well and the right things\n\t\t\t\t * will happen on the next pass through the loop.\n\t\t\t\t */\n\t\t\t}\n\t\t}\n\n\n\n> But, assuming you nailed the door shut so that such problems could not\n> occur, I think we could make a decision to ignore works that failed\n> before doing anything interesting. Whether that would be a good policy\n> decision is pretty questionable in my mind. In addition to what I\n> mentioned before, I think there's a serious danger that errors that\n> users would have really wanted to know about - or developers would\n> really want to have known about - would get ignored. You could have\n> some horrible problem that's making your workers fail to launch, and\n> the system would just carry on as if everything were fine, except with\n> bad query plans. I realize that you and others might say \"oh, well,\n> monitor your logs, then,\" but I think there is certainly some value in\n> an ordinary user being able to know that things didn't go well without\n> having to look into the PostgreSQL log for errors. 
Now, maybe you\n> think that's not enough value to justify having it work the way it\n> does today, and I certainly respect that, but I don't view it that way\n> myself.\n\nYea, this is somewhat of a pickle. I'm inclined to think that the\nproblem of unnecessarily ERRORing out queries is worse than the disease\n(causing unnecessary failures to make debugging of some not all that\nlikely errors easier is somewhat a severe measure).\n\n\nI think I mentioned this to you on chat, but I think the in-core use of\nbgworkers, at least as they currently are designed, is/was an\narchitecturally bad idea. There's numerous problems flowing from\nthat, with error handling being one big recurring theme.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Nov 2019 12:41:16 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missed check for too-many-children in bgworker spawning" } ]
[ { "msg_contents": "This is a \"heads up\" for others upgrading to v12. I found a solution for our\nuse case, but it'd be easy to miss this, even if you read the release notes.\n\nI saw this and updated our scripts with pg_restore -f-\nhttps://www.postgresql.org/docs/12/release-12.html\n|In pg_restore, require specification of -f - to send the dump contents to standard output (Euler Taveira)\n|Previously, this happened by default if no destination was specified, but that was deemed to be unfriendly.\n\nWhat I didn't realize at first is that -f- has no special meaning in v11 - it\njust writes a file called ./- And it's considered untenable to change\nbehavior of v11.\n\nIn our use, that was being piped to sed, which then saw nothing on its stdin\nand just exits.. I changed our script to use pg_restore -f /dev/stdout, which\nseems to be portable across postgres versions for the OS distribution we're\nrunning. Unfortunately, I can't think of anything portable across *OS* or\nuseful to include in documentation. In the worst case, someone might need to\ncall pg_restore differently based on its version.\n\nJustin\n\n\n", "msg_date": "Sun, 6 Oct 2019 14:08:39 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "v12 and pg_restore -f-" }, { "msg_contents": "[ redirecting to -hackers ]\n\nJustin Pryzby <pryzby@telsasoft.com> writes:\n> I saw this and updated our scripts with pg_restore -f-\n\n> https://www.postgresql.org/docs/12/release-12.html\n> |In pg_restore, require specification of -f - to send the dump contents to standard output (Euler Taveira)\n> |Previously, this happened by default if no destination was specified, but that was deemed to be unfriendly.\n\n> What I didn't realize at first is that -f- has no special meaning in v11 - it\n> just writes a file called ./-\n\nUgh. I didn't realize that either, or I would have made a stink about\nthis patch. 
Reducing the risk of getting a dump spewed at you is\ncompletely not worth the cost of making it impossible to have\ncross-version-compatible scripting of pg_restore.\n\nPerhaps we could change the back branches so that they interpret \"-f -\"\nas \"write to stdout\", but without enforcing that you use that syntax.\nNobody is going to wish that to mean \"write to a file named '-'\", so\nI don't think this would be an unacceptable change.\n\nAlternatively, we could revert the v12 behavior change. On the whole\nthat might be the wiser course. I do not think the costs and benefits\nof this change were all that carefully thought through.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Oct 2019 16:43:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> Perhaps we could change the back branches so that they interpret\n Tom> \"-f -\" as \"write to stdout\", but without enforcing that you use\n Tom> that syntax.\n\nWe should definitely do that.\n\n Tom> Alternatively, we could revert the v12 behavior change. On the\n Tom> whole that might be the wiser course. 
I do not think the costs and\n Tom> benefits of this change were all that carefully thought through.\n\nFailing to specify -d is a _really fricking common_ mistake for\ninexperienced users, who may not realize that the fact that they're\nseeing a ton of SQL on their terminal is not the normal result.\nSeriously, this comes up on a regular basis on IRC (which is why I\nsuggested initially that we should do something about it).\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Sun, 06 Oct 2019 22:02:49 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> Perhaps we could change the back branches so that they interpret\n> Tom> \"-f -\" as \"write to stdout\", but without enforcing that you use\n> Tom> that syntax.\n\n> We should definitely do that.\n\n> Tom> Alternatively, we could revert the v12 behavior change. On the\n> Tom> whole that might be the wiser course. I do not think the costs and\n> Tom> benefits of this change were all that carefully thought through.\n\n> Failing to specify -d is a _really fricking common_ mistake for\n> inexperienced users, who may not realize that the fact that they're\n> seeing a ton of SQL on their terminal is not the normal result.\n> Seriously, this comes up on a regular basis on IRC (which is why I\n> suggested initially that we should do something about it).\n\nNo doubt, but that seems like a really poor excuse for breaking\nmaintenance scripts in a way that basically can't be fixed. Even\nwith the change suggested above, scripts couldn't rely on \"-f -\"\nworking anytime soon, because you couldn't be sure whether a\nback-rev pg_restore had the update or not.\n\nThe idea I'm leaning to after more thought is that we should change\n*all* the branches to accept \"-f -\", but not throw an error if you\ndon't use it. 
Several years from now, we could put the error back in;\nbut not until there's a plausible argument that nobody is running\nold versions of pg_restore anymore.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Oct 2019 17:15:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "BTW, the prior discussion is here:\n\nhttps://www.postgresql.org/message-id/24868.1550106683%40sss.pgh.pa.us\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Sun, 06 Oct 2019 22:39:10 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> > \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > Tom> Perhaps we could change the back branches so that they interpret\n> > Tom> \"-f -\" as \"write to stdout\", but without enforcing that you use\n> > Tom> that syntax.\n> \n> > We should definitely do that.\n\nI agree that this would be a reasonable course of action. Really, it\nshould have always meant that...\n\n> > Tom> Alternatively, we could revert the v12 behavior change. On the\n> > Tom> whole that might be the wiser course. I do not think the costs and\n> > Tom> benefits of this change were all that carefully thought through.\n> \n> > Failing to specify -d is a _really fricking common_ mistake for\n> > inexperienced users, who may not realize that the fact that they're\n> > seeing a ton of SQL on their terminal is not the normal result.\n> > Seriously, this comes up on a regular basis on IRC (which is why I\n> > suggested initially that we should do something about it).\n> \n> No doubt, but that seems like a really poor excuse for breaking\n> maintenance scripts in a way that basically can't be fixed. 
Even\n> with the change suggested above, scripts couldn't rely on \"-f -\"\n> working anytime soon, because you couldn't be sure whether a\n> back-rev pg_restore had the update or not.\n\nMaintenance scripts break across major versions. We completely\ndemolished everything around how recovery works, and some idea that you\ncould craft up something easy that would work in a backwards-compatible\nway is outright ridiculous, so I don't see why we're so concerned about\na change to how pg_restore works here.\n\n> The idea I'm leaning to after more thought is that we should change\n> *all* the branches to accept \"-f -\", but not throw an error if you\n> don't use it. Several years from now, we could put the error back in;\n> but not until there's a plausible argument that nobody is running\n> old versions of pg_restore anymore.\n\nNo, I don't agree with this, at all.\n\nThanks,\n\nStephen", "msg_date": "Tue, 8 Oct 2019 14:08:40 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Em ter, 8 de out de 2019 às 15:08, Stephen Frost <sfrost@snowman.net> escreveu:\n>\n> Greetings,\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> > > \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > > Tom> Perhaps we could change the back branches so that they interpret\n> > > Tom> \"-f -\" as \"write to stdout\", but without enforcing that you use\n> > > Tom> that syntax.\n> >\n> > > We should definitely do that.\n>\n> I agree that this would be a reasonable course of action. Really, it\n> should have always meant that...\n>\nIndeed, it was a broken behavior and the idea was to fix it. However,\nchanging pg_restore in back-branches is worse than doing nothing because\nit could break existing scripts.\n\n> > > Tom> Alternatively, we could revert the v12 behavior change. On the\n> > > Tom> whole that might be the wiser course. 
I do not think the costs and\n> > > Tom> benefits of this change were all that carefully thought through.\n> >\n> > > Failing to specify -d is a _really fricking common_ mistake for\n> > > inexperienced users, who may not realize that the fact that they're\n> > > seeing a ton of SQL on their terminal is not the normal result.\n> > > Seriously, this comes up on a regular basis on IRC (which is why I\n> > > suggested initially that we should do something about it).\n> >\n> > No doubt, but that seems like a really poor excuse for breaking\n> > maintenance scripts in a way that basically can't be fixed. Even\n> > with the change suggested above, scripts couldn't rely on \"-f -\"\n> > working anytime soon, because you couldn't be sure whether a\n> > back-rev pg_restore had the update or not.\n>\n> Maintenance scripts break across major versions. We completely\n> demolished everything around how recovery works, and some idea that you\n> could craft up something easy that would work in a backwards-compatible\n> way is outright ridiculous, so I don't see why we're so concerned about\n> a change to how pg_restore works here.\n>\nYeah, if you check pg_restore version, you could use new syntax for\n12+. We break scripts every release (mainly with catalog changes) and\nI don't know why this change is different than the other ones. 
The\npg_restore change is more user-friendly and less error-prone.\n\n\nRegards,\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n", "msg_date": "Wed, 9 Oct 2019 09:07:40 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* Euler Taveira (euler@timbira.com.br) wrote:\n> Em ter, 8 de out de 2019 às 15:08, Stephen Frost <sfrost@snowman.net> escreveu:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > > Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> > > > \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > > > Tom> Perhaps we could change the back branches so that they interpret\n> > > > Tom> \"-f -\" as \"write to stdout\", but without enforcing that you use\n> > > > Tom> that syntax.\n> > >\n> > > > We should definitely do that.\n> >\n> > I agree that this would be a reasonable course of action. Really, it\n> > should have always meant that...\n> >\n> Indeed, it was a broken behavior and the idea was to fix it. 
However,\n> changing pg_restore in back-branches is worse than do nothing because\n> it could break existent scripts.\n\nI can certainly respect that argument, in general, but in this specific\ncase, I've got a really hard time believing that people wrote scripts\nwhich use '-f -' with the expectation that a './-' file was to be\ncreated.\n\nThanks,\n\nStephen", "msg_date": "Wed, 9 Oct 2019 08:45:05 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Hi,\r\n\r\nOn Sun, Oct 6, 2019 at 7:09 PM, Justin Pryzby wrote:\r\n> I saw this and updated our scripts with pg_restore -f-\r\n> https://www.postgresql.org/docs/12/release-12.html\r\n> |In pg_restore, require specification of -f - to send the dump contents to standard output (Euler Taveira)\r\n> |Previously, this happened by default if no destination was specified, but that was deemed to be unfriendly.\r\n> \r\n> What I didn't realize at first is that -f- has no special meaning in v11 - it\r\n> just writes a file called ./- And it's considered untennable to change\r\nbehavior of v11.\r\n\r\nAhh... I totally missed thinking about the behavior of \"-f -\" in v11 when I reviewed this patch.\r\n\r\n\r\nOn Wed, Oct 9, 2019 at 0:45 PM, Stephen Frost wrote:\r\n> * Euler Taveira (euler@timbira.com.br) wrote:\r\n> > Em ter, 8 de out de 2019 às 15:08, Stephen Frost <sfrost@snowman.net> escreveu:\r\n> > > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\r\n> > > > Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\r\n> > > > > \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\r\n> > > > > Tom> Perhaps we could change the back branches so that they\r\n> > > > > interpret Tom> \"-f -\" as \"write to stdout\", but without\r\n> > > > > enforcing that you use Tom> that syntax.\r\n> > > >\r\n> > > > > We should definitely do that.\r\n> > >\r\n> > > I agree that this would be a reasonable course of action. 
Really,\r\n> > > it should have always meant that...\r\n> > >\r\n> > Indeed, it was a broken behavior and the idea was to fix it. However,\r\n> > changing pg_restore in back-branches is worse than do nothing because\r\n> > it could break existent scripts.\r\n> \r\n> I can certainly respect that argument, in general, but in this specific case, I've got a really hard time believeing\r\n> that people wrote scripts which use '-f -' with the expectation that a './-' file was to be created.\r\n\r\n+1.\r\n\r\nIf we only consider the problem that we can't use \"-f -\" to mean \"dump to stdout\" in v11 and earlier, it seems like a bug and we should fix it.\r\nOf course, if we fix it, some people would run into trouble, but those are the people who wrote scripts which use '-f -' with the expectation that a './-' file would be created.\r\nI don't think there are many such people.\r\n\r\n\r\n--\r\nYoshikazu Imai\r\n", "msg_date": "Wed, 16 Oct 2019 06:25:33 +0000", "msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* imai.yoshikazu@fujitsu.com (imai.yoshikazu@fujitsu.com) wrote:\n> On Sun, Oct 6, 2019 at 7:09 PM, Justin Pryzby wrote:\n> > I saw this and updated our scripts with pg_restore -f-\n> > https://www.postgresql.org/docs/12/release-12.html\n> > |In pg_restore, require specification of -f - to send the dump contents to standard output (Euler Taveira)\n> > |Previously, this happened by default if no destination was specified, but that was deemed to be unfriendly.\n> > \n> > What I didn't realize at first is that -f- has no special meaning in v11 - it\n> > just writes a file called ./- And it's considered untennable to change\n> > behavior of v11.\n> \n> Ahh... 
I totally missed thinking about the behavior of \"-f -\" in v11 when I reviewed this patch.\n\nClearly you weren't the only one.\n\n> On Wed, Oct 9, 2019 at 0:45 PM, Stephen Frost wrote:\n> > * Euler Taveira (euler@timbira.com.br) wrote:\n> > > Em ter, 8 de out de 2019 às 15:08, Stephen Frost <sfrost@snowman.net> escreveu:\n> > > > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > > > > Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> > > > > > \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > > > > > Tom> Perhaps we could change the back branches so that they\n> > > > > > interpret Tom> \"-f -\" as \"write to stdout\", but without\n> > > > > > enforcing that you use Tom> that syntax.\n> > > > >\n> > > > > > We should definitely do that.\n> > > >\n> > > > I agree that this would be a reasonable course of action. Really,\n> > > > it should have always meant that...\n> > > >\n> > > Indeed, it was a broken behavior and the idea was to fix it. However,\n> > > changing pg_restore in back-branches is worse than do nothing because\n> > > it could break existent scripts.\n> > \n> > I can certainly respect that argument, in general, but in this specific case, I've got a really hard time believeing\n> > that people wrote scripts which use '-f -' with the expectation that a './-' file was to be created.\n> \n> +1.\n> \n> If we only think of the problem that we can't use \"-f -\" with the meaning \"dump to the stdout\" in v11 and before ones, it seems a bug and we should fix it.\n> Of course, if we fix it, some people would go into the trouble, but such people are who wrote scripts which use '-f -' with the expectation that a './-' file.\n> I don't think there are such people a lot.\n\nThis topic, unfortunately, seems a bit stuck right now. 
Maybe there's a\nway we can pry it loose and get to a resolution.\n\nOn the one hand, we have Yoshikazu Imai, Andrew, and me pushing to change\nback-branches, Euler arguing that we shouldn't do anything, and Tom\narguing to make all versions accept '-f -'.\n\nFirst, I'd like to clarify what I believe Tom's suggestion is, and then\ntalk through that, as his vote sways this topic pretty heavily.\n\nTom, I take it your suggestion is to have '-f -' be accepted to mean\n'goes to stdout' in all branches? That goes against the argument that\nwe don't want to break existing scripts, as it's possible that there are\nexisting scripts that depend on '-f -' actually going to a './-' file.\n\nIs your argument here that, with the above, existing scripts could be\nupdated to use '-f -' explicitly and work with multiple versions, and\nthat scripts which aren't changed would work as-is?\n\nIf so, then I don't agree with it- if we really don't want to break\nexisting scripts when moving from pre-v12 to v12, then this patch never\nshould have been accepted at all as that's the only way to avoid\nbreaking anything, but then, we shouldn't be making a lot of other\nchanges between major versions either because there's often a good\nchance that we'll break things. Instead, the patch was accepted, as a\ngood and forward-moving change, with the understanding that it would\nrequire some users to update their scripts when they move to v12, so,\nfor my part at least, that question was answered when we committed the\nchange and released with it. 
Now, if we wish to adjust back-branches to\nmake it easier for users to have scripts that work with both versions,\nthat seems like a worthwhile change and is very unlikely to cause\nbreakage- and it's certainly more likely to have users actually change\ntheir scripts to use '-f -' explicitly when they start working with v12,\ninstead of depending on stdout being the default, which is ultimately\nthe goal and why the change was made in the first place.\n\nIf the concern is that we can expect folks to install v12 and then\nrefuse or be unable to upgrade back-branches, then I just don't have any\nsympathy for that either- minor updates are extremely important, and new\nmajor versions are certainly no cake walk to get installed, so that\nargument just doesn't hold water with me- if they can upgrade to v12,\nthen they can update to the latest minor versions, if they actually need\nto work with both concurrently (which strikes me as already at least\nrelatively uncommon...).\n\nIf you meant for all branches to accept '-f -' and have it go to a './-'\nfile then that's just a revert of this entire change, which I can't\nagree with either- really, folks who are depending on that are depending\non buggy behavior in the first place.\n\nThanks,\n\nStephen", "msg_date": "Wed, 16 Oct 2019 13:21:48 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On Sun, Oct 06, 2019 at 04:43:13PM -0400, Tom Lane wrote:\n> Nobody is going to wish that to mean \"write to a file named '-'\", so\n\nProbably right, but it occurs to me that someone could make a named pipe called\nthat, possibly to make \"dump to stdout\" work with scripts that don't support\ndumping to stdout, but with what's arguably a self-documenting syntax.\n\nOn Wed, Oct 09, 2019 at 09:07:40AM -0300, Euler Taveira wrote:\n> > Maintenance scripts break across major versions.\n...\n> Yeah, if you check pg_restore version, you could 
use new syntax for\n> 12+. We break scripts every release (mainly with catalog changes) and\n> I don't know why this change is different than the other ones. The\n> pg_restore changes is more user-friendly and less error-prone.\n\nThe issue isn't that scripts broke, but that after I fixed scripts to work with\nv12, it's impossible (within pg_restore and without relying on shell or linux\nconventions) to write something that works on both v11 and v12, without\nconditionalizing on pg_restore --version.\n\nOn Wed, Oct 16, 2019 at 01:21:48PM -0400, Stephen Frost wrote:\n> [...] if they actually need to work with both concurrently (which strikes me\n> as already at least relatively uncommon...).\n\nI doubt it's uncommon. In our case, we have customers running centos6 and 7.\nThere's no postgis RPMs provided for centos6 which allow an upgrade path to\nv12, so we'll end up supporting at least (centos6, pg11) and (centos7,pg12) for\nmonths, at least.\n\nI have a half dozen maintenance scripts to do things like reindex, vacuum,\ncluster, alter tblspace. In the immediate case, our backup script runs\npg_restore to check if an existing pg_dump backup of an old table is empty when\nthe table is not itself empty - which has happened before due to logic errors\nand mishandled DST... (We're taking advantage of timeseries partitioning so\ndaily backups exclude tables older than a certain threshold, which are assumed\nto be unchanged, or at least not have data updated).\n\nI'd *like* to be able to deploy our most recent maint scripts during the\ninterval of time our customers are running different major versions. The\nalternative being to try to remember to avoid deploying updated v12 scripts at\ncustomers still running v11. 
I went to the effort to make our vacuum/analyze\nscript support both versions following the OID change.\n\nI worked around the pg_restore change using /dev/stdout ; possibly the\ndocumentation should mention that workaround for portability to earlier\nversions: that would work for maybe 85% of cases. If need be, one could check\npg_restore --version. But it's nicer not to need to.\n\nTom's proposed in February to backpatch the -f- behavior, so ISTM that we're\nright now exactly where we (or at least he) planned to be, except that the\nbackpatch ideally should've been included in the minor releases in August,\nbefore v12 was released.\n\nhttps://www.postgresql.org/message-id/24868.1550106683%40sss.pgh.pa.us\n\nJustin\n\n\n", "msg_date": "Wed, 16 Oct 2019 12:58:38 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* Justin Pryzby (pryzby@telsasoft.com) wrote:\n> On Sun, Oct 06, 2019 at 04:43:13PM -0400, Tom Lane wrote:\n> > Nobody is going to wish that to mean \"write to a file named '-'\", so\n> \n> Probably right, but it occurs to me that someone could make a named pipe called\n> that, possibly to make \"dump to stdout\" work with scripts that don't support\n> dumping to stdout, but with what's arguably a self-documenting syntax.\n\nI'm not super keen to stress a great deal over \"someone could\" cases.\nYes, we can come up with lots of \"what ifs\" here to justify why someone\nmight think to do this but it still seems extremely rare to me. It'd be\nnice to get some actual numbers somehow but I haven't got any great\nideas about how to do that. 
Actual research into this would probably be\nto go digging through source code that's available to try and figure out\nif such a case exists anywhere public.\n\n> On Wed, Oct 09, 2019 at 09:07:40AM -0300, Euler Taveira wrote:\n> > > Maintenance scripts break across major versions.\n> ...\n> > Yeah, if you check pg_restore version, you could use new syntax for\n> > 12+. We break scripts every release (mainly with catalog changes) and\n> > I don't know why this change is different than the other ones. The\n> > pg_restore changes is more user-friendly and less error-prone.\n> \n> The issue isn't that scripts broke, but that after I fixed scripts to work with\n> v12, it's impossible (within pg_restore and without relying on shell or linux\n> conventions) to write something that works on both v11 and v12, without\n> conditionalizing on pg_restore --version.\n\nRight- you were happy (more-or-less) to update the scripts to deal with\nthe v12 changes, but didn't like that those changes then broke when run\nagainst v11, something that would be fixed by correcting those earlier\nversions.\n\n> On Wed, Oct 16, 2019 at 01:21:48PM -0400, Stephen Frost wrote:\n> > [...] if they actually need to work with both concurrently (which strikes me\n> > as already at least relatively uncommon...).\n> \n> I doubt it's uncommon. In our case, we have customers running centos6 and 7.\n> There's no postgis RPMs provided for centos6 which allow an upgrade path to\n> v12, so we'll end up supporting at least (centos6, pg11) and (centos7,pg12) for\n> months, at least.\n\nI suppose the issue here is that you don't want to have different\nversions of some scripts for centos6/pg11 vs. centos7/pg12? I'm a bit\nsurprised that you don't have to for reasons unrelated to pg_restore.\n\n> I have a half dozen maintenance scripts to do things like reindex, vacuum,\n> cluster, alter tblspace. 
In the immediate case, our backup script runs\n> pg_restore to check if an existing pg_dump backup of an old table is empty when\n> the table is not itself empty - which has happened before due to logic errors\n> and mishandled DST... (We're taking advantage of timeseries partitioning so\n> daily backups exclude tables older than a certain threshold, which are assumed\n> to be unchanged, or at least not have data updated).\n> \n> I'd *like* to be able to deploy our most recent maint scripts during the\n> interval of time our customers are running different major versions. The\n> alternative being to try to remember to avoid deploying updated v12 scripts at\n> customers still running v11. I went to the effort to make our vacuum/analyze\n> script support both versions following the OID change.\n\nAnd I suppose you don't want to install v12 client tools for the v11\nsystems..? I get that there's an argument for that, but it does also\nseem like it'd be an alternative solution.\n\n> I worked around the pg_restore change using /dev/stdout ; possibly the\n> documentation should mention that workaround for portability to earlier\n> versions: that would work for maybe 85% of cases. If need be, one could check\n> pg_restore --version. But it's nicer not to need to.\n> \n> Tom's proposed in February to backpatch the -f- behavior, so ISTM that we're\n> right now exactly where we (or at least he) planned to be, except that the\n> backpatch ideally should've been included in the minor releases in August,\n> before v12 was released.\n> \n> https://www.postgresql.org/message-id/24868.1550106683%40sss.pgh.pa.us\n\nThat continues to strike me as a good way forward, and I'm guessing you\nagree on that? 
If so, sorry for not including you in my earlier email.\n\nThanks,\n\nStephen", "msg_date": "Wed, 16 Oct 2019 15:04:52 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On Wed, Oct 16, 2019 at 03:04:52PM -0400, Stephen Frost wrote:\n\n> > On Wed, Oct 16, 2019 at 01:21:48PM -0400, Stephen Frost wrote:\n> > > [...] if they actually need to work with both concurrently (which strikes me\n> > > as already at least relatively uncommon...).\n> > \n> > I doubt it's uncommon. In our case, we have customers running centos6 and 7.\n> > There's no postgis RPMs provided for centos6 which allow an upgrade path to\n> > v12, so we'll end up supporting at least (centos6, pg11) and (centos7,pg12) for\n> > months, at least.\n> \n> I suppose the issue here is that you don't want to have different\n> versions of some scripts for centos6/pg11 vs. centos7/pg12? I'm a bit\n> surprised that you don't have to for reasons unrelated to pg_restore.\n\nRight, I don't want to \"have to\" do anything :)\n\nIf we really need a separate script, or conditional, then we'll do, but it's\nnicer to be ABLE to write something (like this sanity check) in one line and not\nNEED TO write it in six. So far these maint scripts have a significant bit of\ntelsasoft-specific logic, but very little specific to postgres versions. I\nrecall the schema-qualification stuff caused some churn, but it was in a minor\nrelease, so everyone was upgraded within 30-40 days, and if they weren't, I\nprobably knew not to deploy updated scripts, either.\n\n> > I'd *like* to be able to deploy our most recent maint scripts during the\n> > interval of time our customers are running different major versions. The\n> > alternative being to try to remember to avoid deploying updated v12 scripts at\n> > customers still running v11. 
I went to the effort to make our vacuum/analyze\n> > script support both versions following the OID change.\n> \n> And I suppose you don't want to install v12 client tools for the v11\n> systems..? I get that there's an argument for that, but it does also\n> seem like it'd be an alternative solution.\n\nHm, I'd be open to it, except that when I tried this during beta, the PGDG\nRPMs are compiled with GSSAPI, which creates lots of warnings...but maybe\nthat's just in nagios...\n\n> > Tom's proposed in February to backpatch the -f- behavior, so ISTM that we're\n> > right now exactly where we (or at least he) planned to be, except that the\n> > backpatch ideally should've been included in the minor releases in August,\n> > before v12 was released.\n> > \n> > https://www.postgresql.org/message-id/24868.1550106683%40sss.pgh.pa.us\n> \n> That continues to strike me as a good way forward, and I'm guessing you\n> agree on that? If so, sorry for not including you in my earlier email.\n\nI believe you did include me (?) - I started the thread (on -general).\nhttps://www.postgresql.org/message-id/20191016172148.GH6962%40tamriel.snowman.net\n\nI think it's a good idea to do some combination of backpatching -f-, and maybe\ndocument behavior of pre-12 pg_restore in v12 release notes, and suggest\n/dev/stdout as a likely workaround. Of course, if backpatched, the behavior of\npre-12 will vary, and should be documented as such, which is kind of a lot,\nbut well.\n\n|In pg_restore, require specification of -f - to send the dump contents to standard output (Euler Taveira)\n|Previously, this happened by default if no destination was specified, but that was deemed to be unfriendly.\n|In the latest minor releases of versions v11 and earlier, pg_restore -f - is updated for\n|consistency with the new behavior of v12, to allow scripts to be written which\n|work on both. 
But note that earlier releases of v9.3 to v11 don't specially\n|handle \"-f -\", which will cause them to write to a file called \"-\" and not\n|stdout. If called under most unix shells, -f /dev/stdout will write to stdout on all versions of pg_restore.\n\nIt's not perfect - someone who wants portable behavior has to apply November's\nminor upgrade before installing any v12 server. And vendors (something like\npgadmin) will end up \"having to\" write to a file to be portable, or else check\nthe full version, not just the major version.\n\nJustin\n\n\n", "msg_date": "Wed, 16 Oct 2019 14:28:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* Justin Pryzby (pryzby@telsasoft.com) wrote:\n> On Wed, Oct 16, 2019 at 03:04:52PM -0400, Stephen Frost wrote:\n> \n> > > On Wed, Oct 16, 2019 at 01:21:48PM -0400, Stephen Frost wrote:\n> > > > [...] if they actually need to work with both concurrently (which strikes me\n> > > > as already at least relatively uncommon...).\n> > > \n> > > I doubt it's uncommon. In our case, we have customers running centos6 and 7.\n> > > There's no postgis RPMs provided for centos6 which allow an upgrade path to\n> > > v12, so we'll end up supporting at least (centos6, pg11) and (centos7,pg12) for\n> > > months, at least.\n> > \n> > I suppose the issue here is that you don't want to have different\n> > versions of some scripts for centos6/pg11 vs. centos7/pg12? I'm a bit\n> > surprised that you don't have to for reasons unrelated to pg_restore.\n> \n> Right, I don't want to \"have to\" do anything :)\n\nSure, fair enough.\n\n> If we really need a separate script, or conditional, then we'll do, but it's\n> nicer to be ABLE write something (like this sanity check) in one line and not\n> NEED TO write it in six. So far these maint scripts have significant a bit\n> telsasoft-specific logic, but very little specific to postgres versions. 
I\n> recall the schema-qualification stuff caused some churn, but it was in a minor\n> release, so everyone was upgraded within 30-40 days, and if they weren't, I\n> probably knew not to deploy updated scripts, either.\n\nHmm, that's interesting as a data point, at least.\n\n> > > I'd *like* to be able to deploy our most recent maint scripts during the\n> > > interval of time our customers are running different major versions. The\n> > > alternative being to try to remember to avoid deploying updated v12 scripts at\n> > > customers still running v11. I went to the effort to make our vacuum/analyze\n> > > script support both versions following the OID change.\n> > \n> > And I suppose you don't want to install v12 client tools for the v11\n> > systems..? I get that there's an argument for that, but it does also\n> > seem like it'd be an alternative solution.\n> \n> Hm, I'd be opened to it, except that when I tries this during beta, the PGDG\n> RPMs are compiled with GSSAPI, which creates lots of warnings...but maybe\n> that's just in nagios...\n\nWarnings in the server log because they attempt to start a GSSAPI\nencrypted session first, if you are authenticating with GSSAPI? If so\nthen I'm sympathetic, but at least you could address that by setting\nPGGSSENCMODE to disable, and that'd work for pre-v12 and v12+.\n\n> > > Tom's proposed in February to backpatch the -f- behavior, so ISTM that we're\n> > > right now exactly where we (or at least he) planned to be, except that the\n> > > backpatch ideally should've been included in the minor releases in August,\n> > > before v12 was released.\n> > > \n> > > https://www.postgresql.org/message-id/24868.1550106683%40sss.pgh.pa.us\n> > \n> > That continues to strike me as a good way forward, and I'm guessing you\n> > agree on that? If so, sorry for not including you in my earlier email.\n> \n> I believe you did include me (?) 
- I started the thread (on -general).\n> https://www.postgresql.org/message-id/20191016172148.GH6962%40tamriel.snowman.net\n\nAh, no, I mean in the list of who was taking what position- I only named\nYoshikazu Imai, Andrew, Euler, Tom and myself.\n\n> I think it's a good idea to do some combination of backpatching -f-, and maybe\n> document behavior of pre-12 pg_restore in v12 release notes, and suggest\n> /dev/stdout as a likely workaround. Of course, if backpatched, the behavior of\n> pre-12 will vary, and should be documented as such, which is a kind of alot,\n> but well.\n> \n> |In pg_restore, require specification of -f - to send the dump contents to standard output (Euler Taveira)\n> |Previously, this happened by default if no destination was specified, but that was deemed to be unfriendly.\n> |In the latest minor releases of versions v11 and earlier, pg_restore -f - is updated for\n> |consistency with the new behavior of v12, to allow scripts to be written which\n> |work on both. But note that earlier releases of v9.3 to v11 don't specially\n> |handle \"-f -\", which will cause them to write to a file called \"-\" and not\n> |stdout. If called under most unix shells, -f /dev/stdout will write to stdout on all versions of pg_restore.\n\nWe'd probably have to list the specific minor versions instead of just\nsaying \"latest\" and if we're suggesting an alternative course of action\nthen we might want to actually include that in the documentation\nsomewhere.. I'm not really sure that we want to get into such\nplatform-specific recommendations though.\n\n> It's not perfect - someone who wants portable behavior has to apply November's\n> minor upgrade before installing any v12 server. 
And vendors (something like\n> pgadmin) will end up \"having to\" write to a file to be portable, or else check\n> the full version, not just the major version.\n\nSee- folks like pgadmin I would expect to have to routinely write custom\ncode for each version since the goal there is to support all of the\noptions available from the utility, so I'm not really sure that this\nwould actually be much of a hardship for them. Of course, I don't\nreally hack on pgAdmin, so I might be wrong there.\n\nThanks,\n\nStephen", "msg_date": "Wed, 16 Oct 2019 17:44:24 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> First, I'd like to clarify what I believe Tom's suggestion is, and then\n> talk through that, as his vote sways this topic pretty heavily.\n\n> Tom, I take it your suggestion is to have '-f -' be accepted to mean\n> 'goes to stdout' in all branches?\n\nYes.\n\n> That goes against the argument that\n> we don't want to break existing scripts, as it's possible that there are\n> existing scripts that depend on '-f -' actually going to a './-' file.\n\nWhile that's theoretically possible, I think that the number of cases\nwhere somebody is actually expecting that is epsilon. It seems more\nuseful to tell people that they can now use \"-f -\" in all branches,\nand it's required to use it as of v12.\n\nAlternatively, we could revoke the requirement to use \"-f -\" in 12,\nand wait a couple releases before enforcing it. The fundamental\nproblem here is that we tried to go from \"-f - doesn't work\" to\n\"you must use -f -\" with no grace period where \"-f - is optional\".\nIn hindsight that was a bad idea.\n\n> If you meant for all branches to accept '-f -' and have it go to a './-'\n> file then that's just a revert of this entire change, which I can't\n> agree with either\n\nNo, I'm not proposing a full revert. 
But there's certainly room to\nconsider reverting the part that says you *must* write \"-f -\" to get\noutput to stdout.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Oct 2019 12:24:10 +0200", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On Thu, Oct 17, 2019 at 12:24:10PM +0200, Tom Lane wrote:\n> Alternatively, we could revoke the requirement to use \"-f -\" in 12,\n> and wait a couple releases before enforcing it. The fundamental\n> problem here is that we tried to go from \"-f - doesn't work\" to\n> \"you must use -f -\" with no grace period where \"-f - is optional\".\n> In hindsight that was a bad idea.\n\nI'm going to make an argument in favour of keeping the enforcement of -f- in\nv12.\n\nIf there's no enforcement, I don't know if many people would naturally start to\nuse -f-, which means that tools which need to work across a wide range of\n(minor) versions may never confront this until it's enforced in v14/v15, at\nwhich point we probably end up revisiting the whole thing again.\n\nFailing pg_restore forces people to confront the new/different behavior. If we\ndefer failing for 2 years, it probably just means it'll be an issue again 2\nyears from now.\n\nHowever, it's still an issue if (old) back branches (like 11.5) don't support\n-f-, and I think the differing behavior should be called out in the v12 release\nnotes, as succinctly as possible.\n\nAlso, I'm taking the opportunity to correct myself, before someone else does:\n\nOn Wed, Oct 16, 2019 at 02:28:40PM -0500, Justin Pryzby wrote:\n> And vendors (something like pgadmin) will end up \"having to\" write to a file\n> to be portable, or else check the full version, not just the major version.\n\nI take back that part .. before v12, they'd get stdout by not specifying -f,\nand since 12.0 they'd get stdout with -f-. 
No need to check the minor version,\nsince the \"need to\" specify -f- wouldn't be backpatched, of course.\n\nJustin\n\n\n", "msg_date": "Thu, 17 Oct 2019 10:30:06 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* Justin Pryzby (pryzby@telsasoft.com) wrote:\n> On Thu, Oct 17, 2019 at 12:24:10PM +0200, Tom Lane wrote:\n> > Alternatively, we could revoke the requirement to use \"-f -\" in 12,\n> > and wait a couple releases before enforcing it. The fundamental\n> > problem here is that we tried to go from \"-f - doesn't work\" to\n> > \"you must use -f -\" with no grace period where \"-f - is optional\".\n> > In hindsight that was a bad idea.\n> \n> I'm going to make an argument in favour of keeping the enforcement of -f- in\n> v12.\n> \n> If there's no enforcement, I don't know if many people would naturally start to\n> use -f-, which means that tools which need to work across a wide range of\n> (minor) versions may never confront this until it's enforced in v14/v15, at\n> which point we probably end up revisiting the whole thing again.\n> \n> Failing pg_restore forces people to confront the new/different behavior. 
If we\n> defer failing for 2 years, it probably just means it'll be an issue again 2\n> years from now.\n\nAbsolutely agreed on this- deferring the pain doesn't really change\nthings here.\n\n> However, it's still an issue if (old) back branches (like 11.5) don't support\n> -f-, and I think the differing behavior should be called out in the v12 release\n> notes, as succinctly as possible.\n\nI agree that we should call it out in the release notes, of course, and\nthat, in this case, it's alright to fix the '-f-' bug that exists in the\nback branches as a bug and not something else.\n\n> Also, I'm taking the opportunity to correct myself, before someone else does:\n> \n> On Wed, Oct 16, 2019 at 02:28:40PM -0500, Justin Pryzby wrote:\n> > And vendors (something like pgadmin) will end up \"having to\" write to a file\n> > to be portable, or else check the full version, not just the major version.\n> \n> I take back that part .. before v12, they'd get stdout by not specifying -f,\n> and since 12.0 they'd get stdout with -f-. No need to check the minor version,\n> since the \"need to\" specify -f- wouldn't be backpatched, of course.\n\nAh, yes, that's true.\n\nThanks,\n\nStephen", "msg_date": "Thu, 17 Oct 2019 18:02:57 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On 2019-Oct-17, Tom Lane wrote:\n\n> Stephen Frost <sfrost@snowman.net> writes:\n> > First, I'd like to clarify what I believe Tom's suggestion is, and then\n> > talk through that, as his vote sways this topic pretty heavily.\n> \n> > Tom, I take it your suggestion is to have '-f -' be accepted to mean\n> > 'goes to stdout' in all branches?\n> \n> Yes.\n\n+1 for this, FWIW. Let's get it done before next week minors. Is\nanybody writing a patch? 
If not, I can do it.\n\n> > If you meant for all branches to accept '-f -' and have it go to a './-'\n> > file then that's just a revert of this entire change, which I can't\n> > agree with either\n> \n> No, I'm not proposing a full revert. But there's certainly room to\n> consider reverting the part that says you *must* write \"-f -\" to get\n> output to stdout.\n\nI don't think this will buy us anything, if we get past branches updated\npromptly.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 4 Nov 2019 11:53:36 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Oct-17, Tom Lane wrote:\n>> Stephen Frost <sfrost@snowman.net> writes:\n>>> Tom, I take it your suggestion is to have '-f -' be accepted to mean\n>>> 'goes to stdout' in all branches?\n\n>> Yes.\n\n> +1 for this, FWIW. Let's get it done before next week minors. Is\n> anybody writing a patch? If not, I can do it.\n\nPlease do.\n\n>> No, I'm not proposing a full revert. But there's certainly room to\n>> consider reverting the part that says you *must* write \"-f -\" to get\n>> output to stdout.\n\n> I don't think this will buy us anything, if we get past branches updated\n> promptly.\n\nI'm okay with that approach.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Nov 2019 10:05:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Em seg., 4 de nov. 
de 2019 às 11:53, Alvaro Herrera\n<alvherre@2ndquadrant.com> escreveu:\n>\n> On 2019-Oct-17, Tom Lane wrote:\n>\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > First, I'd like to clarify what I believe Tom's suggestion is, and then\n> > > talk through that, as his vote sways this topic pretty heavily.\n> >\n> > > Tom, I take it your suggestion is to have '-f -' be accepted to mean\n> > > 'goes to stdout' in all branches?\n> >\n> > Yes.\n>\n> +1 for this, FWIW. Let's get it done before next week minors. Is\n> anybody writing a patch? If not, I can do it.\n>\nI'm not.\n\n> > > If you meant for all branches to accept '-f -' and have it go to a './-'\n> > > file then that's just a revert of this entire change, which I can't\n> > > agree with either\n> >\n> > No, I'm not proposing a full revert. But there's certainly room to\n> > consider reverting the part that says you *must* write \"-f -\" to get\n> > output to stdout.\n>\n> I don't think this will buy us anything, if we get past branches updated\n> promptly.\n>\n+1.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n", "msg_date": "Mon, 4 Nov 2019 12:14:09 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Oct-17, Tom Lane wrote:\n> >> Stephen Frost <sfrost@snowman.net> writes:\n> >>> Tom, I take it your suggestion is to have '-f -' be accepted to mean\n> >>> 'goes to stdout' in all branches?\n> \n> >> Yes.\n> \n> > +1 for this, FWIW. Let's get it done before next week minors. Is\n> > anybody writing a patch? 
If not, I can do it.\n> \n> Please do.\n\n+1\n\nThanks,\n\nStephen", "msg_date": "Mon, 4 Nov 2019 11:06:44 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On 2019-Nov-04, Stephen Frost wrote:\n\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> > > +1 for this, FWIW. Let's get it done before next week minors. Is\n> > > anybody writing a patch? If not, I can do it.\n\nTurns out that this is a simple partial cherry-pick of the original\ncommit.\n\nI'm not sure if we need to call out the incompatibility in the minors'\nrelease notes (namely, that people using \"-f-\" to dump to ./- will need\nto choose a different file name).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 4 Nov 2019 16:12:38 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Turns out that this is a simple partial cherry-pick of the original\n> commit.\n\nIn the back branches, you should keep the statement that stdout\nis the default output file. Looks sane otherwise (I didn't test it).\n\n> I'm not sure if we need to call out the incompatibility in the minors'\n> release notes (namely, that people using \"-f-\" to dump to ./- will need\n> to choose a different file name).\n\nWell, we'll have to document the addition of the feature. I think it\ncan be phrased positively though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Nov 2019 14:24:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Em seg., 4 de nov. 
de 2019 às 16:12, Alvaro Herrera\n<alvherre@2ndquadrant.com> escreveu:\n> I'm not sure if we need to call out the incompatibility in the minors'\n> release notes (namely, that people using \"-f-\" to dump to ./- will need\n> to choose a different file name).\n>\nShould we break translations? I'm -0.5 on changing usage(). If you are\nusing 9.5, you know that it does not work. If you try it by accident\n(because it works in v12), it will work but it is not that important\nto inform it in --help (if you are in doubt, checking the docs will\nanswer your question).\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n", "msg_date": "Mon, 4 Nov 2019 16:35:03 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On 2019-Nov-04, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Turns out that this is a simple partial cherry-pick of the original\n> > commit.\n> \n> In the back branches, you should keep the statement that stdout\n> is the default output file. Looks sane otherwise (I didn't test it).\n\nI propose this:\n\n <para>\n Specify output file for generated script, or for the listing\n when used with <option>-l</option>. Use <literal>-</literal>\n for the standard output, which is also the default.\n </para>\n\nLess invasive formulations sound repetitive (such as \"Use - for stdout.\nThe default is stdout\"). I'm open to suggestions.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 4 Nov 2019 16:43:05 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On 2019-Nov-04, Euler Taveira wrote:\n\n> Em seg., 4 de nov. 
de 2019 às 16:12, Alvaro Herrera\n> <alvherre@2ndquadrant.com> escreveu:\n> > I'm not sure if we need to call out the incompatibility in the minors'\n> > release notes (namely, that people using \"-f-\" to dump to ./- will need\n> > to choose a different file name).\n> >\n> Should we break translations? I'm -0.5 on changing usage(). If you are\n> using 9.5, you know that it does not work. If you try it by accident\n> (because it works in v12), it will work but it is not that important\n> to inform it in --help (if you are in doubt, checking the docs will\n> answer your question).\n\nI would rather break the translations, and make all users aware if they\nlook at --help.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 4 Nov 2019 16:44:19 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On 2019-Nov-04, Alvaro Herrera wrote:\n\n> On 2019-Nov-04, Stephen Frost wrote:\n> \n> > > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> \n> > > > +1 for this, FWIW. Let's get it done before next week minors. Is\n> > > > anybody writing a patch? If not, I can do it.\n> \n> Turns out that this is a simple partial cherry-pick of the original\n> commit.\n\nPushed, with the documentation change suggested downthread.\n\nThanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 Nov 2019 10:12:12 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On 2019-11-04 15:53, Alvaro Herrera wrote:\n>> No, I'm not proposing a full revert. 
But there's certainly room to\n>> consider reverting the part that says you *must* write \"-f -\" to get\n>> output to stdout.\n> I don't think this will buy us anything, if we get past branches updated\n> promptly.\n\nUsers with hundreds or thousands of servers and various ancient \nmaintenance scripts lying around in hard-to-track ways are not going to be \nable to get everything upgraded to the latest minors *and* new script \nversions any time soon. Until they do, they are effectively blocked \nfrom introducing PG12 into their environment. This is very complicated \nand risky for them. I think we should revert the part that requires \nusing -f - at least for PG12.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 Nov 2019 14:47:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> On 2019-11-04 15:53, Alvaro Herrera wrote:\n> >>No, I'm not proposing a full revert. But there's certainly room to\n> >>consider reverting the part that says you *must* write \"-f -\" to get\n> >>output to stdout.\n> >I don't think this will buy us anything, if we get past branches updated\n> >promptly.\n> \n> Users with hundreds or thousands of servers and various ancient\n> maintenance scripts lying around in hard-to-track ways are not going to be able\n> to get everything upgraded to the latest minors *and* new script versions\n> any time soon. Until they do, they are effectively blocked from introducing\n> PG12 into their environment. This is very complicated and risky for them.\n> I think we should revert the part that requires using -f - at least for\n> PG12.\n\nAbsolutely not. 
This argument could be made, with a great deal more\njustification, against the changes to remove recovery.conf, and I'm sure\nquite a few other changes that we've made between major versions over\nthe years, but to do so would be to hamstring our ability to make\nprogress and to improve PG.\n\nWe don't guarantee this kind of compatibility between major versions.\nThose users have years to address these kinds of changes, that's why we\nhave back-branches and support major versions for 5 years.\n\nThanks,\n\nStephen", "msg_date": "Tue, 5 Nov 2019 09:11:32 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On 2019-11-05 15:11, Stephen Frost wrote:\n> We don't guarantee this kind of compatibility between major versions.\n\nWe do generally ensure compatibility of client side tools across major \nversions. I don't recall a case where we broke compatibility in a \ncomparable way without a generous transition period.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 Nov 2019 15:19:33 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n>> On 2019-11-04 15:53, Alvaro Herrera wrote:\n>>>> No, I'm not proposing a full revert. But there's certainly room to\n>>>> consider reverting the part that says you*must* write \"-f -\" to get\n>>>> output to stdout.\n\n>>> I don't think this will buy us anything, if we get past branches updated\n>>> promptly.\n\n>> I think we should revert the part that requires using -f - at least for\n>> PG12.\n\n> Absolutely not. 
This argument could be made, with a great deal more\n> justification, against the changes to remove recovery.conf, and I'm sure\n> quite a few other changes that we've made between major versions over\n> the years, but to do so would be to hamstring our ability to make\n> progress and to improve PG.\n\nIn this case, not in the least: we would simply be imposing the sort\nof *orderly* feature introduction that I thought was the plan from\nthe very beginning [1]. That is, first make \"-f -\" available, and\nmake it required only in some later version. If we'd back-patched\nthe optional feature back in April, it might've been okay to require\nit in v12, but we failed to provide any transition period.\n\nI'm in favor of making v12 act like the older branches now do, and\nrequiring \"-f -\" only as of v13. Yeah, the transition will be a\nlittle slower, but this feature is not of such huge value that it\nreally justifies breaking scripts with zero notice.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/24868.1550106683%40sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 05 Nov 2019 09:28:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> On 2019-11-05 15:11, Stephen Frost wrote:\n> >We don't guarantee this kind of compatibility between major versions.\n> \n> We do generally ensure compatibility of client side tools across major\n> versions. I don't recall a case where we broke compatibility in a\n> comparable way without a generous transition period.\n\nNo, we don't.\n\nThe compatibility of client side tools across major versions that we\nprovide is that newer versions will work with older databases- eg:\npg_dump will work back many many years, as will psql, and that's very\nclear as we have specific code in those client side tools to work with\nolder database versions. 
However, if you're migrating to the newer\nversion of the tool or the database, you need to test with that new\nversion and should expect to have to make some changes.\n\nWe routinely make changes, like the removal of recovery.conf, changing\nthe name of pg_xlog to pg_wal, renaming pg_xlogdump to pg_waldump and\npg_resetxlog to pg_resetwal, the other xlog -> wal name changes, that\nwill break scripts that people have written (not to mention serious\napplications like pgAdmin, barman, pgbackrest, postgres_exporter,\npg_partman, et al), and certainly we do that without any more lead time\nthan \"this is what's in the new release.\" The XLOG -> WAL changes were\neven committed quite late in the cycle, February it looks like, with the\nlocation -> lsn changes happening in May.\n\nI also can't recall off-hand a specific case where we said \"this\nbehavior is going to change in release current_release+2, so be\nprepared\", mostly because the point is well made consistently that: a)\nwe don't want to guarantee any such change will actually happen in some\nkind of timeline like that, meaning people can't actually plan for it,\nand b) people will either make the change proactively because they track\nwhat we're doing closely, or will wait until they actually go to try and\nuse the new version, in which case if it works then they won't bother\nchanging and if it doesn't then they'll put in the effort to make the\nchange, there's no real middle ground there.\n\nThanks,\n\nStephen", "msg_date": "Tue, 5 Nov 2019 09:34:45 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> In this case, not in the least: we would simply be imposing the sort\n> of *orderly* feature introduction that I thought was the plan from\n> the very beginning [1]. That is, first make \"-f -\" available, and\n> make it required only in some later version. 
If we'd back-patched\n> the optional feature back in April, it might've been okay to require\n> it in v12, but we failed to provide any transition period.\n\n... just like we didn't provide any transition period for the\nrecovery.conf changes.\n\n> I'm in favor of making v12 act like the older branches now do, and\n> requiring \"-f -\" only as of v13. Yeah, the transition will be a\n> little slower, but this feature is not of such huge value that it\n> really justifies breaking scripts with zero notice.\n\nThe recovery.conf changes provided absolutely zero value in this\nrelease and breaks a great deal more things.\n\nThis argument simply doesn't hold with what we've done historically or\neven in this release, so, no, I don't agree that it makes sense to\nrevert this change any more than it makes sense to revert the\nrecovery.conf changes. Maybe, if this had come up over the summer and\nthis agreement came out that these changes weren't worth the breakage\nthat they cause, we could have reverted them, but that ship has sailed\nat this point.\n\nThanks,\n\nStephen", "msg_date": "Tue, 5 Nov 2019 09:38:12 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> In this case, not in the least: we would simply be imposing the sort\n>> of *orderly* feature introduction that I thought was the plan from\n>> the very beginning [1]. That is, first make \"-f -\" available, and\n>> make it required only in some later version. If we'd back-patched\n>> the optional feature back in April, it might've been okay to require\n>> it in v12, but we failed to provide any transition period.\n\n> ... just like we didn't provide any transition period for the\n> recovery.conf changes.\n\nSure, because there wasn't any practical way to provide a transition\nperiod.
I think that case is entirely not comparable to this one,\neither as to whether a transition period is possible, or as to whether\nthe benefits of the change merit forced breakage.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Nov 2019 09:46:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> In this case, not in the least: we would simply be imposing the sort\n> >> of *orderly* feature introduction that I thought was the plan from\n> >> the very beginning [1]. That is, first make \"-f -\" available, and\n> >> make it required only in some later version. If we'd back-patched\n> >> the optional feature back in April, it might've been okay to require\n> >> it in v12, but we failed to provide any transition period.\n> \n> > ... just like we didn't provide any transition period for the\n> > recovery.conf changes.\n> \n> Sure, because there wasn't any practical way to provide a transition\n> period. I think that case is entirely not comparable to this one,\n> either as to whether a transition period is possible, or as to whether\n> the benefits of the change merit forced breakage.\n\nWe didn't put any effort into trying to provide a transition period, and\nfor good reason- everyone gets 5 years of transition time.
I'd be just\nas happy to not even commit the change to make -f- go to stdout in the\nback-branches, if I didn't feel that the behavior of it going to a file\ncalled ./- was really just an outright bug in the first place.\n\nThanks,\n\nStephen", "msg_date": "Tue, 5 Nov 2019 10:00:35 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On 2019-Nov-05, Tom Lane wrote:\n\n> Sure, because there wasn't any practical way to provide a transition\n> period. I think that case is entirely not comparable to this one,\n> either as to whether a transition period is possible, or as to whether\n> the benefits of the change merit forced breakage.\n\nWe're not forcing anyone into upgrading. Older versions continue to\nwork, and many people still use those. People who already upgraded\nand needed a cross-version scriptable mechanism can already use\n\"-f/dev/stdout\" as Justin documented in this thread's OP. People\nupgrading after next week release set can use \"-f-\". People not\nupgrading soon can keep their scripts for a while yet.\n\nI think this teapot doesn't need the tempest, and nobody's drowning in\nit anyway.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 Nov 2019 12:07:39 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "On Tue, Nov 5, 2019 at 10:07 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I think this teapot doesn't need the tempest, and nobody's drowning in\n> it anyway.\n\nYeah, I think we're getting awfully worked up over not much. If I had\nbeen reviewing this feature initially, I believe I would have voted\nfor making -f- go to stdout first, and requiring it only in a later\nrelease. But what's done is done. 
I don't see this as being such an\nemergency as to justify whacking around the back-branches or reverting\nalready-release features. We could easily cause more damage by jerking\nthe behavior around than was caused by the original decision.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 6 Nov 2019 11:14:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Nov 5, 2019 at 10:07 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> I think this teapot doesn't need the tempest, and nobody's drowning in\n>> it anyway.\n\n> Yeah, I think we're getting awfully worked up over not much.\n\nSeems like that's getting to be the consensus opinion. Let's leave\nthings as they stand. I'm happy that we back-patched the ability\nto use \"-f -\", and I think that'll probably be enough to satisfy\nanyone who's really unhappy with the state of affairs as of 12.0.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Nov 2019 11:21:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12 and pg_restore -f-" } ]
[ { "msg_contents": "One cannot currently add partitioned tables to a publication.\n\ncreate table p (a int, b int) partition by hash (a);\ncreate table p1 partition of p for values with (modulus 3, remainder 0);\ncreate table p2 partition of p for values with (modulus 3, remainder 1);\ncreate table p3 partition of p for values with (modulus 3, remainder 2);\n\ncreate publication publish_p for table p;\nERROR: \"p\" is a partitioned table\nDETAIL: Adding partitioned tables to publications is not supported.\nHINT: You can add the table partitions individually.\n\nOne can do this instead:\n\ncreate publication publish_p1 for table p1;\ncreate publication publish_p2 for table p2;\ncreate publication publish_p3 for table p3;\n\nbut maybe that's too much code to maintain for users.\n\nI propose that we make this command:\n\ncreate publication publish_p for table p;\n\nautomatically add all the partitions to the publication. Also, any\nfuture partitions should also be automatically added to the\npublication. So, publishing a partitioned table automatically\npublishes all of its existing and future partitions. Attached patch\nimplements that.\n\nWhat doesn't change with this patch is that the partitions on the\nsubscription side still have to match one-to-one with the partitions\non the publication side, because the changes are still replicated as\nbeing made to the individual partitions, not as the changes to the\nroot partitioned table. 
It might be useful to implement that\nfunctionality on the publication side, because it allows users to\ndefine the replication target any way they need to, but this patch\ndoesn't implement that.\n\nThanks,\nAmit", "msg_date": "Mon, 7 Oct 2019 09:55:23 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "adding partitioned tables to publications" }, { "msg_contents": "On Mon, Oct 7, 2019 at 9:55 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> One cannot currently add partitioned tables to a publication.\n>\n> create table p (a int, b int) partition by hash (a);\n> create table p1 partition of p for values with (modulus 3, remainder 0);\n> create table p2 partition of p for values with (modulus 3, remainder 1);\n> create table p3 partition of p for values with (modulus 3, remainder 2);\n>\n> create publication publish_p for table p;\n> ERROR: \"p\" is a partitioned table\n> DETAIL: Adding partitioned tables to publications is not supported.\n> HINT: You can add the table partitions individually.\n>\n> One can do this instead:\n>\n> create publication publish_p1 for table p1;\n> create publication publish_p2 for table p2;\n> create publication publish_p3 for table p3;\n>\n> but maybe that's too much code to maintain for users.\n>\n> I propose that we make this command:\n>\n> create publication publish_p for table p;\n>\n> automatically add all the partitions to the publication. Also, any\n> future partitions should also be automatically added to the\n> publication. So, publishing a partitioned table automatically\n> publishes all of its existing and future partitions. Attached patch\n> implements that.\n>\n> What doesn't change with this patch is that the partitions on the\n> subscription side still have to match one-to-one with the partitions\n> on the publication side, because the changes are still replicated as\n> being made to the individual partitions, not as the changes to the\n> root partitioned table. 
It might be useful to implement that\n> functionality on the publication side, because it allows users to\n> define the replication target any way they need to, but this patch\n> doesn't implement that.\n\nAdded this to the next CF: https://commitfest.postgresql.org/25/2301/\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 10 Oct 2019 15:28:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Thu, 10 Oct 2019 at 08:29, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> On Mon, Oct 7, 2019 at 9:55 AM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > One cannot currently add partitioned tables to a publication.\n> >\n> > create table p (a int, b int) partition by hash (a);\n> > create table p1 partition of p for values with (modulus 3, remainder 0);\n> > create table p2 partition of p for values with (modulus 3, remainder 1);\n> > create table p3 partition of p for values with (modulus 3, remainder 2);\n> >\n> > create publication publish_p for table p;\n> > ERROR: \"p\" is a partitioned table\n> > DETAIL: Adding partitioned tables to publications is not supported.\n> > HINT: You can add the table partitions individually.\n> >\n> > One can do this instead:\n> >\n> > create publication publish_p1 for table p1;\n> > create publication publish_p2 for table p2;\n> > create publication publish_p3 for table p3;\n> >\n> > but maybe that's too much code to maintain for users.\n> >\n> > I propose that we make this command:\n> >\n> > create publication publish_p for table p;\n> >\n> > automatically add all the partitions to the publication. Also, any\n> > future partitions should also be automatically added to the\n> > publication. So, publishing a partitioned table automatically\n> > publishes all of its existing and future partitions. 
Attached patch\n> > implements that.\n> >\n> > What doesn't change with this patch is that the partitions on the\n> > subscription side still have to match one-to-one with the partitions\n> > on the publication side, because the changes are still replicated as\n> > being made to the individual partitions, not as the changes to the\n> > root partitioned table. It might be useful to implement that\n> > functionality on the publication side, because it allows users to\n> > define the replication target any way they need to, but this patch\n> > doesn't implement that.\n>\n> Added this to the next CF: https://commitfest.postgresql.org/25/2301/\n>\n> Hi Amit,\n\nLately I was exploring logical replication feature of postgresql and I\nfound this addition in the scope of feature for partitioned tables a useful\none.\n\nIn order to understand the working of your patch a bit more, I performed an\nexperiment wherein I created a partitioned table with several children and\na default partition at the publisher side and normal tables of the same\nname as parent, children, and default partition of the publisher side at\nthe subscriber side. Next I established the logical replication connection\nand to my surprise the data was successfully replicated from partitioned\ntables to normal tables and then this error filled the logs,\nLOG: logical replication table synchronization worker for subscription\n\"my_subscription\", table \"parent\" has started\nERROR: table \"public.parent\" not found on publisher\n\nhere parent is the name of the partitioned table at the publisher side and\nit is present as normal table at subscriber side as well. Which is\nunderstandable, it is trying to find a normal table of the same name but\ncouldn't find one, maybe it should not worry about that now also if not at\nreplication time.\n\nPlease let me know if this is something expected because in my opinion this\nis not desirable, there should be some check to check the table type for\nreplication. 
This wasn't important till now maybe because only normal\ntables were to be replicated, but with the extension of the scope of\nlogical replication to more objects such checks would be helpful.\n\nOn a separate note was thinking for partitioned tables, wouldn't it be\ncleaner to have something like you create only partition table at the\nsubscriber and then when logical replication starts it creates the child\ntables accordingly. Or would that be too much in future...?\n\n-- \nRegards,\nRafia Sabih", "msg_date": "Thu, 10 Oct 2019 15:13:13 +0200", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hello Rafia,\n\nGreat to hear that you are interested in this feature and thanks for\ntesting the patch.\n\nOn Thu, Oct 10, 2019 at 10:13 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> Lately I was exploring logical replication feature of postgresql and I found this addition in the scope of feature for partitioned tables a useful one.\n>\n> In order to understand the working of your patch a bit more, I performed an experiment wherein I created a partitioned table with several children and a default partition at the publisher side and normal tables of the same name as parent, children, and default partition of the publisher side at the subscriber side. Next I established the logical replication connection and to my surprise the data was successfully replicated from partitioned tables to normal tables and then this error filled the logs,\n> LOG: logical replication table synchronization worker for subscription \"my_subscription\", table \"parent\" has started\n> ERROR: table \"public.parent\" not found on publisher\n>\n> here parent is the name of the partitioned table at the publisher side and it is present as normal table at subscriber side as well.
Which is understandable, it is trying to find a normal table of the same name but couldn't find one, maybe it should not worry about that now also if not at replication time.\n>\n> Please let me know if this is something expected because in my opinion this is not desirable, there should be some check to check the table type for replication. This wasn't important till now maybe because only normal tables were to be replicated, but with the extension of the scope of logical replication to more objects such checks would be helpful.\n\nThanks for sharing this case. I hadn't considered it, but you're\nright that it should be handled sensibly. I have fixed table sync\ncode to handle this case properly. Could you please check your case\nwith the attached updated patch?\n\n> On a separate note was thinking for partitioned tables, wouldn't it be cleaner to have something like you create only partition table at the subscriber and then when logical replication starts it creates the child tables accordingly. Or would that be too much in future...?\n\nHmm, we'd first need to build the \"automatic partition creation\"\nfeature to consider doing something like that. I'm sure you'd agree\nthat we should undertake that project separately from this tiny\nlogical replication usability improvement project.
:)\n\nThanks again.\n\nRegards,\nAmit", "msg_date": "Fri, 11 Oct 2019 15:05:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi,\n\nOn 07/10/2019 02:55, Amit Langote wrote:\n> One cannot currently add partitioned tables to a publication.\n> \n> create table p (a int, b int) partition by hash (a);\n> create table p1 partition of p for values with (modulus 3, remainder 0);\n> create table p2 partition of p for values with (modulus 3, remainder 1);\n> create table p3 partition of p for values with (modulus 3, remainder 2);\n> \n> create publication publish_p for table p;\n> ERROR: \"p\" is a partitioned table\n> DETAIL: Adding partitioned tables to publications is not supported.\n> HINT: You can add the
It might be useful to implement that\n> functionality on the publication side, because it allows users to\n> define the replication target any way they need to, but this patch\n> doesn't implement that.\n> \n\nYeah for that to work subscription would need to also need to be able to \nwrite to partitioned tables, so it needs both sides to add support for \nthis. I think if we do both what you did and the transparent handling of \nroot only, we'll need new keyword to differentiate the two. It might \nmake sense to think about if we want your way to need an extra keyword \nor the transparent one will need it.\n\nOne issue that I see reading the patch is following set of commands:\n\nCREATE TABLE foo ...;\nCREATE PUBLICATION mypub FOR TABLE foo;\n\nCREATE TABLE bar ...;\nALTER PUBLICATION mypub ADD TABLE bar;\n\nALTER TABLE foo ATTACH PARTITION bar ...;\nALTER TABLE foo DETACH PARTITION bar ...;\n\nThis will end up with bar not being in any publication even though it \nwas explicitly added. 
That might be acceptable caveat but it at least \nshould be clearly documented (IMHO with warning).\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Sat, 12 Oct 2019 22:01:55 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Mon, Oct 07, 2019 at 09:55:23AM +0900, Amit Langote wrote:\n> One cannot currently add partitioned tables to a publication.\n> \n> create table p (a int, b int) partition by hash (a);\n> create table p1 partition of p for values with (modulus 3, remainder 0);\n> create table p2 partition of p for values with (modulus 3, remainder 1);\n> create table p3 partition of p for values with (modulus 3, remainder 2);\n> \n> create publication publish_p for table p;\n> ERROR: \"p\" is a partitioned table\n> DETAIL: Adding partitioned tables to publications is not supported.\n> HINT: You can add the table partitions individually.\n> \n> One can do this instead:\n> \n> create publication publish_p1 for table p1;\n> create publication publish_p2 for table p2;\n> create publication publish_p3 for table p3;\n> \n> but maybe that's too much code to maintain for users.\n> \n> I propose that we make this command:\n> \n> create publication publish_p for table p;\n> \n> automatically add all the partitions to the publication. Also, any\n> future partitions should also be automatically added to the\n> publication. So, publishing a partitioned table automatically\n> publishes all of its existing and future partitions. Attached patch\n> implements that.\n> \n> What doesn't change with this patch is that the partitions on the\n> subscription side still have to match one-to-one with the partitions\n> on the publication side, because the changes are still replicated as\n> being made to the individual partitions, not as the changes to the\n> root partitioned table. 
It might be useful to implement that\n> functionality on the publication side, because it allows users to\n> define the replication target any way they need to, but this patch\n> doesn't implement that.\n\nWith this patch, is it possible to remove a partition manually from a\nsubscription, or will it just get automatically re-added at some\npoint?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sun, 13 Oct 2019 09:55:17 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi David,\n\nOn Sun, Oct 13, 2019 at 4:55 PM David Fetter <david@fetter.org> wrote:\n> On Mon, Oct 07, 2019 at 09:55:23AM +0900, Amit Langote wrote:\n> > I propose that we make this command:\n> >\n> > create publication publish_p for table p;\n> >\n> > automatically add all the partitions to the publication. Also, any\n> > future partitions should also be automatically added to the\n> > publication. So, publishing a partitioned table automatically\n> > publishes all of its existing and future partitions. Attached patch\n> > implements that.\n> >\n> > What doesn't change with this patch is that the partitions on the\n> > subscription side still have to match one-to-one with the partitions\n> > on the publication side, because the changes are still replicated as\n> > being made to the individual partitions, not as the changes to the\n> > root partitioned table. 
It might be useful to implement that\n> > functionality on the publication side, because it allows users to\n> > define the replication target any way they need to, but this patch\n> > doesn't implement that.\n>\n> With this patch, is it possible to remove a partition manually from a\n> subscription, or will it just get automatically re-added at some\n> point?\n\nHmm, I don't think there is any way (commands) to manually remove\ntables from a subscription. Testing shows that if you drop a table on\nthe subscription server that is currently being fed data via a\nsubscription, then a subscription worker will complain and quit if it\nreceives a row targeting the dropped table and workers that are\nsubsequently started will do the same thing. Interestingly, this\nbehavior prevents replication for any other tables in the subscription\nfrom proceeding, which seems unfortunate.\n\nIf you were asking if the patch extends the subscription side\nfunctionality to re-add needed partitions that were manually removed\nlikely by accident, then no.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 21 Oct 2019 15:43:22 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi Petr,\n\nThanks for your comments.\n\nOn Sun, Oct 13, 2019 at 5:01 AM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> On 07/10/2019 02:55, Amit Langote wrote:\n> > One cannot currently add partitioned tables to a publication.\n> >\n> > create table p (a int, b int) partition by hash (a);\n> > create table p1 partition of p for values with (modulus 3, remainder 0);\n> > create table p2 partition of p for values with (modulus 3, remainder 1);\n> > create table p3 partition of p for values with (modulus 3, remainder 2);\n> >\n> > create publication publish_p for table p;\n> > ERROR: \"p\" is a partitioned table\n> > DETAIL: Adding partitioned tables to publications is not supported.\n> > HINT: You can add the 
table partitions individually.\n> >\n> > One can do this instead:\n> >\n> > create publication publish_p1 for table p1;\n> > create publication publish_p2 for table p2;\n> > create publication publish_p3 for table p3;\n>\n> Or just create publication publish_p for table p1, p2, p3;\n\nYep, facepalm! :)\n\nSo, one doesn't really need as many publication objects as there are\npartitions as my version suggests, which is good. Although, as you\ncan tell, a user would still manually need to keep the set of\npublished partitions up to date, for example when new partitions are\nadded.\n\n> > but maybe that's too much code to maintain for users.\n> >\n> > I propose that we make this command:\n> >\n> > create publication publish_p for table p;\n> >\n>\n> +1\n>\n> > automatically add all the partitions to the publication. Also, any\n> > future partitions should also be automatically added to the\n> > publication. So, publishing a partitioned table automatically\n> > publishes all of its existing and future partitions. Attached patch\n> > implements that.\n> >\n> > What doesn't change with this patch is that the partitions on the\n> > subscription side still have to match one-to-one with the partitions\n> > on the publication side, because the changes are still replicated as\n> > being made to the individual partitions, not as the changes to the\n> > root partitioned table. It might be useful to implement that\n> > functionality on the publication side, because it allows users to\n> > define the replication target any way they need to, but this patch\n> > doesn't implement that.\n> >\n>\n> Yeah, for that to work the subscription side would also need to be able to\n> write to partitioned tables, so it needs both sides to add support for\n> this.\n\nAh, I didn't know that the subscription code doesn't out-of-the-box\nsupport tuple routing. 
Indeed, we will need to fix that.\n\n> I think if we do both what you did and the transparent handling of\n> root only, we'll need new keyword to differentiate the two. It might\n> make sense to think about if we want your way to need an extra keyword\n> or the transparent one will need it.\n\nI didn't think about that but maybe you are right.\n\n> One issue that I see reading the patch is following set of commands:\n>\n> CREATE TABLE foo ...;\n> CREATE PUBLICATION mypub FOR TABLE foo;\n>\n> CREATE TABLE bar ...;\n> ALTER PUBLICATION mypub ADD TABLE bar;\n>\n> ALTER TABLE foo ATTACH PARTITION bar ...;\n> ALTER TABLE foo DETACH PARTITION bar ...;\n>\n> This will end up with bar not being in any publication even though it\n> was explicitly added.\n\nI tested and bar continues to be in the publication with above steps:\n\ncreate table foo (a int) partition by list (a);\ncreate publication mypub for table foo;\ncreate table bar (a int);\nalter publication mypub add table bar;\n\\d bar\n Table \"public.bar\"\n Column │ Type │ Collation │ Nullable │ Default\n────────┼─────────┼───────────┼──────────┼─────────\n a │ integer │ │ │\nPublications:\n \"mypub\"\n\nalter table foo attach partition bar for values in (1);\n\\d bar\n Table \"public.bar\"\n Column │ Type │ Collation │ Nullable │ Default\n────────┼─────────┼───────────┼──────────┼─────────\n a │ integer │ │ │\nPartition of: foo FOR VALUES IN (1)\nPublications:\n \"mypub\"\n\n-- can't now drop bar from mypub (its membership is no longer standalone)\nalter publication mypub drop table bar;\nERROR: cannot drop partition \"bar\" from an inherited publication\nHINT: Drop the parent from publication instead.\n\nalter table foo detach partition bar;\n\n-- bar is still in mypub (now a standalone member)\n\\d bar\n Table \"public.bar\"\n Column │ Type │ Collation │ Nullable │ Default\n────────┼─────────┼───────────┼──────────┼─────────\n a │ integer │ │ │\nPublications:\n \"mypub\"\n\n-- ok to drop now from mypub\nalter 
publication mypub drop table bar;\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 21 Oct 2019 16:08:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "This patch seems excessively complicated to me. Why don't you just add \nthe actual partitioned table to pg_publication_rel and then expand the \npartition hierarchy in pgoutput (get_rel_sync_entry() or \nGetRelationPublications() or somewhere around there). Then you don't \nneed to do any work in table DDL to keep the list of published tables up \nto date.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 4 Nov 2019 12:00:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi Amit,\n\nOn Fri, 11 Oct 2019 at 08:06, Amit Langote <amitlangote09@gmail.com> wrote:\n\n>\n> Thanks for sharing this case. I hadn't considered it, but you're\n> right that it should be handled sensibly. I have fixed table sync\n> code to handle this case properly. Could you please check your case\n> with the attached updated patch?\n>\n> I was checking this today and found that the behavior doesn't change much\nwith the updated patch. The tables are still replicated, just that a select\ncount from parent table shows 0, rest of the partitions including default\none has the data from the publisher. 
I was expecting more like an error at the\nsubscriber saying the table type is not the same.\n\nPlease find the attached file for the test case, in case something is\nunclear.\n\n-- \nRegards,\nRafia Sabih", "msg_date": "Mon, 4 Nov 2019 16:41:40 +0100", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Sorry about the delay.\n\nOn Mon, Nov 4, 2019 at 8:00 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> This patch seems excessively complicated to me. Why don't you just add\n> the actual partitioned table to pg_publication_rel and then expand the\n> partition hierarchy in pgoutput (get_rel_sync_entry() or\n> GetRelationPublications() or somewhere around there). Then you don't\n> need to do any work in table DDL to keep the list of published tables up\n> to date.\n\nI tend to agree that having to manage this at the DDL level would be\nbug-prone, not to mention pretty complicated code to implement it.\n\nI have tried to implement it the way you suggested. So every decoded\nchange to a leaf partition will now be published not only via its own\npublication but also via publications of its ancestors if any. That is\nirrespective of whether a row is directly inserted into the leaf\npartition or got routed to it via an insert done on an ancestor. In this\nimplementation, the only pg_publication_rel entry is the one\ncorresponding to the partitioned table.\n\nOn the subscription side, when creating pg_subscription_rel entries,\nfor a publication containing a partitioned table, all of its\npartitions too must be fetched as being included in the publication.\nThat is necessary, because the initial syncing copy and subsequently\nreceived changes must be applied to individual partitions. That could\nbe changed in the future by publishing leaf partition changes as\nchanges to the actually published partitioned table. 
That future\nimplementation will also hopefully take care of the concern that Rafia\nmentioned on this thread that even with this patch, one must make sure\nthat tables match one-to-one when they're in publish-subscribe\nrelationship, which actually needs us to bake in low-level details\nlike table's relkind in the protocol exchanges.\n\nAnyway, I've attached two patches -- 0001 is a refactoring patch. 0002\nimplements the feature.\n\nThanks,\nAmit", "msg_date": "Fri, 8 Nov 2019 13:27:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hello Rafia,\n\nOn Tue, Nov 5, 2019 at 12:41 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> On Fri, 11 Oct 2019 at 08:06, Amit Langote <amitlangote09@gmail.com> wrote:\n>> Thanks for sharing this case. I hadn't considered it, but you're\n>> right that it should be handled sensibly. I have fixed table sync\n>> code to handle this case properly. Could you please check your case\n>> with the attached updated patch?\n>>\n> I was checking this today and found that the behavior doesn't change much with the updated patch. The tables are still replicated, just that a select count from parent table shows 0, rest of the partitions including default one has the data from the publisher. I was expecting more like an error at subscriber saying the table type is not same.\n>\n> Please find the attached file for the test case, in case something is unclear.\n\nThanks for the test case.\n\nWith the latest patch I posted, you'll get the following error on subscriber:\n\ncreate subscription mysub connection 'host=localhost port=5432\ndbname=postgres' publication mypub;\nERROR: cannot use relation \"public.t\" as logical replication target\nDETAIL: \"public.t\" is a regular table on subscription side whereas a\npartitioned table on publication side\n\nAlthough to be honest, I'd rather not see the error. 
As I mentioned\nin my email earlier, it'd be nice to be able to sync a partitioned table\nand a regular table (or vice versa) via replication.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 8 Nov 2019 14:31:35 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Fri, Nov 8, 2019 at 1:27 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Anyway, I've attached two patches -- 0001 is a refactoring patch. 0002\n> implements the feature.\n\n0002 didn't contain necessary pg_dump changes, which is fixed in the\nattached new version.\n\nThanks,\nAmit", "msg_date": "Mon, 11 Nov 2019 16:59:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2019-11-11 08:59, Amit Langote wrote:\n> On Fri, Nov 8, 2019 at 1:27 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Anyway, I've attached two patches -- 0001 is a refactoring patch. 0002\n>> implements the feature.\n> \n> 0002 didn't contain necessary pg_dump changes, which is fixed in the\n> attached new version.\n\nThat looks more pleasant.\n\nI don't understand why you go through great lengths to ensure that the \nrelkinds match between publisher and subscriber. We already ensure that \nonly regular tables are published and only regular tables are allowed as \nsubscription target. In the future, we may want to allow further \ncombinations. 
What situation are you trying to address here?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 11 Nov 2019 13:48:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Mon, Nov 11, 2019 at 9:49 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-11-11 08:59, Amit Langote wrote:\n> > On Fri, Nov 8, 2019 at 1:27 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> Anyway, I've attached two patches -- 0001 is a refactoring patch. 0002\n> >> implements the feature.\n> >\n> > 0002 didn't contain necessary pg_dump changes, which is fixed in the\n> > attached new version.\n>\n> That looks more pleasant.\n\nThanks for looking.\n\n> I don't understand why you go through great lengths to ensure that the\n> relkinds match between publisher and subscriber. We already ensure that\n> only regular tables are published and only regular tables are allowed as\n> subscription target. In the future, we may want to allow further\n> combinations. What situation are you trying to address here?\n\nI'd really want to see the requirement for relkinds to have to match\ngo away, but as you can see, this patch doesn't modify enough of\npgoutput.c and worker.c to make that possible. Both the code for the\ninitial syncing and that for the subsequent real-time replication\nassume that both source and target are regular tables. So even if\npartitioned tables can now be in a publication, they're never sent in\nthe protocol messages, only their leaf partitions are. Initial\nsyncing code can be easily modified to support any combination of\nsource and target relations, but changes needed for real-time\nreplication seem non-trivial. 
Do you think we should do that before\nwe can say partitioned tables support logical replication?\n\nThanks,\nAmit\n\n\n", "msg_date": "Tue, 12 Nov 2019 10:11:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Tue, Nov 12, 2019 at 10:11 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Initial\n> syncing code can be easily modified to support any combination of\n> source and target relations, but changes needed for real-time\n> replication seem non-trivial.\n\nI have spent some time hacking on this. With the attached updated\npatch, adding a partitioned table to publication results in publishing\nthe inserts, updates, deletes of the table's leaf partitions as\ninserts, updates, deletes of the table itself (it all happens inside\npgoutput). So, the replication target table doesn't necessarily have\nto be a partitioned table and even if it is partitioned its partitions\ndon't have to match one-to-one.\n\nOne restriction remains though: partitioned tables on a subscriber\ncan't accept updates and deletes, because we'd need to map those to\nupdates and deletes of their partitions, including handling a tuple\npossibly moving from one partition to another during an update.\n\nAlso, I haven't added subscription tests yet.\n\nAttached updated patch. The previous division into a refactoring\npatch and feature patch no longer made sense to me, so there is\nonly one this time.\n\nThanks,\nAmit", "msg_date": "Mon, 18 Nov 2019 17:53:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2019-11-18 09:53, Amit Langote wrote:\n> I have spent some time hacking on this. 
With the attached updated\n> patch, adding a partitioned table to publication results in publishing\n> the inserts, updates, deletes of the table's leaf partitions as\n> inserts, updates, deletes of the table itself (it all happens inside\n> pgoutput). So, the replication target table doesn't necessarily have\n> to be a partitioned table and even if it is partitioned its partitions\n> don't have to match one-to-one.\n> \n> One restriction remains though: partitioned tables on a subscriber\n> can't accept updates and deletes, because we'd need to map those to\n> updates and deletes of their partitions, including handling a tuple\n> possibly moving from one partition to another during an update.\n\nRight. Without that second part, the first part isn't really that \nuseful yet, is it?\n\nI'm not sure what your intent with this patch is now. I thought the \nprevious behavior -- add a partitioned table to a publication and its \nleaf tables appear in the replication output -- was pretty welcome. Do \nwe not want that anymore?\n\nThere should probably be an option to pick the behavior, like we do in \npg_dump.\n\nWhat happens when you add a leaf table directly to a publication? Is it \nreplicated under its own identity or under its ancestor partitioned \ntable? (What if both the leaf table and a partitioned table are \npublication members?)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 20 Nov 2019 08:55:39 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2019-11-12 02:11, Amit Langote wrote:\n>> I don't understand why you go through great lengths to ensure that the\n>> relkinds match between publisher and subscriber. 
We already ensure that\n>> only regular tables are published and only regular tables are allowed as\n>> subscription target. In the future, we may want to allow further\n>> combinations. What situation are you trying to address here?\n> I'd really want to see the requirement for relkinds to have to match\n> go away, but as you can see, this patch doesn't modify enough of\n> pgoutput.c and worker.c to make that possible. Both the code for the\n> initial syncing and that for the subsequent real-time replication\n> assume that both source and target are regular tables. So even if\n> partitioned tables can now be in a publication, they're never sent in\n> the protocol messages, only their leaf partitions are. Initial\n> syncing code can be easily modified to support any combination of\n> source and target relations, but changes needed for real-time\n> replication seem non-trivial. Do you think we should do that before\n> we can say partitioned tables support logical replication?\n\nMy question was more simply why you have this check:\n\n+ /*\n+ * Cannot replicate from a regular to a partitioned table or vice\n+ * versa.\n+ */\n+ if (local_relkind != pt->relkind)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"cannot use relation \\\"%s.%s\\\" as logical \nreplication target\",\n+ rv->schemaname, rv->relname),\n\nIt doesn't seem necessary. What happens if you remove it?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 20 Nov 2019 08:58:04 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi Peter,\n\nOn Wed, Nov 20, 2019 at 4:55 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-11-18 09:53, Amit Langote wrote:\n> > I have spent some time hacking on this. 
With the attached updated\n> > patch, adding a partitioned table to publication results in publishing\n> > the inserts, updates, deletes of the table's leaf partitions as\n> > inserts, updates, deletes of the table itself (it all happens inside\n> > pgoutput). So, the replication target table doesn't necessarily have\n> > to be a partitioned table and even if it is partitioned its partitions\n> > don't have to match one-to-one.\n> >\n> > One restriction remains though: partitioned tables on a subscriber\n> > can't accept updates and deletes, because we'd need to map those to\n> > updates and deletes of their partitions, including handling a tuple\n> > possibly moving from one partition to another during an update.\n>\n> Right. Without that second part, the first part isn't really that\n> useful yet, is it?\n\nI would say yes.\n\n> I'm not sure what your intent with this patch is now. I thought the\n> previous behavior -- add a partitioned table to a publication and its\n> leaf tables appear in the replication output -- was pretty welcome. Do\n> we not want that anymore?\n\nHmm, I thought it would be more desirable to not expose a published\npartitioned table's leaf partitions to a subscriber, because it allows\nthe target table to be defined more flexibly.\n\n> There should probably be an option to pick the behavior, like we do in\n> pg_dump.\n\nI don't understand which existing behavior. Can you clarify?\n\nRegarding allowing users to choose between publishing partitioned\ntable changes using the leaf tables' schema vs. using the root table's\nown schema, I tend to agree that there would be value in that. Users\nwho choose the former will have to ensure that target leaf partitions\nmatch exactly. Users who want flexibility in how the target table is\ndefined can use the latter.\n\n> What happens when you add a leaf table directly to a publication? Is it\n> replicated under its own identity or under its ancestor partitioned\n> table? 
(What if both the leaf table and a partitioned table are\n> publication members?)\n\nIf both a leaf partition and an ancestor belong to the same\npublication, then leaf partition changes are replicated using the\nancestor's schema. For a leaf partition to be replicated using its\nown schema it must be published via a separate publication that\ndoesn't contain the ancestor. At least that's what the current patch\ndoes.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 22 Nov 2019 15:28:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2019-11-22 07:28, Amit Langote wrote:\n> Hmm, I thought it would be more desirable to not expose a published\n> partitioned table's leaf partitions to a subscriber, because it allows\n> the target table to be defined more flexibly.\n\nThere are multiple different variants that we probably eventually want \nto support. But I think there is value in exposing the partition \nstructure to the subscriber. Most notably, it allows the subscriber to \nrun the initial table sync per partition rather than in one big chunk -- \nwhich ultimately reflects one of the reasons partitioning exists.\n\nThe other way, exposing only the partitioned table, is also useful, \nespecially if you want to partition differently on the subscriber. But \nwithout the ability to target a partitioned table on the subscriber, \nthis would right now only allow you to replicate a partitioned table \ninto a non-partitioned table. Which is valid but probably not often useful.\n\n>> What happens when you add a leaf table directly to a publication? Is it\n>> replicated under its own identity or under its ancestor partitioned\n>> table? 
(What if both the leaf table and a partitioned table are\n>> publication members?)\n> \n> If both a leaf partition and an ancestor belong to the same\n> publication, then leaf partition changes are replicated using the\n> ancestor's schema. For a leaf partition to be replicated using its\n> own schema it must be published via a separate publication that\n> doesn't contain the ancestor. At least that's what the current patch\n> does.\n\nHmm, that seems confusing. This would mean that if you add a \npartitioned table to a publication that already contains leaf tables, \nthe publication behavior of the leaf tables would change. So again, I \nthink this alternative behavior of publishing partitions under the name \nof their root table should be an explicit option on a publication, and \nthen it should be ensured somehow that individual partitions are not \nadded to the publication in confusing ways.\n\nSo, it's up to you which aspect of this you want to tackle, but I \nthought your original goal of being able to add partitioned tables to \npublications and have that implicitly expand to all member partitions on \nthe publication side seemed quite useful, self-contained, and \nuncontroversial.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 22 Nov 2019 11:46:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Fri, Nov 22, 2019 at 4:16 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-11-22 07:28, Amit Langote wrote:\n>\n> >> What happens when you add a leaf table directly to a publication? Is it\n> >> replicated under its own identity or under its ancestor partitioned\n> >> table? 
(What if both the leaf table and a partitioned table are\n> >> publication members?)\n> >\n> > If both a leaf partition and an ancestor belong to the same\n> > publication, then leaf partition changes are replicated using the\n> > ancestor's schema. For a leaf partition to be replicated using its\n> > own schema it must be published via a separate publication that\n> > doesn't contain the ancestor. At least that's what the current patch\n> > does.\n>\n> Hmm, that seems confusing. This would mean that if you add a\n> partitioned table to a publication that already contains leaf tables,\n> the publication behavior of the leaf tables would change. So again, I\n> think this alternative behavior of publishing partitions under the name\n> of their root table should be an explicit option on a publication, and\n> then it should be ensured somehow that individual partitions are not\n> added to the publication in confusing ways.\n>\n\nYeah, it can probably detect and throw an error for such cases.\n\n> So, it's up to you which aspect of this you want to tackle, but I\n> thought your original goal of being able to add partitioned tables to\n> publications and have that implicitly expand to all member partitions on\n> the publication side seemed quite useful, self-contained, and\n> uncontroversial.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Nov 2019 16:34:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Fri, Nov 22, 2019 at 7:46 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-11-22 07:28, Amit Langote wrote:\n> > Hmm, I thought it would be more desirable to not expose a published\n> > partitioned table's leaf partitions to a subscriber, because it allows\n> > the target table to be defined more flexibly.\n>\n> There are multiple different variants that we 
probably eventually want\n> to support. But I think there is value in exposing the partition\n> structure to the subscriber. Most notably, it allows the subscriber to\n> run the initial table sync per partition rather than in one big chunk --\n> which ultimately reflects one of the reasons partitioning exists.\n\nI agree that replicating leaf-to-leaf has the least overhead.\n\n> The other way, exposing only the partitioned table, is also useful,\n> especially if you want to partition differently on the subscriber. But\n> without the ability to target a partitioned table on the subscriber,\n> this would right now only allow you to replicate a partitioned table\n> into a non-partitioned table. Which is valid but probably not often useful.\n\nHandling non-partitioned target tables was the main reason for me to\nmake publishing using the root parent's schema the default behavior.\nBut given that replicating from partitioned tables into\nnon-partitioned ones would be rare, I agree to replicating using the\nleaf schema by default.\n\n> >> What happens when you add a leaf table directly to a publication? Is it\n> >> replicated under its own identity or under its ancestor partitioned\n> >> table? (What if both the leaf table and a partitioned table are\n> >> publication members?)\n> >\n> > If both a leaf partition and an ancestor belong to the same\n> > publication, then leaf partition changes are replicated using the\n> > ancestor's schema. For a leaf partition to be replicated using its\n> > own schema it must be published via a separate publication that\n> > doesn't contain the ancestor. At least that's what the current patch\n> > does.\n>\n> Hmm, that seems confusing. This would mean that if you add a\n> partitioned table to a publication that already contains leaf tables,\n> the publication behavior of the leaf tables would change. 
So again, I\n> think this alternative behavior of publishing partitions under the name\n> of their root table should be an explicit option on a publication, and\n> then it should be ensured somehow that individual partitions are not\n> added to the publication in confusing ways.\n>\n> So, it's up to you which aspect of this you want to tackle, but I\n> thought your original goal of being able to add partitioned tables to\n> publications and have that implicitly expand to all member partitions on\n> the publication side seemed quite useful, self-contained, and\n> uncontroversial.\n\nOK, let's make whether to publish with root or leaf schema an option,\nwith the latter being the default. I will see about updating the\npatch that way.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 25 Nov 2019 18:37:32 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Mon, Nov 25, 2019 at 6:37 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> OK, let's make whether to publish with root or leaf schema an option,\n> with the latter being the default. I will see about updating the\n> patch that way.\n\nHere are the updated patches.\n\n0001: Adding a partitioned table to a publication implicitly adds all\nits partitions. The receiving side must have tables matching the\npublished partitions, which is typically the case, because the same\npartition tree is defined on both nodes.\n\n0002: Add a new Boolean publication parameter\n'publish_using_root_schema'. If true, a partitioned table's\npartitions are not exposed to the subscriber, that is, changes to its\npartitions are published as the table's own. This allows replicating\npartitioned table changes to a non-partitioned table (seldom useful)\nor to a partitioned table that has a different set of partitions than\non the publisher (a reasonable use case). 
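To make the intended use of 0002 concrete, here is a sketch (object names like p, p1, p2 and pub_p are illustrative; the parameter name is the one proposed in this patch set):

```sql
-- Publisher: a partitioned table with two hash partitions.
create table p (a int, b int) partition by hash (a);
create table p1 partition of p for values with (modulus 2, remainder 0);
create table p2 partition of p for values with (modulus 2, remainder 1);

-- Publish changes to p1 and p2 as if they were changes to p itself.
create publication pub_p for table p
    with (publish_using_root_schema = true);
```

With that, the subscription side only needs a table named p; it can be partitioned differently from the publisher's p, or not partitioned at all.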
This patch only adds the\nparameter and doesn't implement any of that behavior.\n\n0003: A refactoring patch for worker.c to make handling partitioned\ntables as targets of logical replication commands a bit easier.\n\n0004: This implements the 'publish_using_root_schema = true' behavior\ndescribed above. (An unintended benefit of making partitioned tables\nan accepted relation type in worker.c is that it allows partitions on\nthe subscriber to be sub-partitioned even if they are not on the\npublisher, that is, when replicating partition-to-partition!)\n\nThanks,\nAmit", "msg_date": "Fri, 6 Dec 2019 16:48:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2019-12-06 08:48, Amit Langote wrote:\n> 0001: Adding a partitioned table to a publication implicitly adds all\n> its partitions. The receiving side must have tables matching the\n> published partitions, which is typically the case, because the same\n> partition tree is defined on both nodes.\n\nThis looks pretty good to me now. But you need to make all the changed \nqueries version-aware so that you can still replicate from and to older \nversions. (For example, pg_partition_tree is not very old.)\n\nThis part looks a bit fishy:\n\n+ /*\n+ * If either table is partitioned, skip copying. Individual \npartitions\n+ * will be copied instead.\n+ */\n+ if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE ||\n+ remote_relkind == RELKIND_PARTITIONED_TABLE)\n+ {\n+ logicalrep_rel_close(relmapentry, NoLock);\n+ return;\n+ }\n\nI don't think you want to filter out a partitioned table on the local \nside, since (a) COPY can handle that, and (b) it's (as of this patch) an \nerror to have a partitioned table in the subscription table set.\n\nI'm not a fan of the new ValidateSubscriptionRel() function. It's too \nobscure, especially the return value. 
Doesn't seem worth it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Dec 2019 16:48:25 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Thanks for checking.\n\nOn Thu, Dec 12, 2019 at 12:48 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-12-06 08:48, Amit Langote wrote:\n> > 0001: Adding a partitioned table to a publication implicitly adds all\n> > its partitions. The receiving side must have tables matching the\n> > published partitions, which is typically the case, because the same\n> > partition tree is defined on both nodes.\n>\n> This looks pretty good to me now. But you need to make all the changed\n> queries version-aware so that you can still replicate from and to older\n> versions. (For example, pg_partition_tree is not very old.)\n\nTrue, fixed that.\n\n> This part looks a bit fishy:\n>\n> + /*\n> + * If either table is partitioned, skip copying. Individual\n> partitions\n> + * will be copied instead.\n> + */\n> + if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE ||\n> + remote_relkind == RELKIND_PARTITIONED_TABLE)\n> + {\n> + logicalrep_rel_close(relmapentry, NoLock);\n> + return;\n> + }\n>\n> I don't think you want to filter out a partitioned table on the local\n> side, since (a) COPY can handle that, and (b) it's (as of this patch) an\n> error to have a partitioned table in the subscription table set.\n\nYeah, (b) is true, so copy_table() should only ever see regular tables\nwith only patch 0001 applied.\n\n> I'm not a fan of the new ValidateSubscriptionRel() function. It's too\n> obscure, especially the return value. 
Doesn't seem worth it.\n\nIt went through many variants since I first introduced it, but yeah I\nagree we don't need it if only because of the weird interface.\n\nIt occurred to me that, *as of 0001*, we should indeed disallow\nreplicating from a regular table on publisher node into a partitioned\ntable of the same name on subscriber node (as the earlier patches\ndid), because 0001 doesn't implement tuple routing support that would\nbe needed to apply such changes.\n\nAttached updated patches.\n\nThanks,\nAmit", "msg_date": "Mon, 16 Dec 2019 18:19:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Mon, Dec 16, 2019 at 2:50 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Thanks for checking.\n>\n> On Thu, Dec 12, 2019 at 12:48 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > On 2019-12-06 08:48, Amit Langote wrote:\n> > > 0001: Adding a partitioned table to a publication implicitly adds all\n> > > its partitions. The receiving side must have tables matching the\n> > > published partitions, which is typically the case, because the same\n> > > partition tree is defined on both nodes.\n> >\n> > This looks pretty good to me now. But you need to make all the changed\n> > queries version-aware so that you can still replicate from and to older\n> > versions. (For example, pg_partition_tree is not very old.)\n>\n> True, fixed that.\n>\n> > This part looks a bit fishy:\n> >\n> > + /*\n> > + * If either table is partitioned, skip copying. 
Individual\n> > partitions\n> > + * will be copied instead.\n> > + */\n> > + if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE ||\n> > + remote_relkind == RELKIND_PARTITIONED_TABLE)\n> > + {\n> > + logicalrep_rel_close(relmapentry, NoLock);\n> > + return;\n> > + }\n> >\n> > I don't think you want to filter out a partitioned table on the local\n> > side, since (a) COPY can handle that, and (b) it's (as of this patch) an\n> > error to have a partitioned table in the subscription table set.\n>\n> Yeah, (b) is true, so copy_table() should only ever see regular tables\n> with only patch 0001 applied.\n>\n> > I'm not a fan of the new ValidateSubscriptionRel() function. It's too\n> > obscure, especially the return value. Doesn't seem worth it.\n>\n> It went through many variants since I first introduced it, but yeah I\n> agree we don't need it if only because of the weird interface.\n>\n> It occurred to me that, *as of 0001*, we should indeed disallow\n> replicating from a regular table on publisher node into a partitioned\n> table of the same name on subscriber node (as the earlier patches\n> did), because 0001 doesn't implement tuple routing support that would\n> be needed to apply such changes.\n>\n> Attached updated patches.\n>\nI am planning to review this patch. Currently, it is not applying on\nthe head so can you rebase it?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Jan 2020 14:11:28 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi Amit,\n\nI went through this patch set once again today and here are my two cents.\n\nOn Mon, 16 Dec 2019 at 10:19, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Attached updated patches.\n- differently partitioned setup. 
Attempts to replicate tables other than\n- base tables will result in an error.\n+ Replication is only supported by regular and partitioned tables, although\n+ the table kind must match between the two servers, that is, one cannot\n\nI find the phrase 'table kind' a bit odd, how about something like\ntype of the table.\n\n/* Only plain tables can be aded to publications. */\n- if (tbinfo->relkind != RELKIND_RELATION)\n+ /* Only plain and partitioned tables can be added to publications. */\nIMHO using regular instead of plain would be more consistent.\n\n+ /*\n+ * Find a partition for the tuple contained in remoteslot.\n+ *\n+ * For insert, remoteslot is tuple to insert. For update and delete, it\n+ * is the tuple to be replaced and deleted, repectively.\n+ */\nMisspelled 'respectively'.\n\n+static void\n+apply_handle_tuple_routing(ResultRelInfo *relinfo,\n+ LogicalRepRelMapEntry *relmapentry,\n+ EState *estate, CmdType operation,\n+ TupleTableSlot *remoteslot,\n+ LogicalRepTupleData *newtup)\n+{\n+ Relation rel = relinfo->ri_RelationDesc;\n+ ModifyTableState *mtstate = NULL;\n+ PartitionTupleRouting *proute = NULL;\n+ ResultRelInfo *partrelinfo,\n+ *partrelinfo1;\n\nIMHO, partrelinfo1 can be better named to improve readability.\n\nOtherwise, as Dilip already mentioned, there is a rebase required\nparticularly for 0003 and 0004.\n\n-- \nRegards,\nRafia Sabih\n\n\n", "msg_date": "Mon, 6 Jan 2020 12:25:32 +0100", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Mon, Jan 6, 2020 at 8:25 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> Hi Amit,\n>\n> I went through this patch set once again today and here are my two cents.\n\nThanks Rafia.\n\nRebased and updated to address your comments.\n\nRegards,\nAmit", "msg_date": "Tue, 7 Jan 2020 14:01:49 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding 
partitioned tables to publications" }, { "msg_contents": "On Tue, 7 Jan 2020 at 06:02, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Rebased and updated to address your comments.\n>\n+ <para>\n+ Partitioned tables are not considered when <literal>FOR ALL TABLES</literal>\n+ is specified.\n+ </para>\n+\nWhat is the reason for above, I mean not for the comment but not\nincluding partitioned tables in for all tables options.\n\n\n-- \nRegards,\nRafia Sabih\n\n\n", "msg_date": "Tue, 7 Jan 2020 15:18:27 +0100", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-01-07 06:01, Amit Langote wrote:\n> On Mon, Jan 6, 2020 at 8:25 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n>> Hi Amit,\n>>\n>> I went through this patch set once again today and here are my two cents.\n> \n> Thanks Rafia.\n> \n> Rebased and updated to address your comments.\n\nLooking through 0001, I think perhaps there is a better way to structure \nsome of the API changes.\n\nInstead of passing the root_target_rel to CheckValidResultRel() and \nCheckCmdReplicaIdentity(), which we only need to check the publication \nactions of the root table, how about changing \nGetRelationPublicationActions() to automatically include the publication \ninformation of the root table. Then we have that information in the \nrelcache once and don't need to check the base table and the partition \nroot separately at each call site (of which there is only one right \nnow). (Would that work correctly with relcache invalidation?)\n\nSimilarly, couldn't GetRelationPublications() just automatically take \npartitioning into account? We don't need the separation between \nGetRelationPublications() and GetRelationAncestorPublications(). 
This \nwould also avoid errors of omission, for example the \nGetRelationPublications() call in ATPrepChangePersistence() doesn't take \nGetRelationAncestorPublications() into account.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Jan 2020 11:54:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-01-07 15:18, Rafia Sabih wrote:\n> On Tue, 7 Jan 2020 at 06:02, Amit Langote <amitlangote09@gmail.com> wrote:\n> \n>> Rebased and updated to address your comments.\n>>\n> + <para>\n> + Partitioned tables are not considered when <literal>FOR ALL TABLES</literal>\n> + is specified.\n> + </para>\n> +\n> What is the reason for above, I mean not for the comment but not\n> including partitioned tables in for all tables options.\n\nThis comment is kind of a noop, because the leaf partitions are already \nincluded in FOR ALL TABLES, so whether partitioned tables are considered \nincluded in FOR ALL TABLES is irrelevant. 
I suggest removing the \ncomment to avoid any confusion.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Jan 2020 11:57:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Wed, Jan 8, 2020 at 7:57 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-01-07 15:18, Rafia Sabih wrote:\n> > On Tue, 7 Jan 2020 at 06:02, Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> >> Rebased and updated to address your comments.\n> >>\n> > + <para>\n> > + Partitioned tables are not considered when <literal>FOR ALL TABLES</literal>\n> > + is specified.\n> > + </para>\n> > +\n> > What is the reason for above, I mean not for the comment but not\n> > including partitioned tables in for all tables options.\n>\n> This comment is kind of a noop, because the leaf partitions are already\n> included in FOR ALL TABLES, so whether partitioned tables are considered\n> included in FOR ALL TABLES is irrelevant. I suggest removing the\n> comment to avoid any confusion.\n\nI agree. 
I had written that comment considering the other feature\nwhere the changes are published as root table's, but even in that case\nit'd be wrong to do what it says -- partitioned tables *should* be\nincluded in that case.\n\nI will fix the patches accordingly.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 9 Jan 2020 19:26:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi Peter\n,\nThanks for the review and sorry it took me a while to get back.\n\nOn Wed, Jan 8, 2020 at 7:54 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Looking through 0001, I think perhaps there is a better way to structure\n> some of the API changes.\n>\n> Instead of passing the root_target_rel to CheckValidResultRel() and\n> CheckCmdReplicaIdentity(), which we only need to check the publication\n> actions of the root table, how about changing\n> GetRelationPublicationActions() to automatically include the publication\n> information of the root table. Then we have that information in the\n> relcache once and don't need to check the base table and the partition\n> root separately at each call site (of which there is only one right\n> now). (Would that work correctly with relcache invalidation?)\n>\n> Similarly, couldn't GetRelationPublications() just automatically take\n> partitioning into account? We don't need the separation between\n> GetRelationPublications() and GetRelationAncestorPublications(). 
This\n> would also avoid errors of omission, for example the\n> GetRelationPublications() call in ATPrepChangePersistence() doesn't take\n> GetRelationAncestorPublications() into account.\n\nI have addressed these comments in the attached updated patch.\n\nOther than that, the updated patch contains following significant changes:\n\n* Changed pg_publication.c: GetPublicationRelations() so that any\npublished partitioned tables are expanded as needed\n\n* Since the pg_publication_tables view is backed by\nGetPublicationRelations(), that means subscriptioncmds.c:\nfetch_table_list() no longer needs to craft a query to include\npartitions when needed, because partitions are included at source.\nThat seems better, because it allows to limit the complexity\nsurrounding publication of partitioned tables to the publication side.\n\n* Fixed the publication table DDL to spot more cases of tables being\nadded to a publication in a duplicative manner. For example,\npartition being added to a publication which already contains its\nancestor and a partitioned tables being added to a publication\n(implying all of its partitions are added) which already contains a\npartition\n\nOnly attaching 0001. 
Will send the rest after polishing them a bit more.\n\nThanks,\nAmit", "msg_date": "Wed, 22 Jan 2020 14:38:06 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Wed, Jan 22, 2020 at 2:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Other than that, the updated patch contains following significant changes:\n>\n> * Changed pg_publication.c: GetPublicationRelations() so that any\n> published partitioned tables are expanded as needed\n>\n> * Since the pg_publication_tables view is backed by\n> GetPublicationRelations(), that means subscriptioncmds.c:\n> fetch_table_list() no longer needs to craft a query to include\n> partitions when needed, because partitions are included at source.\n> That seems better, because it allows to limit the complexity\n> surrounding publication of partitioned tables to the publication side.\n>\n> * Fixed the publication table DDL to spot more cases of tables being\n> added to a publication in a duplicative manner. For example,\n> partition being added to a publication which already contains its\n> ancestor and a partitioned tables being added to a publication\n> (implying all of its partitions are added) which already contains a\n> partition\n\nOn second thought, this seems like an overkill. It might be OK after\nall for both a partitioned table and its partitions to be explicitly\nadded to a publication without complaining of duplication. IOW, it's\nthe user's call whether it makes sense to do that or not.\n\n> Only attaching 0001.\n\nAttached updated 0001 considering the above and the rest of the\npatches that add support for replicating partitioned tables using\ntheir own identity and schema. 
I have reorganized the other patches\nas follows:\n\n0002: refactoring of logical/worker.c without any functionality\nchanges (contains much less churn than in earlier versions)\n\n0003: support logical replication into partitioned tables on the\nsubscription side (allows replicating from a non-partitioned table on\npublisher node into a partitioned table on subscriber node)\n\n0004: support optionally replicating partitioned table changes (and\nchanges directly made to partitions) using root partitioned table\nidentity and schema\n\nThanks,\nAmit", "msg_date": "Thu, 23 Jan 2020 19:10:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-01-23 11:10, Amit Langote wrote:\n> On Wed, Jan 22, 2020 at 2:38 PM Amit Langote<amitlangote09@gmail.com> wrote:\n>> Other than that, the updated patch contains following significant changes:\n>>\n>> * Changed pg_publication.c: GetPublicationRelations() so that any\n>> published partitioned tables are expanded as needed\n>>\n>> * Since the pg_publication_tables view is backed by\n>> GetPublicationRelations(), that means subscriptioncmds.c:\n>> fetch_table_list() no longer needs to craft a query to include\n>> partitions when needed, because partitions are included at source.\n>> That seems better, because it allows to limit the complexity\n>> surrounding publication of partitioned tables to the publication side.\n>>\n>> * Fixed the publication table DDL to spot more cases of tables being\n>> added to a publication in a duplicative manner. For example,\n>> partition being added to a publication which already contains its\n>> ancestor and a partitioned tables being added to a publication\n>> (implying all of its partitions are added) which already contains a\n>> partition\n> On second thought, this seems like an overkill. 
It might be OK after\n> all for both a partitioned table and its partitions to be explicitly\n> added to a publication without complaining of duplication. IOW, it's\n> the user's call whether it makes sense to do that or not.\n\nThis structure looks good now.\n\nHowever, it does seem unfortunate that in pg_get_publication_tables() we \nneed to postprocess the result of GetPublicationRelations(). Since \nwe're already changing the API of GetPublicationRelations(), couldn't we \nalso make it optionally not include partitioned tables?\n\nFor the test, perhaps add test cases where partitions are attached and \ndetached so that we can see whether their publication relcache \ninformation is properly updated. (I'm not doubting that it works, but \nit would be good to have a test for, in case of future restructuring.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 28 Jan 2020 10:11:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi Amit,\n\nOnce again I went through this patch set and here are my few comments,\n\nOn Thu, 23 Jan 2020 at 11:10, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Jan 22, 2020 at 2:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Other than that, the updated patch contains following significant changes:\n> >\n> > * Changed pg_publication.c: GetPublicationRelations() so that any\n> > published partitioned tables are expanded as needed\n> >\n> > * Since the pg_publication_tables view is backed by\n> > GetPublicationRelations(), that means subscriptioncmds.c:\n> > fetch_table_list() no longer needs to craft a query to include\n> > partitions when needed, because partitions are included at source.\n> > That seems better, because it allows to limit the complexity\n> > surrounding publication of 
partitioned tables to the publication side.\n> >\n> > * Fixed the publication table DDL to spot more cases of tables being\n> > added to a publication in a duplicative manner. For example,\n> > partition being added to a publication which already contains its\n> > ancestor and a partitioned tables being added to a publication\n> > (implying all of its partitions are added) which already contains a\n> > partition\n>\n> On second thought, this seems like an overkill. It might be OK after\n> all for both a partitioned table and its partitions to be explicitly\n> added to a publication without complaining of duplication. IOW, it's\n> the user's call whether it makes sense to do that or not.\n>\n> > Only attaching 0001.\n>\n> Attached updated 0001 considering the above and the rest of the\n> patches that add support for replicating partitioned tables using\n> their own identity and schema. I have reorganized the other patches\n> as follows:\n>\n> 0002: refactoring of logical/worker.c without any functionality\n> changes (contains much less churn than in earlier versions)\n>\n> 0003: support logical replication into partitioned tables on the\n> subscription side (allows replicating from a non-partitioned table on\n> publisher node into a partitioned table on subscriber node)\n>\n> 0004: support optionally replicating partitioned table changes (and\n> changes directly made to partitions) using root partitioned table\n> identity and schema\n\n+ cannot replicate from a regular table into a partitioned able or vice\nHere is a missing t from table.\n\n+ <para>\n+ When a partitioned table is added to a publication, all of its existing\n+ and future partitions are also implicitly considered to be part of the\n+ publication. 
So, even operations that are performed directly on a\n+ partition are also published via its ancestors' publications.\n\nNow this is confusing, does it mean that when partitions are later\nadded to the table they will be replicated too, I think not, because\nyou need to first create them manually at the replication side, isn't\nit...?\n\n+ /* Must be a regular or partitioned table */\n+ if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&\n+ RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"\\\"%s\\\" is not a table\",\n\nIMHO the error message and details should be modified here to\nsomething along the lines of 'is neither a regular or partitioned\ntable'\n\n+ * published via an ancestor and when a partitioned tables's partitions\ntables's --> tables'\n\n+ if (get_rel_relispartition(relid))\n+ {\n+ List *ancestors = get_partition_ancestors(relid);\n\nNow, this is just for my understanding, why the ancestors have to be a\nlist, I always assumed that a partition could only have one ancestor\n-- the root table. Is there something more to it that I am totally\nmissing here or is it to cover the scenario of having partitions of\npartitions.\n\nHere I also want to clarify one thing, does it also happen like if a\npartitioned table is dropped from a publication then all its\npartitions are also implicitly dropped? 
As far as my understanding\ngoes that doesn't happen, so shouldn't there be some notice about it.\n\n-GetPublicationRelations(Oid pubid)\n+GetPublicationRelations(Oid pubid, bool include_partitions)\n\nHow about having an enum here with INCLUDE_PARTITIONS,\nINCLUDE_PARTITIONED_REL, and SKIP_PARTITIONS to address the three\npossibilities and avoiding reiterating through the list in\npg_get_publication_tables().\n\n-- \nRegards,\nRafia Sabih\n\n\n", "msg_date": "Wed, 29 Jan 2020 07:55:25 +0100", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Tue, Jan 28, 2020 at 6:11 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> This structure looks good now.\n\nThanks for taking a look.\n\n> However, it does seem unfortunate that in pg_get_publication_tables() we\n> need to postprocess the result of GetPublicationRelations(). Since\n> we're already changing the API of GetPublicationRelations(), couldn't we\n> also make it optionally not include partitioned tables?\n\nHmm, okay. We really need GetPublicationRelations() to handle\npartitioned tables in 3 ways:\n\n1. Don't expand and return them as-is\n2. Expand and return only leaf partitions\n3. Expand and return all partitions\n\nI will try that in the new patch.\n\n> For the test, perhaps add test cases where partitions are attached and\n> detached so that we can see whether their publication relcache\n> information is properly updated. 
(I'm not doubting that it works, but\n> it would be good to have a test for, in case of future restructuring.)\n\nOkay, I will add some to publication.sql.\n\nWill send updated patches after addressing Rafia's comments.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 29 Jan 2020 16:29:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Thank Rafia for the review.\n\nOn Wed, Jan 29, 2020 at 3:55 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> On Thu, 23 Jan 2020 at 11:10, Amit Langote <amitlangote09@gmail.com> wrote:\n> > v10 patches\n> + cannot replicate from a regular table into a partitioned able or vice\n> Here is a missing t from table.\n\nOops, fixed.\n\n> + <para>\n> + When a partitioned table is added to a publication, all of its existing\n> + and future partitions are also implicitly considered to be part of the\n> + publication. So, even operations that are performed directly on a\n> + partition are also published via its ancestors' publications.\n>\n> Now this is confusing, does it mean that when partitions are later\n> added to the table they will be replicated too, I think not, because\n> you need to first create them manually at the replication side, isn't\n> it...?\n\nYes, it's upon the user to make sure that they have set up the\npartitions correctly on the subscriber. I don't see how that's very\ndifferent from what needs to be done when tables are added to a\npublication after-the-fact. 
Did I misunderstand you?\n\n> + /* Must be a regular or partitioned table */\n> + if (RelationGetForm(targetrel)->relkind != RELKIND_RELATION &&\n> + RelationGetForm(targetrel)->relkind != RELKIND_PARTITIONED_TABLE)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"\\\"%s\\\" is not a table\",\n>\n> IMHO the error message and details should be modified here to\n> something along the lines of 'is neither a regular or partitioned\n> table'\n\nHmm, this is simply following a convention that's used in most places\naround the code, although I'm not really a fan of these \"not a\n<whatever>\"-style messages to begin with. It's less ambiguous with a\n\"cannot perform <action> on <relkind>\"-style message, which some\nplaces already use.\n\nIn that view, I have changed the documentation too to say this:\n\n+ Replication is only supported by tables, partitioned or not, although a\n+ given table must either be partitioned on both servers or not partitioned\n+ at all. Also, when replicating between partitioned tables, the actual\n+ replication occurs between leaf partitions, so partitions on the two\n+ servers must match one-to-one.\n\nIn retrospect, the confusion surrounding how we communicate the\nvarious operations and properties that cannot be supported on a table\nif partitioned, both in the error messages and the documentation,\ncould have been avoided if it wasn't based on relkind. I guess it's\ntoo late now though. :(\n\n> + * published via an ancestor and when a partitioned tables's partitions\n> tables's --> tables'\n>\n> + if (get_rel_relispartition(relid))\n> + {\n> + List *ancestors = get_partition_ancestors(relid);\n>\n> Now, this is just for my understanding, why the ancestors have to be a\n> list, I always assumed that a partition could only have one ancestor\n> -- the root table. 
Is there something more to it that I am totally\n> missing here or is it to cover the scenario of having partitions of\n> partitions.\n\nYes, with multi-level partitioning.\n\n> Here I also want to clarify one thing, does it also happen like if a\n> partitioned table is dropped from a publication then all its\n> partitions are also implicitly dropped? As far as my understanding\n> goes that doesn't happen, so shouldn't there be some notice about it.\n\nActually, that is what happens, unless partitions were explicitly\nadded to the publication, in which case they will continue to be\npublished.\n\n> -GetPublicationRelations(Oid pubid)\n> +GetPublicationRelations(Oid pubid, bool include_partitions)\n>\n> How about having an enum here with INCLUDE_PARTITIONS,\n> INCLUDE_PARTITIONED_REL, and SKIP_PARTITIONS to address the three\n> possibilities and avoiding reiterating through the list in\n> pg_get_publication_tables().\n\nI have done something similar in the updated patch, as I mentioned in\nmy earlier reply.\n\nPlease check the updated patches.\n\nThanks,\nAmit", "msg_date": "Wed, 29 Jan 2020 17:39:10 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "I have committed the 0001 patch of this series (partitioned table member \nof publication). I changed the new argument of \nGetPublicationRelations() to an enum and reformatted some comments. 
\nI'll continue looking through the subsequent patches.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 09:52:29 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Tue, Mar 10, 2020 at 5:52 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I have committed the 0001 patch of this series (partitioned table member\n> of publication). I changed the new argument of\n> GetPublicationRelations() to an enum and reformatted some comments.\n> I'll continue looking through the subsequent patches.\n\nThank you.\n\n- Amit\n\n\n", "msg_date": "Tue, 10 Mar 2020 22:03:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "I was trying to extract some preparatory work from the remaining patches \nand came up with the attached. This is part of your patch 0003, but \nalso relevant for part 0004. 
The problem was that COPY (SELECT *) is \nnot sufficient when the table has generated columns, so we need to build \nthe column list explicitly.\n\nThoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 16 Mar 2020 13:49:26 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi Peter,\n\nOn Mon, Mar 16, 2020 at 9:49 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> I was trying to extract some preparatory work from the remaining patches\n> and came up with the attached. This is part of your patch 0003, but\n> also relevant for part 0004. The problem was that COPY (SELECT *) is\n> not sufficient when the table has generated columns, so we need to build\n> the column list explicitly.\n>\n> Thoughts?\n\nThank you for that.\n\n+ if (isnull || !remote_is_publishable)\n+ ereport(ERROR,\n+ (errmsg(\"table \\\"%s.%s\\\" on the publisher is not publishable\",\n+ nspname, relname)));\n\nMaybe add a one-line comment above this to say it's an \"not supposed\nto happen\" error or am I missing something? Wouldn't elog() suffice\nfor this?\n\nOther than that, looks good.\n\n--\nThank you,\nAmit", "msg_date": "Wed, 18 Mar 2020 12:06:41 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Wed, Mar 18, 2020 at 12:06 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Hi Peter,\n>\n> On Mon, Mar 16, 2020 at 9:49 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > I was trying to extract some preparatory work from the remaining patches\n> > and came up with the attached. This is part of your patch 0003, but\n> > also relevant for part 0004. The problem was that COPY (SELECT *) is\n> > not sufficient when the table has generated columns, so we need to build\n> > the column list explicitly.\n> >\n> > Thoughts?\n>\n> Thank you for that.\n>\n> + if (isnull || !remote_is_publishable)\n> + ereport(ERROR,\n> + (errmsg(\"table \\\"%s.%s\\\" on the publisher is not publishable\",\n> + nspname, relname)));\n>\n> Maybe add a one-line comment above this to say it's an \"not supposed\n> to happen\" error or am I missing something? Wouldn't elog() suffice\n> for this?\n>\n> Other than that, looks good.\n\nWait, the following Assert in copy_table() should now be gone:\n\n Assert(relmapentry->localrel->rd_rel->relkind == RELKIND_RELATION);\n\nbecause just below it:\n\n /* Start copy on the publisher. 
*/\n initStringInfo(&cmd);\n- appendStringInfo(&cmd, \"COPY %s TO STDOUT\",\n- quote_qualified_identifier(lrel.nspname, lrel.relname));\n+ if (lrel.relkind == RELKIND_RELATION)\n+ appendStringInfo(&cmd, \"COPY %s TO STDOUT\",\n+ quote_qualified_identifier(lrel.nspname,\nlrel.relname));\n+ else\n+ {\n+ /*\n+ * For non-tables, we need to do COPY (SELECT ...), but we can't just\n+ * do SELECT * because we need to not copy generated columns.\n+ */\n\nBy the way, I have rebased the patches, although maybe you've got your\nown copies; attached.\n\n-- \nThank you,\nAmit", "msg_date": "Wed, 18 Mar 2020 16:33:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-03-18 04:06, Amit Langote wrote:\n> + if (isnull || !remote_is_publishable)\n> + ereport(ERROR,\n> + (errmsg(\"table \\\"%s.%s\\\" on the publisher is not publishable\",\n> + nspname, relname)));\n> \n> Maybe add a one-line comment above this to say it's an \"not supposed\nto happen\" error or am I missing something? Wouldn't elog() suffice\nfor this?\n\nOn second thought, maybe we should just drop this check. The list of \ntables that is part of the publication was already filtered by the \npublisher, so this query doesn't need to check it again. 
We just need \nthe relkind to be able to construct the COPY command, but we don't need \nto second-guess it beyond that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 12:16:53 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Wed, Mar 18, 2020 at 8:16 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-03-18 04:06, Amit Langote wrote:\n> > + if (isnull || !remote_is_publishable)\n> > + ereport(ERROR,\n> > + (errmsg(\"table \\\"%s.%s\\\" on the publisher is not publishable\",\n> > + nspname, relname)));\n> >\n> > Maybe add a one-line comment above this to say it's an \"not supposed\n> > to happen\" error or am I missing something? Wouldn't elog() suffice\n> > for this?\n>\n> On second thought, maybe we should just drop this check. The list of\n> tables that is part of the publication was already filtered by the\n> publisher, so this query doesn't need to check it again. 
We just need\n> the relkind to be able to construct the COPY command, but we don't need\n> to second-guess it beyond that.\n\nAgreed.\n\n-- \nThank you,\nAmit\n\n\n", "msg_date": "Wed, 18 Mar 2020 23:19:01 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-03-18 15:19, Amit Langote wrote:\n> On Wed, Mar 18, 2020 at 8:16 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2020-03-18 04:06, Amit Langote wrote:\n>>> + if (isnull || !remote_is_publishable)\n>>> + ereport(ERROR,\n>>> + (errmsg(\"table \\\"%s.%s\\\" on the publisher is not publishable\",\n>>> + nspname, relname)));\n>>>\n>>> Maybe add a one-line comment above this to say it's an \"not supposed\n>>> to happen\" error or am I missing something? Wouldn't elog() suffice\n>>> for this?\n>>\n>> On second thought, maybe we should just drop this check. The list of\n>> tables that is part of the publication was already filtered by the\n>> publisher, so this query doesn't need to check it again. We just need\n>> the relkind to be able to construct the COPY command, but we don't need\n>> to second-guess it beyond that.\n> \n> Agreed.\n\nCommitted with that change then.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 09:05:32 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-03-18 08:33, Amit Langote wrote:\n> By the way, I have rebased the patches, although maybe you've got your\n> own copies; attached.\n\nLooking through 0002 and 0003 now.\n\nThe structure looks generally good.\n\nIn 0002, the naming of apply_handle_insert() vs. \napply_handle_do_insert() etc. seems a bit prone to confusion. 
How about \nsomething like apply_handle_insert_internal()? Also, should we put each \nof those internal functions next to their internal function instead of \nin a separate group like you have it?\n\nIn apply_handle_do_insert(), the argument localslot should probably be \nremoteslot.\n\nIn apply_handle_do_delete(), the ExecOpenIndices() call was moved to a \ndifferent location relative to the rest of the code. That was probably \nnot intended.\n\nIn 0003, you have /* TODO, use inverse lookup hashtable? */. Is this \nsomething you plan to address in this cycle, or is that more for future \ngenerations?\n\n0003 could use some more tests. The one test that you adjusted just \nensures the data goes somewhere instead of being rejected, but there are \nno tests that check whether it ends up in the right partition, whether \ncross-partition updates work etc.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 15:17:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Thu, Mar 19, 2020 at 11:18 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-03-18 08:33, Amit Langote wrote:\n> > By the way, I have rebased the patches, although maybe you've got your\n> > own copies; attached.\n>\n> Looking through 0002 and 0003 now.\n>\n> The structure looks generally good.\n\nThanks for the review.\n\n> In 0002, the naming of apply_handle_insert() vs.\n> apply_handle_do_insert() etc. seems a bit prone to confusion. How about\n> something like apply_handle_insert_internal()? 
Also, should we put each\n> of those internal functions next to their internal function instead of\n> in a separate group like you have it?\n\nSure.\n\n> In apply_handle_do_insert(), the argument localslot should probably be\n> remoteslot.\n\nYou're right, fixed.\n\n> In apply_handle_do_delete(), the ExecOpenIndices() call was moved to a\n> different location relative to the rest of the code. That was probably\n> not intended.\n\nFixed.\n\n> In 0003, you have /* TODO, use inverse lookup hashtable? */. Is this\n> something you plan to address in this cycle, or is that more for future\n> generations?\n\nSorry, this is simply a copy-paste from logicalrep_relmap_invalidate_cb().\n\n> 0003 could use some more tests. The one test that you adjusted just\n> ensures the data goes somewhere instead of being rejected, but there are\n> no tests that check whether it ends up in the right partition, whether\n> cross-partition updates work etc.\n\nOkay, added some tests.\n\nAttached updated patches.\n\n--\nThank you,\nAmit", "msg_date": "Mon, 23 Mar 2020 14:02:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-03-23 06:02, Amit Langote wrote:\n> Okay, added some tests.\n> \n> Attached updated patches.\n\nI have committed the worker.c refactoring patch.\n\n\"Add subscription support to replicate into partitioned tables\" still \nhas lacking test coverage. Your changes in relation.c are not exercised \nat all because the partitioned table branch in apply_handle_update() is \nnever taken. This is critical and tricky code, so I would look for \nsignificant testing.\n\nThe code looks okay to me. I would remove this code\n\n+ memset(entry->attrmap->attnums, -1,\n+ entry->attrmap->maplen * sizeof(AttrNumber));\n\nbecause the entries are explicitly filled right after anyway, and \nfilling the bytes with -1 has an unclear effect. 
There is also \nseemingly some fishiness in this code around whether attribute numbers \nare zero- or one-based. Perhaps this could be documented briefly. \nMaybe I'm misunderstanding something.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Mar 2020 13:29:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Wed, Mar 25, 2020 at 9:29 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-03-23 06:02, Amit Langote wrote:\n> > Okay, added some tests.\n> >\n> > Attached updated patches.\n>\n> I have committed the worker.c refactoring patch.\n>\n> \"Add subscription support to replicate into partitioned tables\" still\n> has lacking test coverage. Your changes in relation.c are not exercised\n> at all because the partitioned table branch in apply_handle_update() is\n> never taken. This is critical and tricky code, so I would look for\n> significant testing.\n\nWhile trying some tests around the code you mentioned, I found what\nlooks like a bug, which looking into now.\n\n> The code looks okay to me. I would remove this code\n>\n> + memset(entry->attrmap->attnums, -1,\n> + entry->attrmap->maplen * sizeof(AttrNumber));\n>\n> because the entries are explicitly filled right after anyway, and\n> filling the bytes with -1 has an unclear effect. There is also\n> seemingly some fishiness in this code around whether attribute numbers\n> are zero- or one-based. 
Perhaps this could be documented briefly.\n> Maybe I'm misunderstanding something.\n\nWill check and fix as necessary.\n\n--\nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 26 Mar 2020 23:23:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Thu, Mar 26, 2020 at 11:23 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Mar 25, 2020 at 9:29 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > On 2020-03-23 06:02, Amit Langote wrote:\n> > > Okay, added some tests.\n> > >\n> > > Attached updated patches.\n> >\n> > I have committed the worker.c refactoring patch.\n> >\n> > \"Add subscription support to replicate into partitioned tables\" still\n> > has lacking test coverage. Your changes in relation.c are not exercised\n> > at all because the partitioned table branch in apply_handle_update() is\n> > never taken. This is critical and tricky code, so I would look for\n> > significant testing.\n>\n> While trying some tests around the code you mentioned, I found what\n> looks like a bug, which looking into now.\n\nTurns out the code in apply_handle_tuple_routing() for the UPDATE\nmessage was somewhat bogus, which fixed in the updated version. I\nended up with anothing refactoring patch, which attached as 0001.\n\nIt appears to me that the tests now seem enough to cover\napply_handle_tuple_routing(), although more could still be added.\n\n> > The code looks okay to me. I would remove this code\n> >\n> > + memset(entry->attrmap->attnums, -1,\n> > + entry->attrmap->maplen * sizeof(AttrNumber));\n> >\n> > because the entries are explicitly filled right after anyway, and\n> > filling the bytes with -1 has an unclear effect. There is also\n> > seemingly some fishiness in this code around whether attribute numbers\n> > are zero- or one-based. 
Perhaps this could be documented briefly.\n> > Maybe I'm misunderstanding something.\n>\n> Will check and fix as necessary.\n\nRemoved that memset. I have added a comment about one- vs. zero-based\nindexes contained in the maps coming from two different modules, viz.\ntuple routing and logical replication, resp.\n\n-- \nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 27 Mar 2020 22:10:28 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "I have updated the comments in apply_handle_tuple_routing() (see 0002)\nto better explain what's going on with UPDATE handling. I also\nrearranged the tests a bit for clarity.\n\nAttached updated patches.\n\n-- \nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 31 Mar 2020 00:42:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-03-30 17:42, Amit Langote wrote:\n> I have updated the comments in apply_handle_tuple_routing() (see 0002)\n> to better explain what's going on with UPDATE handling. I also\n> rearranged the tests a bit for clarity.\n> \n> Attached updated patches.\n\nTest coverage for 0002 is still a bit lacking. Please do a coverage \nbuild yourself and get at least one test case to exercise every branch \nin apply_handle_tuple_routing(). Right now, I don't see any coverage \nfor updates without attribute remapping and updates that don't move to a \nnew partition.\n\nAlso, the coverage report reveals that in logicalrep_partmap_init(), the \npatch is mistakenly initializing LogicalRepRelMapContext instead of \nLogicalRepPartMapContext. 
(Hmm, how does it even work like that?)\n\nI think apart from some of these details, this patch is okay, but I \ndon't have deep experience in the partitioning code, I can just see that \nit looks like other code elsewhere. Perhaps someone with more knowledge \ncan give this a look as well.\n\nAbout patch 0003, I was talking to some people offline about the name of \nthe option. There was some confusion about using the term \"schema\". \nHow about naming it \"publish_via_partition_root\", which also matches the \nname of the analogous option in pg_dump.\n\nCode coverage here could also be improved. A lot of the new code in \npgoutput.c is not tested.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 2 Apr 2020 14:23:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi,\n\nOn 02/04/2020 14:23, Peter Eisentraut wrote:\n> On 2020-03-30 17:42, Amit Langote wrote:\n>> I have updated the comments in apply_handle_tuple_routing() (see 0002)\n>> to better explain what's going on with UPDATE handling.  I also\n>> rearranged the tests a bit for clarity.\n>>\n>> Attached updated patches.\n> > Also, the coverage report reveals that in logicalrep_partmap_init(), the\n> patch is mistakenly initializing LogicalRepRelMapContext instead of \n> LogicalRepPartMapContext.  (Hmm, how does it even work like that?)\n> \n\nIt works because it's just a MemoryContext and it's long lived. I wonder \nif the fix here is to simply remove the LogicalRepPartMapContext...\n\n> I think apart from some of these details, this patch is okay, but I \n> don't have deep experience in the partitioning code, I can just see that \n> it looks like other code elsewhere.  
Perhaps someone with more knowledge \n> can give this a look as well.\n> \n\nFWIW it looks okay to me as well from perspective of somebody who \nimplemented something similar outside of core.\n\n> About patch 0003, I was talking to some people offline about the name of \n> the option.  There was some confusion about using the term \"schema\". How \n> about naming it \"publish_via_partition_root\", which also matches the \n> name of the analogous option in pg_dump.\n> \n\n+1 (disclaimer: I was one of the people who discussed this offline)\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Fri, 3 Apr 2020 09:52:07 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Fri, Apr 3, 2020 at 4:52 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> On 02/04/2020 14:23, Peter Eisentraut wrote:\n> > On 2020-03-30 17:42, Amit Langote wrote:\n> >> I have updated the comments in apply_handle_tuple_routing() (see 0002)\n> >> to better explain what's going on with UPDATE handling. I also\n> >> rearranged the tests a bit for clarity.\n> >>\n> >> Attached updated patches.\n> > > Also, the coverage report reveals that in logicalrep_partmap_init(), the\n> > patch is mistakenly initializing LogicalRepRelMapContext instead of\n> > LogicalRepPartMapContext. (Hmm, how does it even work like that?)\n> >\n>\n> It works because it's just a MemoryContext and it's long lived. I wonder\n> if the fix here is to simply remove the LogicalRepPartMapContext...\n\nActually, there is no LogicalRepPartMapContext in the patches posted\nso far, but I have decided to add it in the updated patch. 
One\nadvantage beside avoiding confusion is that it might help to tell\nmemory consumed by the partitions apart from that consumed by the\nactual replication targets.\n\n> > I think apart from some of these details, this patch is okay, but I\n> > don't have deep experience in the partitioning code, I can just see that\n> > it looks like other code elsewhere. Perhaps someone with more knowledge\n> > can give this a look as well.\n> >\n>\n> FWIW it looks okay to me as well from perspective of somebody who\n> implemented something similar outside of core.\n\nThanks for giving it a look.\n\n> > About patch 0003, I was talking to some people offline about the name of\n> > the option. There was some confusion about using the term \"schema\". How\n> > about naming it \"publish_via_partition_root\", which also matches the\n> > name of the analogous option in pg_dump.\n> >\n>\n> +1 (disclaimer: I was one of the people who discussed this offline)\n\nOkay, I like that too.\n\nI am checking test coverage at the moment and should have the patches\nready by sometime later today.\n\n-- \nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 18:34:20 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Fri, Apr 3, 2020 at 6:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I am checking test coverage at the moment and should have the patches\n> ready by sometime later today.\n\nAttached updated patches.\n\nI confirmed using a coverage build that all the new code in\nlogical/worker.c due to 0002 is now covered. For some reason, coverage\nreport for pgoutput.c doesn't say the same thing for 0003's changes,\nalthough I doubt that result. It seems strange to believe that *none*\nof the new code is tested. 
I even checked by adding debugging elog()s\nnext to the lines that the coverage report says aren't exercised,\nwhich tell me that that's not true. Perhaps my coverage build is\nsomehow getting messed up, so it would be nice if someone with\nreliable coverage builds can confirm one way or the other. I will\ncontinue to check what's wrong.\n\nI fixed a couple of bugs in 0002. One of the bugs was that the\n\"partition map\" hash table in logical/relation.c didn't really work,\nso logicalrep_partition_would() always create a new entry.\n\nIn 0003, changed the publication parameter name to publish_via_partition_root.\n\n-- \nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 3 Apr 2020 23:25:02 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Hi,\n\nOn 03/04/2020 16:25, Amit Langote wrote:\n> On Fri, Apr 3, 2020 at 6:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> I am checking test coverage at the moment and should have the patches\n>> ready by sometime later today.\n> \n> Attached updated patches.\n> \n> I confirmed using a coverage build that all the new code in\n> logical/worker.c due to 0002 is now covered. For some reason, coverage\n> report for pgoutput.c doesn't say the same thing for 0003's changes,\n> although I doubt that result. It seems strange to believe that *none*\n> of the new code is tested. I even checked by adding debugging elog()s\n> next to the lines that the coverage report says aren't exercised,\n> which tell me that that's not true. Perhaps my coverage build is\n> somehow getting messed up, so it would be nice if someone with\n> reliable coverage builds can confirm one way or the other. I will\n> continue to check what's wrong.\n> \n\nAFAIK gcov can't handle multiple instances of same process being started \nas it just overwrites the coverage files. 
So for TAP test it will report \nbogus info (as in some code that's executed will look as not executed). \nWe'd probably have to do some kind of `GCOV_PREFIX` magic in the TAP \nframework and merge (gcov/lcov can do that AFAIK) the resulting files to \nget accurate coverage info. But that's beyond this patch IMHO.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Fri, 3 Apr 2020 16:43:17 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Petr Jelinek <petr@2ndquadrant.com> writes:\n> AFAIK gcov can't handle multiple instances of same process being started \n> as it just overwrites the coverage files. So for TAP test it will report \n> bogus info (as in some code that's executed will look as not executed). \n\nHm, really? I routinely run \"make check\" (ie, parallel regression\ntests) under coverage, and I get results that seem sane. If I were\nlosing large chunks of the data, I think I'd have noticed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Apr 2020 10:59:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 03/04/2020 16:59, Tom Lane wrote:\n> Petr Jelinek <petr@2ndquadrant.com> writes:\n>> AFAIK gcov can't handle multiple instances of same process being started\n>> as it just overwrites the coverage files. So for TAP test it will report\n>> bogus info (as in some code that's executed will look as not executed).\n> \n> Hm, really? I routinely run \"make check\" (ie, parallel regression\n> tests) under coverage, and I get results that seem sane. 
If I were\n> losing large chunks of the data, I think I'd have noticed.\n> \n\nParallel regression still just starts single postgres instance no?\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Fri, 3 Apr 2020 17:04:23 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Petr Jelinek <petr@2ndquadrant.com> writes:\n> On 03/04/2020 16:59, Tom Lane wrote:\n>> Petr Jelinek <petr@2ndquadrant.com> writes:\n>>> AFAIK gcov can't handle multiple instances of same process being started\n>>> as it just overwrites the coverage files. So for TAP test it will report\n>>> bogus info (as in some code that's executed will look as not executed).\n\n>> Hm, really? I routinely run \"make check\" (ie, parallel regression\n>> tests) under coverage, and I get results that seem sane. If I were\n>> losing large chunks of the data, I think I'd have noticed.\n\n> Parallel regression still just starts single postgres instance no?\n\nBut the forked-off children have to write the gcov files independently,\ndon't they?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Apr 2020 11:51:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 03/04/2020 17:51, Tom Lane wrote:\n> Petr Jelinek <petr@2ndquadrant.com> writes:\n>> On 03/04/2020 16:59, Tom Lane wrote:\n>>> Petr Jelinek <petr@2ndquadrant.com> writes:\n>>>> AFAIK gcov can't handle multiple instances of same process being started\n>>>> as it just overwrites the coverage files. So for TAP test it will report\n>>>> bogus info (as in some code that's executed will look as not executed).\n> \n>>> Hm, really? I routinely run \"make check\" (ie, parallel regression\n>>> tests) under coverage, and I get results that seem sane. 
If I were\n>>> losing large chunks of the data, I think I'd have noticed.\n> \n>> Parallel regression still just starts single postgres instance no?\n> \n> But the forked-off children have to write the gcov files independently,\n> don't they?\n> \n\nHmm that's very good point. I did see these missing coverage issue when \nrunning tests that explicitly start more instances of postgres before \nthough. And with some quick googling, parallel testing seems to be issue \nwith gcov for more people.\n\nI wonder if the program checksum that gcov calculates when merging the \n.gcda data while updating it is somehow different for separately started \ninstances but not for the ones forked from same parent or something. I \ndon't know internals of gcov well enough to say how exactly that works.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Sat, 4 Apr 2020 07:07:29 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Petr Jelinek <petr@2ndquadrant.com> writes:\n> On 03/04/2020 17:51, Tom Lane wrote:\n>> But the forked-off children have to write the gcov files independently,\n>> don't they?\n\n> Hmm that's very good point. I did see these missing coverage issue when \n> running tests that explicitly start more instances of postgres before \n> though. And with some quick googling, parallel testing seems to be issue \n> with gcov for more people.\n\nI poked around and found this:\n\nhttps://gcc.gnu.org/legacy-ml/gcc-help/2005-11/msg00074.html\n\nwhich says\n\n gcov instrumentation is multi-process safe, but not multi-thread\n safe. 
The multi-processing safety relies on OS level file locking,\n which is not available on some systems.\n\nThat would explain why it works for me, but then there's a question\nof why it doesn't work for you ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Apr 2020 01:25:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 04/04/2020 07:25, Tom Lane wrote:\n> Petr Jelinek <petr@2ndquadrant.com> writes:\n>> On 03/04/2020 17:51, Tom Lane wrote:\n>>> But the forked-off children have to write the gcov files independently,\n>>> don't they?\n> \n>> Hmm that's very good point. I did see these missing coverage issue when\n>> running tests that explicitly start more instances of postgres before\n>> though. And with some quick googling, parallel testing seems to be issue\n>> with gcov for more people.\n> \n> I poked around and found this:\n> \n> https://gcc.gnu.org/legacy-ml/gcc-help/2005-11/msg00074.html\n> \n> which says\n> \n> gcov instrumentation is multi-process safe, but not multi-thread\n> safe. The multi-processing safety relies on OS level file locking,\n> which is not available on some systems.\n> \n> That would explain why it works for me, but then there's a question\n> of why it doesn't work for you ...\n\nHmm, I wonder if it has something to do with docker then (I rarely run \nany tests directly on the main system nowadays). 
But that does not \nexplain why it does not work for Amit either.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Sat, 4 Apr 2020 10:56:55 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Sat, Apr 4, 2020 at 5:56 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> On 04/04/2020 07:25, Tom Lane wrote:\n> > Petr Jelinek <petr@2ndquadrant.com> writes:\n> >> On 03/04/2020 17:51, Tom Lane wrote:\n> >>> But the forked-off children have to write the gcov files independently,\n> >>> don't they?\n> >\n> >> Hmm that's very good point. I did see these missing coverage issue when\n> >> running tests that explicitly start more instances of postgres before\n> >> though. And with some quick googling, parallel testing seems to be issue\n> >> with gcov for more people.\n> >\n> > I poked around and found this:\n> >\n> > https://gcc.gnu.org/legacy-ml/gcc-help/2005-11/msg00074.html\n> >\n> > which says\n> >\n> > gcov instrumentation is multi-process safe, but not multi-thread\n> > safe. The multi-processing safety relies on OS level file locking,\n> > which is not available on some systems.\n> >\n> > That would explain why it works for me, but then there's a question\n> > of why it doesn't work for you ...\n>\n> Hmm, I wonder if it has something to do with docker then (I rarely run\n> any tests directly on the main system nowadays). But that does not\n> explain why it does not work for Amit either.\n\nOne thing to I must clarify: coverage for most of pgoutput.c looks\nokay on each run. 
I am concerned that the coverage for the code added\nby the patch is shown to be close to zero, which is a mystery to me,\nbecause I can confirm by other means such as debugging elogs() to next\nto the new code that the newly added tests do cover them.\n\n--\nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 4 Apr 2020 23:43:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> One thing to I must clarify: coverage for most of pgoutput.c looks\n> okay on each run. I am concerned that the coverage for the code added\n> by the patch is shown to be close to zero, which is a mystery to me,\n> because I can confirm by other means such as debugging elogs() to next\n> to the new code that the newly added tests do cover them.\n\nAccording to\n\nhttps://coverage.postgresql.org/src/backend/replication/pgoutput/index.html\n\nthe coverage is pretty good. Maybe you're doing something wrong\nin enabling coverage testing locally?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Apr 2020 11:22:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-04-03 16:25, Amit Langote wrote:\n> On Fri, Apr 3, 2020 at 6:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> I am checking test coverage at the moment and should have the patches\n>> ready by sometime later today.\n> \n> Attached updated patches.\n\nCommitted 0001 now. 
I'll work on the rest tomorrow.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 6 Apr 2020 15:25:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Mon, Apr 6, 2020 at 10:25 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-04-03 16:25, Amit Langote wrote:\n> > On Fri, Apr 3, 2020 at 6:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> I am checking test coverage at the moment and should have the patches\n> >> ready by sometime later today.\n> >\n> > Attached updated patches.\n>\n> Committed 0001 now. I'll work on the rest tomorrow.\n\nThank you. I have rebased the one remaining.\n\n-- \nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 7 Apr 2020 00:04:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Tue, Apr 7, 2020 at 12:04 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Mon, Apr 6, 2020 at 10:25 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > On 2020-04-03 16:25, Amit Langote wrote:\n> > > On Fri, Apr 3, 2020 at 6:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > >> I am checking test coverage at the moment and should have the patches\n> > >> ready by sometime later today.\n> > >\n> > > Attached updated patches.\n> >\n> > Committed 0001 now. I'll work on the rest tomorrow.\n>\n> Thank you. 
I have rebased the one remaining.\n\nI updated the patch to make the following changes:\n\n* Rewrote the tests to match in style with those committed yesterday\n* Renamed all variables that had pubasroot in it to have pubviaroot\ninstead to match the publication parameter\n* Updated pg_publication catalog documentation\n\n-- \nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 7 Apr 2020 15:44:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-04-07 08:44, Amit Langote wrote:\n> I updated the patch to make the following changes:\n> \n> * Rewrote the tests to match in style with those committed yesterday\n> * Renamed all variables that had pubasroot in it to have pubviaroot\n> instead to match the publication parameter\n> * Updated pg_publication catalog documentation\n\nThanks. I have some further questions:\n\nThe change in nodeModifyTable.c to add CheckValidResultRel() is unclear. \n It doesn't seem to do anything, and it's not clear how it's related to \nthis patch.\n\nThe changes in GetRelationPublications() are confusing to me:\n\n+ if (published_rels)\n+ {\n+ num = list_length(result);\n+ for (i = 0; i < num; i++)\n+ *published_rels = lappend_oid(*published_rels, relid);\n+ }\n\nThis adds relid to the output list \"num\" times, where num is the number \nof publications found. Shouldn't \"i\" be used in the loop somehow? \nSimilarly later in the function.\n\nThe descriptions of the new fields in RelationSyncEntry don't seem to \nmatch the code accurately, or at least it's confusing. \nreplicate_as_relid is always filled in with an ancestor, even if \npubviaroot is not set.\n\nI think the pubviaroot field is actually not necessary. 
We only need \nreplicate_as_relid.\n\nThere is a markup typo in logical-replication.sgml:\n\n <xref linkend==\"sql-createpublication\"/>\n\nIn pg_dump, you missed updating a branch for an older version. See \nattached patch.\n\nAlso attached a patch to rephrase the psql output a bit to make it not \nso long.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 7 Apr 2020 11:01:02 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "Thanks for the review.\n\nOn Tue, Apr 7, 2020 at 6:01 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-04-07 08:44, Amit Langote wrote:\n> > I updated the patch to make the following changes:\n> >\n> > * Rewrote the tests to match in style with those committed yesterday\n> > * Renamed all variables that had pubasroot in it to have pubviaroot\n> > instead to match the publication parameter\n> > * Updated pg_publication catalog documentation\n>\n> Thanks. I have some further questions:\n>\n> The change in nodeModifyTable.c to add CheckValidResultRel() is unclear.\n> It doesn't seem to do anything, and it's not clear how it's related to\n> this patch.\n\nCheckValidResultRel() checks that replica identity is present for\nreplicating given update/delete, which I think, it's better to perform\non the root table itself, rather than some partition that would be\naffected. The latter already occurs by way of CheckValidResultRel()\nbeing called on partitions to be updated. 
I think we get a more\nhelpful message if the root parent is flagged instead of a partition.\n\nupdate prt1 set b = b + 1 where a = 578;\nERROR: cannot update table \"prt1\" because it does not have a replica\nidentity and publishes updates\nHINT: To enable updating the table, set REPLICA IDENTITY using ALTER TABLE.\n\nvs.\n\n-- checking the partition\nupdate prt1 set b = b + 1 where a = 578;\nERROR: cannot update table \"prt1_p3\" because it does not have a\nreplica identity and publishes updates\nHINT: To enable updating the table, set REPLICA IDENTITY using ALTER TABLE.\n\nI am okay to get rid of the check on root table if flagging individual\npartitions seems good enough.\n\n> The changes in GetRelationPublications() are confusing to me:\n>\n> + if (published_rels)\n> + {\n> + num = list_length(result);\n> + for (i = 0; i < num; i++)\n> + *published_rels = lappend_oid(*published_rels, relid);\n> + }\n>\n> This adds relid to the output list \"num\" times, where num is the number\n> of publications found. Shouldn't \"i\" be used in the loop somehow?\n> Similarly later in the function.\n\npublished_rels contains an *OID* for each publication that will be in\nresult. Callers should iterate the two lists together and for each\npublication found in result, it will know which relation it is\nassociated with using the OID found in published_rels being scanned in\nparallel. If publishing through an ancestor's publication, we need to\nknow which ancestor, so the whole dance.\n\nI have thought this to be a bit ugly before, but after having to\nexplain it, I think it's better to use some other approach for this.\nI have updated the patch so that GetRelationPublications no longer\nconsiders a relation's ancestors. 
That way, it doesn't have to\nsecond-guess what other information will be needed by the caller.\n\nI hope that's clearer, because all the logic is in one place and that\nis get_rel_sync_entry().\n\n> The descriptions of the new fields in RelationSyncEntry don't seem to\n> match the code accurately, or at least it's confusing.\n> replicate_as_relid is always filled in with an ancestor, even if\n> pubviaroot is not set.\n\nGiven this confusion, I have changed how replicate_as_relid works so\nthat it's now always set -- if different from the relation's own OID,\nthe code for \"publishing via root\" kicks in in various places.\n\n> I think the pubviaroot field is actually not necessary. We only need\n> replicate_as_relid.\n\nLooking through the code, I agree. I guess I only kept it around to\ngo with pubupdate, etc.\n\nI guess it might also be a good idea to call it publish_as_relid\ninstead of replicate_as_relid for consistency.\n\n> There is a markup typo in logical-replication.sgml:\n>\n> <xref linkend==\"sql-createpublication\"/>\n\nOops, fixed.\n\n> In pg_dump, you missed updating a branch for an older version. 
See\n> attached patch.\n>\n> Also attached a patch to rephrase the psql output a bit to make it not\n> so long.\n\nThank you, merged.\n\nAttached updated patch with above changes.\n\n--\nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 8 Apr 2020 01:22:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Wed, Apr 8, 2020 at 1:22 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Apr 7, 2020 at 6:01 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > The descriptions of the new fields in RelationSyncEntry don't seem to\n> > match the code accurately, or at least it's confusing.\n> > replicate_as_relid is always filled in with an ancestor, even if\n> > pubviaroot is not set.\n>\n> Given this confusion, I have changed how replicate_as_relid works so\n> that it's now always set -- if different from the relation's own OID,\n> the code for \"publishing via root\" kicks in in various places.\n>\n> > I think the pubviaroot field is actually not necessary. We only need\n> > replicate_as_relid.\n>\n> Looking through the code, I agree. I guess I only kept it around to\n> go with pubupdate, etc.\n\nThink I broke truncate replication with this. 
Fixed in the attached\nupdated patch.\n\n--\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 8 Apr 2020 14:45:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-04-08 07:45, Amit Langote wrote:\n> On Wed, Apr 8, 2020 at 1:22 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Tue, Apr 7, 2020 at 6:01 PM Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>> The descriptions of the new fields in RelationSyncEntry don't seem to\n>>> match the code accurately, or at least it's confusing.\n>>> replicate_as_relid is always filled in with an ancestor, even if\n>>> pubviaroot is not set.\n>>\n>> Given this confusion, I have changed how replicate_as_relid works so\n>> that it's now always set -- if different from the relation's own OID,\n>> the code for \"publishing via root\" kicks in in various places.\n>>\n>>> I think the pubviaroot field is actually not necessary. We only need\n>>> replicate_as_relid.\n>>\n>> Looking through the code, I agree. I guess I only kept it around to\n>> go with pubupdate, etc.\n> \n> Think I broke truncate replication with this. Fixed in the attached\n> updated patch.\n\nAll committed.\n\nThank you and everyone very much for working on this. I'm very happy \nthat these two features from PG10 have finally met. :)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Apr 2020 11:26:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Wed, Apr 8, 2020 at 6:26 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> All committed.\n>\n> Thank you and everyone very much for working on this. 
I'm very happy\n> that these two features from PG10 have finally met. :)\n\nThanks a lot for reviewing and committing.\n\nprion seems to have failed:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-04-08%2009%3A53%3A13\n\nAlso, still unsure why the coverage report for pgoutput.c changes not good:\nhttps://coverage.postgresql.org/src/backend/replication/pgoutput/pgoutput.c.gcov.html\n\nWill check.\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 20:16:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-04-08 13:16, Amit Langote wrote:\n> On Wed, Apr 8, 2020 at 6:26 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> All committed.\n>>\n>> Thank you and everyone very much for working on this. I'm very happy\n>> that these two features from PG10 have finally met. :)\n> \n> Thanks a lot for reviewing and committing.\n> \n> prion seems to have failed:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-04-08%2009%3A53%3A13\n\nThis comes from -DRELCACHE_FORCE_RELEASE.\n\n> Also, still unsure why the coverage report for pgoutput.c changes not good:\n> https://coverage.postgresql.org/src/backend/replication/pgoutput/pgoutput.c.gcov.html\n\nI think this is because the END { } section in PostgresNode.pm shuts \ndown all running instances in immediate mode, which doesn't save \ncoverage properly.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Apr 2020 14:21:19 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Wed, Apr 8, 2020 at 9:21 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 
2020-04-08 13:16, Amit Langote wrote:\n> > On Wed, Apr 8, 2020 at 6:26 PM Peter Eisentraut\n> > <peter.eisentraut@2ndquadrant.com> wrote:\n> >> All committed.\n> >>\n> >> Thank you and everyone very much for working on this. I'm very happy\n> >> that these two features from PG10 have finally met. :)\n> >\n> > Thanks a lot for reviewing and committing.\n> >\n> > prion seems to have failed:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-04-08%2009%3A53%3A13\n>\n> This comes from -DRELCACHE_FORCE_RELEASE.\n\nI'm seeing some funny stuff on such a build locally too, although\nhaven't been able to make sense of it yet.\n\n> > Also, still unsure why the coverage report for pgoutput.c changes not good:\n> > https://coverage.postgresql.org/src/backend/replication/pgoutput/pgoutput.c.gcov.html\n>\n> I think this is because the END { } section in PostgresNode.pm shuts\n> down all running instances in immediate mode, which doesn't save\n> coverage properly.\n\nThanks for that tip. Appending the following at the end of the test\nfile has fixed the coverage reporting for me.\n\nI noticed the following coverage issues:\n\n1. The previous commit f1ac27bfd missed a command that I had included\nto cover the following blocks of apply_handle_tuple_routing():\n\n 1165 : else\n 1166 : {\n 1167 0 : remoteslot =\nExecCopySlot(remoteslot, remoteslot_part);\n 1168 0 : slot_getallattrs(remoteslot);\n 1169 : }\n...\n\n 1200 2 : if (map != NULL)\n 1201 : {\n 1202 0 : remoteslot_part =\nexecute_attr_map_slot(map->attrMap,\n 1203 :\n remoteslot,\n 1204 :\n remoteslot_part);\n 1205 : }\n\n2. 
Now that I am able to see proper coverage for\npublish_via_partition_root related changes, I can see that a block in\npgoutput_change() is missing coverage.\n\nThe attached fixes these coverage issues.\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 8 Apr 2020 23:07:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Wed, Apr 8, 2020 at 11:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Apr 8, 2020 at 9:21 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > I think this is because the END { } section in PostgresNode.pm shuts\n> > down all running instances in immediate mode, which doesn't save\n> > coverage properly.\n>\n> Thanks for that tip. Appending the following at the end of the test\n> file has fixed the coverage reporting for me.\n\nThe patch posted in the previous email has it, but I meant this by\n\"the following\":\n\n+\n+$node_publisher->stop('fast');\n+$node_subscriber1->stop('fast');\n+$node_subscriber2->stop('fast');\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 23:10:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Wed, Apr 8, 2020 at 11:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Apr 8, 2020 at 9:21 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > On 2020-04-08 13:16, Amit Langote wrote:\n> > > prion seems to have failed:\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-04-08%2009%3A53%3A13\n> >\n> > This comes from -DRELCACHE_FORCE_RELEASE.\n>\n> I'm seeing some funny stuff on such a build locally too, although\n> haven't been able to make sense of it yet.\n\nSo, I see the following repeated in the publisher's 
log\n(013_partition.pl) until PostgresNode.pm times out:\n\nsub_viaroot ERROR: number of columns (2601) exceeds limit (1664)\nsub_viaroot CONTEXT: slot \"sub_viaroot\", output plugin \"pgoutput\", in\nthe change callback, associated LSN 0/1621010\n\ncausing the tests introduced by this last commit to stall.\n\nJust before where the above starts repeating is this:\n\nsub_viaroot_16479_sync_16455 LOG: starting logical decoding for slot\n\"sub_viaroot_16479_sync_16455\"\nsub_viaroot_16479_sync_16455 DETAIL: Streaming transactions\ncommitting after 0/1620A40, reading WAL from 0/1620A08.\nsub_viaroot_16479_sync_16455 LOG: logical decoding found consistent\npoint at 0/1620A08\nsub_viaroot_16479_sync_16455 DETAIL: There are no running transactions.\nsub_viaroot_16479_sync_16470 LOG: statement: COPY public.tab3_1 TO STDOUT\nsub_viaroot_16479_sync_16470 LOG: statement: COMMIT\n\nSame thing for the other subscriber sub2.\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 12:39:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-04-09 05:39, Amit Langote wrote:\n> sub_viaroot ERROR: number of columns (2601) exceeds limit (1664)\n> sub_viaroot CONTEXT: slot \"sub_viaroot\", output plugin \"pgoutput\", in\n> the change callback, associated LSN 0/1621010\n\nI think the problem is that in maybe_send_schema(), \nRelationClose(ancestor) releases the relcache entry, but the tuple \ndescriptors, which are part of the relcache entry, are still pointed to \nby the tuple map.\n\nThis patch makes the tests pass for me:\n\ndiff --git a/src/backend/replication/pgoutput/pgoutput.c \nb/src/backend/replication/pgoutput/pgoutput.c\nindex 5fbf2d4367..cf6e8629c1 100644\n--- a/src/backend/replication/pgoutput/pgoutput.c\n+++ b/src/backend/replication/pgoutput/pgoutput.c\n@@ -305,7 +305,7 @@ 
maybe_send_schema(LogicalDecodingContext *ctx,\n\n /* Map must live as long as the session does. */\n oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n- relentry->map = convert_tuples_by_name(indesc, outdesc);\n+ relentry->map = \nconvert_tuples_by_name(CreateTupleDescCopy(indesc), \nCreateTupleDescCopy(outdesc));\n MemoryContextSwitchTo(oldctx);\n send_relation_and_attrs(ancestor, ctx);\n RelationClose(ancestor);\n\nPlease check.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 09:14:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Thu, Apr 9, 2020 at 4:14 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-04-09 05:39, Amit Langote wrote:\n> > sub_viaroot ERROR: number of columns (2601) exceeds limit (1664)\n> > sub_viaroot CONTEXT: slot \"sub_viaroot\", output plugin \"pgoutput\", in\n> > the change callback, associated LSN 0/1621010\n>\n> I think the problem is that in maybe_send_schema(),\n> RelationClose(ancestor) releases the relcache entry, but the tuple\n> descriptors, which are part of the relcache entry, are still pointed to\n> by the tuple map.\n>\n> This patch makes the tests pass for me:\n>\n> diff --git a/src/backend/replication/pgoutput/pgoutput.c\n> b/src/backend/replication/pgoutput/pgoutput.c\n> index 5fbf2d4367..cf6e8629c1 100644\n> --- a/src/backend/replication/pgoutput/pgoutput.c\n> +++ b/src/backend/replication/pgoutput/pgoutput.c\n> @@ -305,7 +305,7 @@ maybe_send_schema(LogicalDecodingContext *ctx,\n>\n> /* Map must live as long as the session does. 
*/\n> oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n> - relentry->map = convert_tuples_by_name(indesc, outdesc);\n> + relentry->map =\n> convert_tuples_by_name(CreateTupleDescCopy(indesc),\n> CreateTupleDescCopy(outdesc));\n> MemoryContextSwitchTo(oldctx);\n> send_relation_and_attrs(ancestor, ctx);\n> RelationClose(ancestor);\n>\n> Please check.\n\nThanks. Yes, that's what I just found out too and was about to send a\npatch, which is basically same as yours as far as the fix for this\nissue is concerned.\n\nWhile figuring this out, I thought the nearby code could be rearranged\na bit, especially to de-duplicate the code. Also, I think\nget_rel_sync_entry() may be a better place to set the map, rather than\nmaybe_send_schema(). Thoughts?\n\n-- \n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 9 Apr 2020 16:28:15 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On 2020-04-09 09:28, Amit Langote wrote:\n>> This patch makes the tests pass for me:\n>>\n>> diff --git a/src/backend/replication/pgoutput/pgoutput.c\n>> b/src/backend/replication/pgoutput/pgoutput.c\n>> index 5fbf2d4367..cf6e8629c1 100644\n>> --- a/src/backend/replication/pgoutput/pgoutput.c\n>> +++ b/src/backend/replication/pgoutput/pgoutput.c\n>> @@ -305,7 +305,7 @@ maybe_send_schema(LogicalDecodingContext *ctx,\n>>\n>> /* Map must live as long as the session does. */\n>> oldctx = MemoryContextSwitchTo(CacheMemoryContext);\n>> - relentry->map = convert_tuples_by_name(indesc, outdesc);\n>> + relentry->map =\n>> convert_tuples_by_name(CreateTupleDescCopy(indesc),\n>> CreateTupleDescCopy(outdesc));\n>> MemoryContextSwitchTo(oldctx);\n>> send_relation_and_attrs(ancestor, ctx);\n>> RelationClose(ancestor);\n>>\n>> Please check.\n> \n> Thanks. 
Yes, that's what I just found out too and was about to send a\n> patch, which is basically same as yours as far as the fix for this\n> issue is concerned.\n\nI have committed my patch but not ...\n\n> While figuring this out, I thought the nearby code could be rearranged\n> a bit, especially to de-duplicate the code. Also, I think\n> get_rel_sync_entry() may be a better place to set the map, rather than\n> maybe_send_schema(). Thoughts?\n\nbecause I didn't really have an opinion on that at the time, but if you \nstill want it considered or have any open thoughts on this thread, \nplease resend or explain.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Apr 2020 15:23:50 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: adding partitioned tables to publications" }, { "msg_contents": "On Fri, Apr 17, 2020 at 10:23 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-04-09 09:28, Amit Langote wrote:\n> > While figuring this out, I thought the nearby code could be rearranged\n> > a bit, especially to de-duplicate the code. Also, I think\n> > get_rel_sync_entry() may be a better place to set the map, rather than\n> > maybe_send_schema(). Thoughts?\n>\n> because I didn't really have an opinion on that at the time, but if you\n> still want it considered or have any open thoughts on this thread,\n> please resend or explain.\n\nSure, thanks for taking care of the bug.\n\nRebased the code rearrangement patch. Also resending the patch to fix\nTAP tests for improving coverage as described in:\nhttps://www.postgresql.org/message-id/CA%2BHiwqFyydvQ5g%3Dqa54UM%2BXjm77BdhX-nM4dXQkNOgH%3DzvDjoA%40mail.gmail.com\n\nTo summarize:\n1. Missing coverage for a couple of related blocks in\napply_handle_tuple_routing()\n2. 
Missing coverage report for the code in pgoutput.c added by 83fd4532\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 17 Apr 2020 23:58:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: adding partitioned tables to publications" } ]
[ { "msg_contents": "Hello,\n\nThis is to introduce a patch to lower the memory footprint of JITed\ncode by optimizing functions at the function level (i.e. with\nfunction-level optimization passes) as soon as they are generated.\nThis addresses the code comment inside llvm_optimize_module():\n\n/*\n * Do function level optimization. This could be moved to the point where\n * functions are emitted, to reduce memory usage a bit.\n */\n LLVMInitializeFunctionPassManager(llvm_fpm);\n\n --\n Soumyadeep (Deep)", "msg_date": "Sun, 6 Oct 2019 18:38:19 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": true, "msg_subject": "JIT: Optimize generated functions earlier to lower memory usage" } ]
[ { "msg_contents": "Hello\n\nAttached is a patch for adding uri as an encoding option for\nencode/decode. It uses what's called \"percent-encoding\" in rfc3986\n(https://tools.ietf.org/html/rfc3986#section-2.1).\n\nThe background for this patch is that I could easily build urls in\nplpgsql, but doing the actual encoding of the url parts is painfully\nslow. The list of available encodings for encode/decode looks quite\narbitrary to me, so I can't see any reason this one couldn't be in\nthere.\n\nIn modern web scenarios one would probably most likely want to encode\nthe utf8 representation of a text string for inclusion in a url, in\nwhich case correct invocation would be ENCODE(CONVERT_TO('some text in\ndatabase encoding goes here', 'UTF8'), 'uri'), but uri\npercent-encoding can of course also be used for other text encodings\nand arbitrary binary data.\n\nRegards,\nAnders", "msg_date": "Mon, 7 Oct 2019 09:14:38 +0200", "msg_from": "=?UTF-8?Q?Anders_=C3=85strand?= <anders@449.se>", "msg_from_op": true, "msg_subject": "PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "On Mon, Oct 7, 2019 at 09:14:38AM +0200, Anders �strand wrote:\n> Hello\n> \n> Attached is a patch for adding uri as an encoding option for\n> encode/decode. It uses what's called \"percent-encoding\" in rfc3986\n> (https://tools.ietf.org/html/rfc3986#section-2.1).\n\nOh, that's a cool idea. Can you add it to the commit-fest?\n\n\thttps://commitfest.postgresql.org/25/\n\n---------------------------------------------------------------------------\n\n\n> \n> The background for this patch is that I could easily build urls in\n> plpgsql, but doing the actual encoding of the url parts is painfully\n> slow. 
The list of available encodings for encode/decode looks quite\n> arbitrary to me, so I can't see any reason this one couldn't be in\n> there.\n> \n> In modern web scenarios one would probably most likely want to encode\n> the utf8 representation of a text string for inclusion in a url, in\n> which case correct invocation would be ENCODE(CONVERT_TO('some text in\n> database encoding goes here', 'UTF8'), 'uri'), but uri\n> percent-encoding can of course also be used for other text encodings\n> and arbitrary binary data.\n> \n> Regards,\n> Anders\n\n> diff --git a/src/backend/utils/adt/encode.c b/src/backend/utils/adt/encode.c\n> index 7293d66de5..33cf7bb57c 100644\n> --- a/src/backend/utils/adt/encode.c\n> +++ b/src/backend/utils/adt/encode.c\n> @@ -512,6 +512,131 @@ esc_dec_len(const char *src, unsigned srclen)\n> \treturn len;\n> }\n> \n> +/*\n> + * URI percent encoding\n> + *\n> + * Percent encodes all byte values except the unreserved ASCII characters as per RFC3986.\n> + */\n> +\n> +static const char upper_hex_digits[] = \"0123456789ABCDEF\";\n> +\n> +static unsigned\n> +uri_encode(const char *src, unsigned srclen, char *dst)\n> +{\n> +\tchar\t\t*d = dst;\n> +\n> +\tfor (const char *s = src; s < src + srclen; s++)\n> +\t{\n> +\t\tif ((*s >= 'A' && *s <= 'Z') ||\n> +\t\t\t(*s >= 'a' && *s <= 'z') ||\n> +\t\t\t(*s >= '0' && *s <= '9') ||\n> +\t\t\t*s == '-' ||\n> +\t\t\t*s == '_' ||\n> +\t\t\t*s == '.' 
||\n> +\t\t\t*s == '~')\n> +\t\t{\n> +\t\t\t*d++ = *s;\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\t*d++ = '%';\n> +\t\t\t*d++ = upper_hex_digits[(*s >> 4) & 0xF];\n> +\t\t\t*d++ = upper_hex_digits[*s & 0xF];\n> +\t\t}\n> +\t}\n> +\treturn d - dst;\n> +}\n> +\n> +static unsigned\n> +uri_decode(const char *src, unsigned srclen, char *dst)\n> +{\n> +\tconst char *s = src;\n> +\tconst char *srcend = src + srclen;\n> +\tchar\t\t*d = dst;\n> +\tchar\t\tval;\n> +\n> +\twhile (s < srcend)\n> +\t{\n> +\t\tif (*s == '%')\n> +\t\t{\n> +\t\t\tif (s > srcend - 3) {\n> +\t\t\t\t/* This will never get triggered since uri_dec_len already takes care of validation\n> +\t\t\t\t */\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t errmsg(\"invalid uri percent encoding\"),\n> +\t\t\t\t\t\t errhint(\"Input data ends prematurely.\")));\n> +\t\t\t}\n> +\n> +\t\t\t/* Skip '%' */\n> +\t\t\ts++;\n> +\n> +\t\t\tval = get_hex(*s++) << 4;\n> +\t\t\tval += get_hex(*s++);\n> +\t\t\t*d++ = val;\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\t*d++ = *s++;\n> +\t\t}\n> +\t}\n> +\treturn d - dst;\n> +}\n> +\n> +static unsigned\n> +uri_enc_len(const char *src, unsigned srclen)\n> +{\n> +\tint\t\t\tlen = 0;\n> +\n> +\tfor (const char *s = src; s < src + srclen; s++)\n> +\t{\n> +\t\tif ((*s >= 'A' && *s <= 'Z') ||\n> +\t\t\t(*s >= 'a' && *s <= 'z') ||\n> +\t\t\t(*s >= '0' && *s <= '9') ||\n> +\t\t\t*s == '-' ||\n> +\t\t\t*s == '_' ||\n> +\t\t\t*s == '.' 
||\n> +\t\t\t*s == '~')\n> +\t\t{\n> +\t\t\tlen++;\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tlen += 3;\n> +\t\t}\n> +\t}\n> +\treturn len;\n> +}\n> +\n> +static unsigned\n> +uri_dec_len(const char *src, unsigned srclen)\n> +{\n> +\tconst char *s = src;\n> +\tconst char *srcend = src + srclen;\n> +\tint\t\t\tlen = 0;\n> +\n> +\twhile (s < srcend)\n> +\t{\n> +\t\tif (*s == '%')\n> +\t\t{\n> +\t\t\tif (s > srcend - 3) {\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t\t\t errmsg(\"invalid uri percent encoding\"),\n> +\t\t\t\t\t\t errhint(\"Input data ends prematurely.\")));\n> +\t\t\t}\n> +\t\t\ts++;\n> +\t\t\tget_hex(*s++);\n> +\t\t\tget_hex(*s++);\n> +\t\t}\n> +\t\telse {\n> +\t\t\ts++;\n> +\t\t}\n> +\t\tlen++;\n> +\t}\n> +\treturn len;\n> +}\n> +\n> /*\n> * Common\n> */\n> @@ -541,6 +666,12 @@ static const struct\n> \t\t\tesc_enc_len, esc_dec_len, esc_encode, esc_decode\n> \t\t}\n> \t},\n> +\t{\n> +\t\t\"uri\",\n> +\t\t{\n> +\t\t\turi_enc_len, uri_dec_len, uri_encode, uri_decode\n> +\t\t}\n> +\t},\n> \t{\n> \t\tNULL,\n> \t\t{\n> diff --git a/src/test/regress/expected/strings.out b/src/test/regress/expected/strings.out\n> index 2483966576..f89c5ec1c3 100644\n> --- a/src/test/regress/expected/strings.out\n> +++ b/src/test/regress/expected/strings.out\n> @@ -1870,3 +1870,24 @@ SELECT encode(overlay(E'Th\\\\000omas'::bytea placing E'\\\\002\\\\003'::bytea from 5\n> Th\\000o\\x02\\x03\n> (1 row)\n> \n> +SET bytea_output TO hex;\n> +SELECT encode(E'en\\\\300\\\\336d'::bytea, 'uri');\n> + encode \n> +-----------\n> + en%C0%DEd\n> +(1 row)\n> +\n> +SELECT decode('%De%c0%DEd', 'uri');\n> + decode \n> +------------\n> + \\xdec0de64\n> +(1 row)\n> +\n> +SELECT decode('error%Ex', 'uri');\n> +ERROR: invalid hexadecimal digit: \"x\"\n> +SELECT decode('error%E', 'uri');\n> +ERROR: invalid uri percent encoding\n> +HINT: Input data ends prematurely.\n> +SELECT decode('error%', 'uri');\n> +ERROR: invalid uri percent encoding\n> 
+HINT: Input data ends prematurely.\n> diff --git a/src/test/regress/sql/strings.sql b/src/test/regress/sql/strings.sql\n> index b5e75c344f..1d03836b6e 100644\n> --- a/src/test/regress/sql/strings.sql\n> +++ b/src/test/regress/sql/strings.sql\n> @@ -641,3 +641,10 @@ SELECT btrim(E'\\\\000trim\\\\000'::bytea, ''::bytea);\n> SELECT encode(overlay(E'Th\\\\000omas'::bytea placing E'Th\\\\001omas'::bytea from 2),'escape');\n> SELECT encode(overlay(E'Th\\\\000omas'::bytea placing E'\\\\002\\\\003'::bytea from 8),'escape');\n> SELECT encode(overlay(E'Th\\\\000omas'::bytea placing E'\\\\002\\\\003'::bytea from 5 for 3),'escape');\n> +\n> +SET bytea_output TO hex;\n> +SELECT encode(E'en\\\\300\\\\336d'::bytea, 'uri');\n> +SELECT decode('%De%c0%DEd', 'uri');\n> +SELECT decode('error%Ex', 'uri');\n> +SELECT decode('error%E', 'uri');\n> +SELECT decode('error%', 'uri');\n\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 7 Oct 2019 15:52:41 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "On Mon, 7 Oct 2019 at 03:15, Anders Åstrand <anders@449.se> wrote:\n\n> Hello\n>\n> Attached is a patch for adding uri as an encoding option for\n> encode/decode. It uses what's called \"percent-encoding\" in rfc3986\n> (https://tools.ietf.org/html/rfc3986#section-2.1).\n>\n> The background for this patch is that I could easily build urls in\n> plpgsql, but doing the actual encoding of the url parts is painfully\n> slow. The list of available encodings for encode/decode looks quite\n> arbitrary to me, so I can't see any reason this one couldn't be in\n> there.\n>\n> In modern web scenarios one would probably most likely want to encode\n> the utf8 representation of a text string for inclusion in a url, in\n> which case correct invocation would be ENCODE(CONVERT_TO('some text in\n> database encoding goes here', 'UTF8'), 'uri'), but uri\n> percent-encoding can of course also be used for other text encodings\n> and arbitrary binary data.\n>\n\nThis seems like a useful idea to me. I've used the equivalent in Python and\nit provides more options:\n\nhttps://docs.python.org/3/library/urllib.parse.html#url-quoting\n\nI suggest reviewing that documentation there, because there are a few\ndetails that need to be checked carefully. Whether or not space should be\nencoded as plus and whether certain byte values should be exempt from\n%-encoding is something that depends on the application. Unfortunately, as\nfar as I can tell there isn't a single version of URL encoding that\nsatisfies all situations (thus explaining the complexity of the Python\nimplementation). It might be feasible to suppress some of the Python\noptions (I'm wondering about the safe= parameter) but I'm pretty sure you\nat least need the equivalent of quote and quote_plus.\n", "msg_date": "Mon, 7 Oct 2019 17:38:15 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "On Mon, Oct 7, 2019 at 9:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Oct 7, 2019 at 09:14:38AM +0200, Anders Åstrand wrote:\n> > Hello\n> >\n> > Attached is a patch for adding uri as an encoding option for\n> > encode/decode. It uses what's called \"percent-encoding\" in rfc3986\n> > (https://tools.ietf.org/html/rfc3986#section-2.1).\n>\n> Oh, that's a cool idea. Can you add it to the commit-fest?\n>\n> https://commitfest.postgresql.org/25/\n>\n>\n\nThanks for your reply! I added it but was unsure of what topic was\nappropriate and couldn't find a description of them anywhere. I went\nwith Miscellaneous for now.\n\n\n", "msg_date": "Tue, 8 Oct 2019 19:41:05 +0200", "msg_from": "=?UTF-8?Q?Anders_=C3=85strand?= <anders@449.se>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "On Mon, Oct 7, 2019 at 11:38 PM Isaac Morland <isaac.morland@gmail.com> wrote:\n>\n> On Mon, 7 Oct 2019 at 03:15, Anders Åstrand <anders@449.se> wrote:\n>>\n>> Hello\n>>\n>> Attached is a patch for adding uri as an encoding option for\n>> encode/decode. It uses what's called \"percent-encoding\" in rfc3986\n>> (https://tools.ietf.org/html/rfc3986#section-2.1).\n>>\n>> The background for this patch is that I could easily build urls in\n>> plpgsql, but doing the actual encoding of the url parts is painfully\n>> slow. The list of available encodings for encode/decode looks quite\n>> arbitrary to me, so I can't see any reason this one couldn't be in\n>> there.\n>>\n>> In modern web scenarios one would probably most likely want to encode\n>> the utf8 representation of a text string for inclusion in a url, in\n>> which case correct invocation would be ENCODE(CONVERT_TO('some text in\n>> database encoding goes here', 'UTF8'), 'uri'), but uri\n>> percent-encoding can of course also be used for other text encodings\n>> and arbitrary binary data.\n>\n>\n> This seems like a useful idea to me. I've used the equivalent in Python and it provides more options:\n>\n> https://docs.python.org/3/library/urllib.parse.html#url-quoting\n>\n> I suggest reviewing that documentation there, because there are a few details that need to be checked carefully. Whether or not space should be encoded as plus and whether certain byte values should be exempt from %-encoding is something that depends on the application. Unfortunately, as far as I can tell there isn't a single version of URL encoding that satisfies all situations (thus explaining the complexity of the Python implementation). It might be feasible to suppress some of the Python options (I'm wondering about the safe= parameter) but I'm pretty sure you at least need the equivalent of quote and quote_plus.\n\nThanks a lot for your reply!\n\nI agree that some (but not all) of the options available to that\npython lib could be helpful for developers wanting to build urls\nwithout having to encode the separate parts of it and stitching it\ntogether, but not necessary for this patch to be useful. For generic\nuri encoding the slash (/) must be percent encoded, because it has\nspecial meaning in the standard. Some other extra characters may\nappear unencoded though depending on context, but it's generally safer\nto just encode them all and not hope that the encoder will know about\nthe context and skip over certain characters.\n\nThis does bring up an interesting point however. Maybe decode should\nvalidate that only characters that are allowed unencoded appear in the\ninput?\n\nLuckily, the plus-encoding of spaces are not part of the uri standard\nat all but instead part of the format referred to as\napplication/x-www-form-urlencoded data. Fortunately that format is\nclose to dying now that forms more often post json.\n\nRegards,\nAnders\n\n\n", "msg_date": "Tue, 8 Oct 2019 20:07:02 +0200", "msg_from": "=?UTF-8?Q?Anders_=C3=85strand?= <anders@449.se>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "Hello,\n\nOn 2019/10/07 16:14, Anders Åstrand wrote:\n> Hello\n> \n> Attached is a patch for adding uri as an encoding option for\n> encode/decode. It uses what's called \"percent-encoding\" in rfc3986\n> (https://tools.ietf.org/html/rfc3986#section-2.1).\n\nThank you for the patch. I'm not very familiar with rfc3986. Is it \ninsist that an output should have upper case characters? If not maybe it \nis good to reuse hextbl[] (which is in encode.c) instead of adding new \nupper_hex_digits[].\n\nAlso can you correct the documentation. encode() is mentioned here:\nhttps://www.postgresql.org/docs/current/functions-binarystring.html\n\n-- \nArthur\n\n\n", "msg_date": "Fri, 20 Dec 2019 13:31:00 +0900", "msg_from": "Arthur Zakirov <zaartur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "On 2019-Oct-07, Anders Åstrand wrote:\n\n> Attached is a patch for adding uri as an encoding option for\n> encode/decode. It uses what's called \"percent-encoding\" in rfc3986\n> (https://tools.ietf.org/html/rfc3986#section-2.1).\n\nThanks. Seems useful. I made a few cosmetic tweaks and it looks almost\nready to me; however, documentation is missing. I added a stub; can you\nplease complete that?\n\nTo answer Arthur Zakirov's question: yes, the standard recommends\n(\"should\") to use uppercase characters:\n\n: For consistency, URI producers and\n: normalizers should use uppercase hexadecimal digits for all percent-\n: encodings.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 20 Feb 2020 19:27:58 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "Thanks for keeping this alive even though I disappeared after submitting it!\n\nI can write documentation this weekend.\n\nThanks again.\n//Anders\n\nOn Thu, 20 Feb 2020, 23:28 Alvaro Herrera, <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Oct-07, Anders Åstrand wrote:\n>\n> > Attached is a patch for adding uri as an encoding option for\n> > encode/decode. It uses what's called \"percent-encoding\" in rfc3986\n> > (https://tools.ietf.org/html/rfc3986#section-2.1).\n>\n> Thanks. Seems useful. I made a few cosmetic tweaks and it looks almost\n> ready to me; however, documentation is missing. I added a stub; can you\n> please complete that?\n>\n> To answer Arthur Zakirov's question: yes, the standard recommends\n> (\"should\") to use uppercase characters:\n>\n> : For consistency, URI producers and\n> : normalizers should use uppercase hexadecimal digits for all percent-\n> : encodings.\n>\n> Thanks,\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n", "msg_date": "Fri, 21 Feb 2020 07:29:26 +0100", "msg_from": "=?UTF-8?Q?Anders_=C3=85strand?= <anders@449.se>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "> On 20 Feb 2020, at 23:27, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2019-Oct-07, Anders Åstrand wrote:\n> \n>> Attached is a patch for adding uri as an encoding option for\n>> encode/decode. It uses what's called \"percent-encoding\" in rfc3986\n>> (https://tools.ietf.org/html/rfc3986#section-2.1).\n> \n> Thanks. Seems useful. I made a few cosmetic tweaks and it looks almost\n> ready to me;\n\nI agree that uri decoding/encoding would be useful, but I'm not convinced that\nthis patch does the functionality justice enough to be useful. What is the\nusecase we envision to solve when not taking scheme into consideration?\n\nReserved characters have different meaning based on context and scheme, and\nshould not be encoded when used as a delimiter. This does make the patch a lot\nmore complicated, but if we provide a uri encoding which percent-encode the\ndelimiters in https:// I would expect that to be reported to pgsql-bugs@\nrepeatedly. Adding URIs with userinfo makes it even more problematic, as\nencoding the @ delimiter will break it.\n\nFurther, RFC6874 specifies that ipv6 URIs with zone identifiers are written as:\nIPv6address \"%25\" ZoneID. With this patch it would be encoded %2525 ZoneID\nwhich is incorrect.\n\nThat being said, if we do look at the scheme then we'll need to decide which\nURI standard we want to stick to as RFC3986 and WHATWG URL-spec aren't\ncompatible.\n\nPerhaps not calling it 'uri' and instead renaming it to 'percent-encoding' can\nmake it clearer, while sticking to the proposed feature?\n\ncheers ./daniel\n\n", "msg_date": "Wed, 4 Mar 2020 12:25:48 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "> On 4 Mar 2020, at 12:25, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 20 Feb 2020, at 23:27, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> \n>> On 2019-Oct-07, Anders Åstrand wrote:\n>> \n>>> Attached is a patch for adding uri as an encoding option for\n>>> encode/decode. It uses what's called \"percent-encoding\" in rfc3986\n>>> (https://tools.ietf.org/html/rfc3986#section-2.1).\n>> \n>> Thanks. Seems useful. I made a few cosmetic tweaks and it looks almost\n>> ready to me;\n> \n> I agree that uri decoding/encoding would be useful, but I'm not convinced that\n> this patch does the functionality justice enough to be useful. What is the\n> usecase we envision to solve when not taking scheme into consideration?\n> \n> Reserved characters have different meaning based on context and scheme, and\n> should not be encoded when used as a delimiter. This does make the patch a lot\n> more complicated, but if we provide a uri encoding which percent-encode the\n> delimiters in https:// I would expect that to be reported to pgsql-bugs@\n> repeatedly. Adding URIs with userinfo makes it even more problematic, as\n> encoding the @ delimiter will break it.\n> \n> Further, RFC6874 specifies that ipv6 URIs with zone identifiers are written as:\n> IPv6address \"%25\" ZoneID. With this patch it would be encoded %2525 ZoneID\n> which is incorrect.\n> \n> That being said, if we do look at the scheme then we'll need to decide which\n> URI standard we want to stick to as RFC3986 and WHATWG URL-spec aren't\n> compatible.\n> \n> Perhaps not calling it 'uri' and instead renaming it to 'percent-encoding' can\n> make it clearer, while sticking to the proposed feature?\n\nWith no response for 2 weeks during the commitfest, I propose to move this to\nthe next CF to allow time for discussions.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 19 Mar 2020 08:55:30 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "> On 19 Mar 2020, at 08:55, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> With no response for 2 weeks during the commitfest, I propose to move this to\n> the next CF to allow time for discussions.\n\nThis patch no longer applies, the failing hunk being in the docs part. As\nstated in my review earlier in the thread I don't think this feature is\ncomplete enough in its current form; having hacked on it a bit, what are your\nthoughts Alvaro?\n\nMarking as Waiting on Author for now.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 1 Jul 2020 10:57:28 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "On 2020-Jul-01, Daniel Gustafsson wrote:\n\n> > On 19 Mar 2020, at 08:55, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> > With no response for 2 weeks during the commitfest, I propose to move this to\n> > the next CF to allow time for discussions.\n> \n> This patch no longer applies, the failing hunk being in the docs part. As\n> stated in my review earlier in the thread I don't think this feature is\n> complete enough in its current form; having hacked on it a bit, what are your\n> thoughts Alvaro?\n\nIf the author (or some other person interested in the feature) submits a\nversion addressing the feedback, by all means let's consider it further;\nbut if nothing happens during this commitfest, I'd say we close as RwF\nat end of July.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 1 Jul 2020 10:58:38 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" }, { "msg_contents": "> On 1 Jul 2020, at 16:58, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Jul-01, Daniel Gustafsson wrote:\n> \n>>> On 19 Mar 2020, at 08:55, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> With no response for 2 weeks during the commitfest, I propose to move this to\n>>> the next CF to allow time for discussions.\n>> \n>> This patch no longer applies, the failing hunk being in the docs part. As\n>> stated in my review earlier in the thread I don't think this feature is\n>> complete enough in its current form; having hacked on it a bit, what are your\n>> thoughts Alvaro?\n> \n> If the author (or some other person interested in the feature) submits a\n> version addressing the feedback, by all means let's consider it further;\n> but if nothing happens during this commitfest, I'd say we close as RwF\n> at end of July.\n\nAs per discussion, this entry is closed as \"Returned with Feedback\".\n\ncheers ./daniel\n\n", "msg_date": "Wed, 29 Jul 2020 22:27:31 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add uri percent-encoding for binary data" } ]
[ { "msg_contents": "Hi,\n\nsome days ago I ran into a problem with the to_date() function. I \noriginally described it on StackExchange:\nhttps://dba.stackexchange.com/questions/250111/unexpected-behaviour-for-to-date-with-week-number-and-week-day\n\nThe problem:\n\nIf you want to parse a date string with year, week and day of week, you \ncan do this using the ISO week pattern: 'IYYY-IW-ID'. This works as \nexpected:\n\ndate string | to_date()\n------------+------------\n'2019-1-1' | 2018-12-31 -> Monday of the first week of the year \n(defined as the week that includes the 4th of January)\n'2019-1-2' | 2019-01-01\n'2019-1-3' | 2019-01-02\n'2019-1-4' | 2019-01-03\n'2019-1-5' | 2019-01-04\n'2019-1-6' | 2019-01-05\n'2019-1-7' | 2019-01-06\n\n'2019-2-1' | 2019-01-07\n'2019-2-2' | 2019-01-08\n\nBut if you are trying this with the non-ISO pattern 'YYYY-WW-D', the \nresult was not expected:\n\ndate string | to_date()\n-------------------------\n'2019-1-1' | 2019-01-01\n'2019-1-2' | 2019-01-01\n'2019-1-3' | 2019-01-01\n'2019-1-4' | 2019-01-01\n'2019-1-5' | 2019-01-01\n'2019-1-6' | 2019-01-01\n'2019-1-7' | 2019-01-01\n\n'2019-2-1' | 2019-01-08\n'2019-2-2' | 2019-01-08\n\nAs you can see, the 'D' part of the pattern doesn't influence the \nresulting date.\n\nThe answer of Laurenz Albe pointed to a part of the documentation, I \nmissed so far:\n\n\"In to_timestamp and to_date, weekday names or numbers (DAY, D, and \nrelated field types) are accepted but are ignored for purposes of \ncomputing the result. The same is true for quarter (Q) fields.\" \n(https://www.postgresql.org/docs/12/functions-formatting.html)\n\nSo, I had a look at the relevant code part. I decided to try a patch by \nmyself. Now it works as I would expect it:\n\ndate string | to_date()\n-------------------------\n'2019-1-1' | 2018-12-30 -> Sunday (!) of the first week of the year \n(the first week is at the first day of year)\n'2019-1-2' | 2018-12-31\n'2019-1-3' | 2019-01-01\n'2019-1-4' | 2019-01-02\n'2019-1-5' | 2019-01-03\n'2019-1-6' | 2019-01-04\n'2019-1-7' | 2019-01-05\n\n'2019-2-1' | 2019-01-06\n'2019-2-2' | 2019-01-07\n\nFurthermore, if you left the 'D' part, the date would be always set to \nthe first day of the corresponding week (in that case it is Sunday, in \ncontrast to the ISO week, which starts mondays).\n\nTo be consistent, I added similar code for the week of month pattern \n('W'). So, using the pattern 'YYYY-MM-W-D' yields in:\n\ndate string | to_date()\n---------------------------\n'2018-12-5-1' | 2018-12-23\n'2018-12-6-1' | 2018-12-30\n'2019-1-1-1' | 2018-12-30 -> First day (Su) of the first week of the \nfirst month of the year\n'2019-2-2-1' | 2019-02-03 -> First day (Su) of the second week of \nFebruary\n'2019-10-3-5' | 2019-10-17 -> Fifth day (Th) of the third week of \nOctober\n\nIf you left the 'D', it would be set to 1 as well.\n\nThe code can be seen here:\nhttps://github.com/S-Man42/postgres/commit/534e6bd70e23864f385d60ecf187496c7f4387c9\n\nI hope, keeping the code style of the surrounding code (especially the \nISO code) is ok for you.\n\nNow the questions:\n1. Although the ignorance of the 'D' pattern is well documented, does \nthe new behaviour might be interesting for you?\n2. Does it work as you'd expect it?\n3. Because this could be my very first contribution to the PostgreSQL \ncode base, I really want you to be as critical as possible. I am not \nquite sure if I didn't miss something important.\n4. Currently something like '2019-1-8' does not throw an exception but \nresults in the same as '2019-2-1' (8th is the same as the 1st of the \nnext week). On the other hand, currently, the ISO week conversion gives \nout the result of '2019-1-7' for every 'D' >= 7. I am not sure if this \nis better. I think a consistent exception handling should be discussed \nseparately (date roll over vs. out of range exception vs. ISO week \nbehaviour)\n\nSo far, I am very curious about your opinions!\n\nKind regards,\nMark/S-Man42\n\n\n", "msg_date": "Tue, 08 Oct 2019 15:25:26 +0200", "msg_from": "postgres <postgres@four-two.de>", "msg_from_op": true, "msg_subject": "Created feature for to_date() conversion using patterns 'YYYY-WW',\n 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Hi,\n\nI apologize for the mistake.\n\nFor the mailing list correspondence I created this mail account. But I \nforgot to change the sender name. So, the \"postgres\" name appeared as \nsender name in the mailing list. I changed it.\n\nKind regards,\nMark/S-Man42\n\n> Hi,\n> \n> some days ago I ran into a problem with the to_date() function. I\n> originally described it on StackExchange:\n> https://dba.stackexchange.com/questions/250111/unexpected-behaviour-for-to-date-with-week-number-and-week-day\n> \n> The problem:\n> \n> If you want to parse a date string with year, week and day of week,\n> you can do this using the ISO week pattern: 'IYYY-IW-ID'. This works\n> as expected:\n> \n> date string | to_date()\n> ------------+------------\n> '2019-1-1' | 2018-12-31 -> Monday of the first week of the year\n> (defined as the week that includes the 4th of January)\n> '2019-1-2' | 2019-01-01\n> '2019-1-3' | 2019-01-02\n> '2019-1-4' | 2019-01-03\n> '2019-1-5' | 2019-01-04\n> '2019-1-6' | 2019-01-05\n> '2019-1-7' | 2019-01-06\n> \n> '2019-2-1' | 2019-01-07\n> '2019-2-2' | 2019-01-08\n> \n> But if you are trying this with the non-ISO pattern 'YYYY-WW-D', the\n> result was not expected:\n> \n> date string | to_date()\n> -------------------------\n> '2019-1-1' | 2019-01-01\n> '2019-1-2' | 2019-01-01\n> '2019-1-3' | 2019-01-01\n> '2019-1-4' | 2019-01-01\n> '2019-1-5' | 2019-01-01\n> '2019-1-6' | 2019-01-01\n> '2019-1-7' | 2019-01-01\n> \n> '2019-2-1' | 2019-01-08\n> '2019-2-2' | 2019-01-08\n> \n> As you can see, the 'D' part of the pattern doesn't influence the\n> resulting date.\n> \n> The answer of Laurenz Albe pointed to a part of the documentation, I\n> missed so far:\n> \n> \"In to_timestamp and to_date, weekday names or numbers (DAY, D, and\n> related field types) are accepted but are ignored for purposes of\n> computing the result. The same is true for quarter (Q) fields.\"\n> (https://www.postgresql.org/docs/12/functions-formatting.html)\n> \n> So, I had a look at the relevant code part. I decided to try a patch\n> by myself. Now it works as I would expect it:\n> \n> date string | to_date()\n> -------------------------\n> '2019-1-1' | 2018-12-30 -> Sunday (!) of the first week of the year\n> (the first week is at the first day of year)\n> '2019-1-2' | 2018-12-31\n> '2019-1-3' | 2019-01-01\n> '2019-1-4' | 2019-01-02\n> '2019-1-5' | 2019-01-03\n> '2019-1-6' | 2019-01-04\n> '2019-1-7' | 2019-01-05\n> \n> '2019-2-1' | 2019-01-06\n> '2019-2-2' | 2019-01-07\n> \n> Furthermore, if you left the 'D' part, the date would be always set to\n> the first day of the corresponding week (in that case it is Sunday, in\n> contrast to the ISO week, which starts mondays).\n> \n> To be consistent, I added similar code for the week of month pattern\n> ('W'). So, using the pattern 'YYYY-MM-W-D' yields in:\n> \n> date string | to_date()\n> ---------------------------\n> '2018-12-5-1' | 2018-12-23\n> '2018-12-6-1' | 2018-12-30\n> '2019-1-1-1' | 2018-12-30 -> First day (Su) of the first week of the\n> first month of the year\n> '2019-2-2-1' | 2019-02-03 -> First day (Su) of the second week of \n> February\n> '2019-10-3-5' | 2019-10-17 -> Fifth day (Th) of the third week of \n> October\n> \n> If you left the 'D', it would be set to 1 as well.\n> \n> The code can be seen here:\n> https://github.com/S-Man42/postgres/commit/534e6bd70e23864f385d60ecf187496c7f4387c9\n> \n> I hope, keeping the code style of the surrounding code (especially the\n> ISO code) is ok for you.\n> \n> Now the questions:\n> 1. Although the ignorance of the 'D' pattern is well documented, does\n> the new behaviour might be interesting for you?\n> 2. Does it work as you'd expect it?\n> 3. Because this could be my very first contribution to the PostgreSQL\n> code base, I really want you to be as critical as possible. I am not\n> quite sure if I didn't miss something important.\n> 4. Currently something like '2019-1-8' does not throw an exception but\n> results in the same as '2019-2-1' (8th is the same as the 1st of the\n> next week). On the other hand, currently, the ISO week conversion\n> gives out the result of '2019-1-7' for every 'D' >= 7. I am not sure\n> if this is better. I think a consistent exception handling should be\n> discussed separately (date roll over vs. out of range exception vs.\n> ISO week behaviour)\n> \n> So far, I am very curious about your opinions!\n> \n> Kind regards,\n> Mark/S-Man42\n\n\n", "msg_date": "Tue, 08 Oct 2019 17:49:49 +0200", "msg_from": "Mark Lorenz <postgres@four-two.de>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Hi,\n\nwhile preparing the patch for the Commitfest, I found a bug in the \nto_char() function that is quite correlated with this issue:\n\nSELECT to_char('1997-02-01'::date, 'YYYY-WW-D')\n\nreturns: 1997-05-7 -> which is ok, I believe. Feb, 1st was on Saturday, \nso counting from Sundays, it was day 7 of week 5.\n\nSELECT to_char('1997-02-03'::date, 'YYYY-WW-D')\n\nreturns: 1997-05-2 -> This cannot be. The input date is two days laters, \nbut the result is 5 days earlier. I'd expect 1997-06-2 as result, but \nthis occurs another week later:\n\nSELECT to_char('1997-02-10'::date, 'YYYY-WW-D')\n\nThis is wrong, because this should be week 7 instead. On the other hand, \nthe ISO week formats work very well.\n\nI'll have a look at the code and try to fix it in the patch as well.\n\nKind regards,\nMark\n\n\nAm 2019-10-08 17:49, schrieb Mark Lorenz:\n> Hi,\n> \n> I apologize for the mistake.\n> \n> For the mailing list correspondence I created this mail account. But I\n> forgot to change the sender name. So, the \"postgres\" name appeared as\n> sender name in the mailing list. I changed it.\n> \n> Kind regards,\n> Mark/S-Man42\n> \n>> Hi,\n>> \n>> some days ago I ran into a problem with the to_date() function. I\n>> originally described it on StackExchange:\n>> https://dba.stackexchange.com/questions/250111/unexpected-behaviour-for-to-date-with-week-number-and-week-day\n>> \n>> The problem:\n>> \n>> If you want to parse a date string with year, week and day of week,\n>> you can do this using the ISO week pattern: 'IYYY-IW-ID'. This works\n>> as expected:\n>> \n>> date string | to_date()\n>> ------------+------------\n>> '2019-1-1' | 2018-12-31 -> Monday of the first week of the year\n>> (defined as the week that includes the 4th of January)\n>> '2019-1-2' | 2019-01-01\n>> '2019-1-3' | 2019-01-02\n>> '2019-1-4' | 2019-01-03\n>> '2019-1-5' | 2019-01-04\n>> '2019-1-6' | 2019-01-05\n>> '2019-1-7' | 2019-01-06\n>> \n>> '2019-2-1' | 2019-01-07\n>> '2019-2-2' | 2019-01-08\n>> \n>> But if you are trying this with the non-ISO pattern 'YYYY-WW-D', the\n>> result was not expected:\n>> \n>> date string | to_date()\n>> -------------------------\n>> '2019-1-1' | 2019-01-01\n>> '2019-1-2' | 2019-01-01\n>> '2019-1-3' | 2019-01-01\n>> '2019-1-4' | 2019-01-01\n>> '2019-1-5' | 2019-01-01\n>> '2019-1-6' | 2019-01-01\n>> '2019-1-7' | 2019-01-01\n>> \n>> '2019-2-1' | 2019-01-08\n>> '2019-2-2' | 2019-01-08\n>> \n>> As you can see, the 'D' part of the pattern doesn't influence the\n>> resulting date.\n>> \n>> The answer of Laurenz Albe pointed to a part of the documentation, I\n>> missed so far:\n>> \n>> \"In to_timestamp and to_date, weekday names or numbers (DAY, D, and\n>> related field types) are accepted but are ignored for purposes of\n>> computing the result. The same is true for quarter (Q) fields.\"\n>> (https://www.postgresql.org/docs/12/functions-formatting.html)\n>> \n>> So, I had a look at the relevant code part. I decided to try a patch\n>> by myself. Now it works as I would expect it:\n>> \n>> date string | to_date()\n>> -------------------------\n>> '2019-1-1' | 2018-12-30 -> Sunday (!) of the first week of the year\n>> (the first week is at the first day of year)\n>> '2019-1-2' | 2018-12-31\n>> '2019-1-3' | 2019-01-01\n>> '2019-1-4' | 2019-01-02\n>> '2019-1-5' | 2019-01-03\n>> '2019-1-6' | 2019-01-04\n>> '2019-1-7' | 2019-01-05\n>> \n>> '2019-2-1' | 2019-01-06\n>> '2019-2-2' | 2019-01-07\n>> \n>> Furthermore, if you left the 'D' part, the date would be always set to\n>> the first day of the corresponding week (in that case it is Sunday, in\n>> contrast to the ISO week, which starts mondays).\n>> \n>> To be consistent, I added similar code for the week of month pattern\n>> ('W'). So, using the pattern 'YYYY-MM-W-D' yields in:\n>> \n>> date string | to_date()\n>> ---------------------------\n>> '2018-12-5-1' | 2018-12-23\n>> '2018-12-6-1' | 2018-12-30\n>> '2019-1-1-1' | 2018-12-30 -> First day (Su) of the first week of the\n>> first month of the year\n>> '2019-2-2-1' | 2019-02-03 -> First day (Su) of the second week of \n>> February\n>> '2019-10-3-5' | 2019-10-17 -> Fifth day (Th) of the third week of \n>> October\n>> \n>> If you left the 'D', it would be set to 1 as well.\n>> \n>> The code can be seen here:\n>> https://github.com/S-Man42/postgres/commit/534e6bd70e23864f385d60ecf187496c7f4387c9\n>> \n>> I hope, keeping the code style of the surrounding code (especially the\n>> ISO code) is ok for you.\n>> \n>> Now the questions:\n>> 1. Although the ignorance of the 'D' pattern is well documented, does\n>> the new behaviour might be interesting for you?\n>> 2. Does it work as you'd expect it?\n>> 3. Because this could be my very first contribution to the PostgreSQL\n>> code base, I really want you to be as critical as possible. I am not\n>> quite sure if I didn't miss something important.\n>> 4. Currently something like '2019-1-8' does not throw an exception but\n>> results in the same as '2019-2-1' (8th is the same as the 1st of the\n>> next week). On the other hand, currently, the ISO week conversion\n>> gives out the result of '2019-1-7' for every 'D' >= 7. I am not sure\n>> if this is better. I think a consistent exception handling should be\n>> discussed separately (date roll over vs. out of range exception vs.\n>> ISO week behaviour)\n>> \n>> So far, I am very curious about your opinions!\n>> \n>> Kind regards,\n>> Mark/S-Man42\n\n\n", "msg_date": "Fri, 20 Dec 2019 09:38:16 +0100", "msg_from": "Mark Lorenz <postgres@four-two.de>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Hi,\n\nI fixed the described issue in the to char() function.\n\nThe output of the current version is:\n\npostgres=# SELECT to_char('1997-02-01'::date, 'YYYY-WW-D');\n to_char\n---------\n 1997-05-7\n(1 row)\n\npostgres=# SELECT to_char('1997-02-03'::date, 'YYYY-WW-D');\n to_char\n---------\n 1997-05-2\n(1 row)\n\npostgres=# SELECT to_char('1997-02-10'::date, 'YYYY-WW-D');\n to_char\n---------\n 1997-06-2\n(1 row)\n\nAs you can see, the week day of the Feb 3rd - which is two days AFTER \nFeb 1st - yields in a result which is 5 days BEFORE the earlier date, \nwhich obviously cannot be. Furthermore, using the Gregorian calendar, \nFeb 3rd is in week 6. So, the Feb 10th cannot be in week 6 as well.\n\nThe bug was, that the week day of Jan 1st was not considered in the \ncalculation of the week number. So, a possible offset has not been set.\n\nNew output:\n\npostgres=# SELECT to_char('1997-02-03'::date, 'YYYY-WW-D');\n to_char\n---------\n 1997-06-2\n(1 row)\n\npostgres=# SELECT to_char('1997-02-01'::date, 'YYYY-WW-D');\n to_char\n---------\n 1997-05-7\n(1 row)\n\npostgres=# SELECT to_char('1997-02-10'::date, 'YYYY-WW-D');\n to_char\n---------\n 1997-07-2\n(1 row)\n\n-------------------\n\nFurthermore I adjusted the to_date() functionality for the WW-D pattern \nas well. As said before in the thread, I know, ignoring the D part is \nknown and documented, but I think, if the ISO format recognizes the day \npart, the non-ISO format should as well - especially when the \"back\" \noperation does as well (meaning to_char()):\n\nOutput in the current version:\n\npostgres=# SELECT to_date('2019-1-1', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-01\n(1 row)\n\npostgres=# SELECT to_date('2019-1-2', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-01\n(1 row)\n\npostgres=# SELECT to_date('2019-1-3', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-01\n(1 row)\n\npostgres=# SELECT to_date('2019-1-7', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-01\n(1 row)\n\npostgres=# SELECT to_date('2019-2-1', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-08\n(1 row)\n\nNew output:\n\npostgres=# SELECT to_date('2019-1-1', 'YYYY-WW-D');\n to_date\n------------\n 2018-12-30\n(1 row)\n\npostgres=# SELECT to_date('2019-1-2', 'YYYY-WW-D');\n to_date\n------------\n 2018-12-31\n(1 row)\n\npostgres=# SELECT to_date('2019-1-3', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-01\n(1 row)\n\npostgres=# SELECT to_date('2019-1-7', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-05\n(1 row)\n\npostgres=# SELECT to_date('2019-2-1', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-06\n(1 row)\n\nI added the patch as plain text attachment. It contains the code and, of \ncourse, the regression tests. Some existing tests failed, because they \nworked with the old output. I have changed their expected output.\n\nHope you'll find it helpful.\n\nBest regards,\nMark Lorenz", "msg_date": "Fri, 20 Dec 2019 15:08:18 +0100", "msg_from": "Mark Lorenz <postgres@four-two.de>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Hi,\n\nI got the advice to split the patches for:\n- fixing the to_char() function\n- changing the to_date()/to_timestamp() behaviour\n\nSo I appended the split patches.\n\nKind regards,\nMark Lorenz", "msg_date": "Fri, 20 Dec 2019 17:02:01 +0100", "msg_from": "Mark Lorenz <postgres@four-two.de>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Mark Lorenz <postgres@four-two.de> writes:\n> I got the advice to split the patches for:\n> - fixing the to_char() function\n> - changing the to_date()/to_timestamp() behaviour\n> So I appended the split patches.\n\nI'm a bit skeptical of the premise here. The fine manual says\n\n In to_timestamp and to_date, weekday names or numbers (DAY, D, and\n related field types) are accepted but are ignored for purposes of\n computing the result. The same is true for quarter (Q) fields.\n\nYou appear to be trying to change that, but it's not at all clear\nwhat behavior you're changing it to, or whether the result is going\nto be any more sensible than it was before. 
In any case, this is\ncertainly not a \"bug fix\", because the code is working as documented.\nIt's a redefinition, and you haven't specified the new definition.\n\nAnother point is that these functions are meant to be Oracle-compatible,\nso I wonder what Oracle does in not-terribly-well-defined cases like\nthese.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Dec 2019 12:24:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Mark Lorenz <postgres@four-two.de> writes:\n> while preparing the patch for the Commitfest, I found a bug in the \n> to_char() function that is quite correlated with this issue:\n\n> SELECT to_char('1997-02-01'::date, 'YYYY-WW-D')\n\n> returns: 1997-05-7 -> which is ok, I believe. Feb, 1st was on Saturday, \n> so counting from Sundays, it was day 7 of week 5.\n\n> SELECT to_char('1997-02-03'::date, 'YYYY-WW-D')\n\n> returns: 1997-05-2 -> This cannot be.\n\nWhy not? These format codes are specified as\n\nD\tday of the week, Sunday (1) to Saturday (7)\nWW\tweek number of year (1–53) (the first week starts on the first day of the year)\n\nI don't see anything there that says that \"D\" is correlated with \"WW\".\nWe do have a connection between \"ID\" and \"IW\", so that ID ought to\nspecify a day within an IW week, but there's no connection between \"D\"\nand either \"W\" or \"WW\" week numbering. It's a day of the week, as\nper the regular calendar. 
Trying to define it as something else is\njust going to break stuff.\n\nThe only way to make \"D\" as it stands compatible with a week-numbering\nsystem is to ensure that your weeks always start on Sundays, that is,\njust as confusing as ISO weeks but slightly differently confusing.\n\nPerhaps it would be worth inventing format codes that do have the\nsame relationship to \"W\" and/or \"WW\" as \"ID\" does to \"IW\". But\nrepurposing \"D\" for that is a bad idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Dec 2019 12:37:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Hi Tom,\n\nthanks for answering!\n\nI commited two different patches:\n\n-------\n\nThe first one is for the strange behaviour of to_char(), which could be \nseen as a bug, I believe. As described earlier, to_char() with the \n'WW-D' pattern could return wrong week numbers.\n\nThe non-ISO week number is defined for weeks beginning with Sundays and \nending with Saturdays. The first week of the year is the week with \nJanuary, 1st.\n\nFor example:\n\npostgres=# SELECT to_char('1997-01-01'::date, 'YYYY-WW-D');\n to_char\n---------\n 1997-01-4\n(1 row)\n\n1997-01-01 was a Wednesday. So the first week in 1997 was from Jan 1st \nto Jan 4th (Saturday). Week 2 started on Jan 5th. But to_char() gives \nout week number 1 until Tuesday (!), Jan 7th.\n\npostgres=# SELECT to_char('1997-01-07'::date, 'YYYY-WW-D');\n to_char\n---------\n 1997-01-3\n(1 row)\n\nAfter that, on Jan 8th, the output switches from 01-3 to 02-4, which \nmakes no sense in my personal opinion. The week number should be \nconsistent from Sun to Sat and should not switch during any day in the \nweek. 
Furthermore, it is not clear why Jan 7th should return an earlier \nweek day (3) than Jan 1st (4).\n\nThe bug is, that the calculation of the week number only considers the \nnumber of days of the current year. But it ignores the first week day, \nwhich defines an offset. This has been fixed in the first patch.\n\n-------\n\nSecond patch:\n\nAs you stated correctly, this is not a bug fix, because the current \nbehaviour is documented and it works as the documentation states. I \ntried to describe my confusion in the very first post of this thread:\n\nI was wondering why the D part is not recognized in the non-ISO week \npattern while the ISO day is working very well. Although this is \ndocumented, there could be a chance that this simply was not implemented \nright now - so I tried.\n\nThe main aspect, I believe, is, that to_date() or to_timestamp() is some \nkind of \"back\" operation of the to_char() function. So, a new definition \nsimply should recognize the week day as the to_char() function does, \ninstead of setting the day part fix to any number (please see the \nexamples in the very first post for that).\n\n-------\n\nCombining both patches, the to_char() fix and the to_date() change, it \nis possible to calculate the non-ISO week pattern in both directions:\n\nSELECT to_date(to_char(anydate, 'YYYY-WW-D'), 'YYYY-WW-D')\n\nwould result in \"anydate\". 
Currently it does not:\n\npostgres=# SELECT to_date(to_char('1997-01-07'::date, 'YYYY-WW-D'), \n'YYYY-WW-D')\n to_char\n---------\n 1997-01-01\n(1 row)\n\npostgres=# SELECT to_char(to_date('1997-01-07', 'YYYY-WW-D'), \n'YYYY-WW-D')\n to_char\n---------\n 1997-01-04\n(1 row)\n\nOn the other hand, the ISO week calculations work as expected, \nespecially the there-and-back operation results in the original value:\n\npostgres=# SELECT to_date(to_char('1997-01-07'::date, 'IYYY-IW-ID'), \n'IYYY-IW-ID')\n to_char\n---------\n 1997-01-07\n(1 row)\n\npostgres=# SELECT to_char(to_date('1997-01-07', 'IYYY-IW-ID'), \n'IYYY-IW-ID')\n to_char\n---------\n 1997-01-7\n(1 row)\n\nThe only difference between ISO and non-ISO weeks is the beginning on \nMondays and the definition of the first week. But this cannot be the \nreason why one operation results in right values (comparing with a \ncalendar) and the other one does not.\n\nDoes this explanation make it clearer?\n\n\n", "msg_date": "Sat, 21 Dec 2019 01:15:07 +0100", "msg_from": "Mark Lorenz <postgres@four-two.de>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": ">> while preparing the patch for the Commitfest, I found a bug in the\n>> to_char() function that is quite correlated with this issue:\n> \n>> SELECT to_char('1997-02-01'::date, 'YYYY-WW-D')\n> \n>> returns: 1997-05-7 -> which is ok, I believe. Feb, 1st was on \n>> Saturday,\n>> so counting from Sundays, it was day 7 of week 5.\n> \n>> SELECT to_char('1997-02-03'::date, 'YYYY-WW-D')\n> \n>> returns: 1997-05-2 -> This cannot be.\n> \n> Why not? These format codes are specified as\n> \n> D\tday of the week, Sunday (1) to Saturday (7)\n> WW\tweek number of year (1–53) (the first week starts on the first day\n> of the year)\n> \n\nBecause 1997-05-2 is earlier than 1997-05-7. But 1997-02-03 is later \nthan 1997-02-01. 
From my point of view, this is confusing.\n\n> I don't see anything there that says that \"D\" is correlated with \"WW\".\n> We do have a connection between \"ID\" and \"IW\", so that ID ought to\n> specify a day within an IW week, but there's no connection between \"D\"\n> and either \"W\" or \"WW\" week numbering. It's a day of the week, as\n> per the regular calendar. Trying to define it as something else is\n> just going to break stuff.\n> \n> The only way to make \"D\" as it stands compatible with a week-numbering\n> system is to ensure that your weeks always start on Sundays, that is,\n> just as confusing as ISO weeks but slightly differently confusing.\n> \n> Perhaps it would be worth inventing format codes that do have the\n> same relationship to \"W\" and/or \"WW\" as \"ID\" does to \"IW\". But\n> repurposing \"D\" for that is a bad idea.\n> \n> \t\t\tregards, tom lane\n\nI don't want to create any connection here. The day is calculated \ncorrectly. But the week number is wrong. 1997-02-03 was in week number \n6, as well as 1997-02-04. But Postgres returns 5. The problem with \nto_char() is, that the week number is considering only the nmber of days \nin the year and divides them by 7. So, there is no diffence whether the \nyear starts on Sunday or any other week day. So, an offset is missing, \nwhich yields in wrong week numbers, as I can see...\n\n\n", "msg_date": "Sat, 21 Dec 2019 01:37:57 +0100", "msg_from": "Mark Lorenz <postgres@four-two.de>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Mark Lorenz <postgres@four-two.de> writes:\n>> Why not? These format codes are specified as\n>> D\tday of the week, Sunday (1) to Saturday (7)\n>> WW\tweek number of year (1–53) (the first week starts on the first day\n>> of the year)\n\n> I don't want to create any connection here. The day is calculated \n> correctly. 
But the week number is wrong. 1997-02-03 was in week number \n> 6, as well as 1997-02-04. But Postgres returns 5.\n\nThe week number is only wrong if you persist in ignoring the very clear\ndefinition given in the manual. According to the stated definition of WW,\n\"week 1\" consists of Jan 1 to Jan 7, \"week 2\" to Jan 8-14, etc. So it's\ncorrect for both of those dates to be in \"week 5\". There are other\npossible definitions of \"week\" of course, such as the ISO week, under\nwhich both those dates would be in week 6 (of 1997 anyway, not all other\nyears). But if you want ISO week you should ask for it with \"IW\", not\nexpect that we'll change the longstanding behavior of \"WW\" to match.\n\nAs far as I can see, the only way to make a week definition that\ngives sensible results in combination with \"D\" is to do something\nlike what ISO does, but with Sunday as the start day instead of Monday.\nBut having three different week definitions seems more likely to\nconfuse people (even more) than to be helpful. Plus you'd also need\nanalogs of IYYY, IDDD, etc.\n\nWhy not just use IYYY-IW-ID, instead? You'd have to adapt to\nweek-starts-on-Monday, but you'd be using a notation that a lot\nof people are already familiar with, instead of inventing your own.\n\nAnother possibility, perhaps, is to use WW in combination with\nsome new field that counts 1-7, 1-7, 1-7, ... starting on Jan 1.\nBut then that wouldn't have any easy mapping to day names, so\nthere's no free lunch.\n\nThrowing MM into the mix makes it even more exciting, as month\nboundaries don't correspond with week boundaries either. I don't\nsee any rational way to make YYYY-MM-W or YYYY-MM-W-D patterns\nthat behave in a numerically consistent fashion. 
(Note that ISO\ndidn't try --- there is no \"ISO month\".)\n\nThe bottom line is that these various definitions aren't mutually\nconsistent, and that's just a fact of life, not something that can\nbe fixed.\n\nIn any case, backwards compatibility alone would be a sufficient\nreason to reject a patch that changes the established behavior\nof the existing format codes. Whether you think they're buggy or\nnot, other people are relying on the existing documented behavior.\n\nPerhaps we'd consider a patch that adds some new format codes with\nnew behavior. But personally I'd vote against implementing new\nformat codes unless you can point to well-established standards\nsupporting their definitions. to_char/to_date are impossibly\ncomplex and unmaintainable already; we don't need to add more\nfeatures with narrow use-cases to them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Jan 2020 19:22:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Updated the chg_to_date_yyyywwd.patch with additional tests (because it \nworks not only for 'D' pattern but also for all day patterns like 'Day' \nor 'DY'). Added the necessary documentation change.\n\n(The fix_to_char_yyyywwd.patch from \nf4e740a8de3ad1e762a28f6ff253ea4f%40four-two.de is still up-to-date)", "msg_date": "Fri, 10 Jan 2020 13:22:38 +0100", "msg_from": "Mark Lorenz <postgres@four-two.de>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Em sex., 10 de jan. de 2020 às 09:22, Mark Lorenz <postgres@four-two.de>\nescreveu:\n\n> Updated the chg_to_date_yyyywwd.patch with additional tests (because it\n> works not only for 'D' pattern but also for all day patterns like 'Day'\n> or 'DY'). 
Added the necessary documentation change.\n>\n> (The fix_to_char_yyyywwd.patch from\n> f4e740a8de3ad1e762a28f6ff253ea4f%40four-two.de is still up-to-date)\n\n\n\nThe following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nHi Mark,\n\nthis is a review of the patch: chg_to_date_yyyywwd.patch\n\nThere hasn't been any problem, at least that I've been able to find.\n\nThis one applies cleanly.\n\nThe entire compilation went without error as well.\n\n############# Without patch #############\n\npostgres=# SELECT to_date('2019-1-1', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-01\n(1 row)\n\npostgres=# SELECT to_date('2019-1-2', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-01\n(1 row)\n\npostgres=# SELECT to_date('2019-1-9', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-01\n(1 row)\n\n\n############# With patch #############\n\npostgres=# SELECT to_date('2019-1-1', 'YYYY-WW-D');\n to_date\n------------\n 2018-12-30\n(1 row)\n\npostgres=# SELECT to_date('2019-1-2', 'YYYY-WW-D');\n to_date\n------------\n 2018-12-31\n(1 row)\n\npostgres=# SELECT to_date('2019-1-9', 'YYYY-WW-D');\n to_date\n------------\n 2019-01-07\n(1 row)\n\n+1 for committer review\n\n--\nCleysson Lima", "msg_date": "Fri, 31 Jan 2020 19:34:18 -0300", "msg_from": "Cleysson Lima <cleyssondba@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Cleysson Lima <cleyssondba@gmail.com> writes:\n> this is a review of the patch: chg_to_date_yyyywwd.patch\n> There hasn't been any problem, at least that I've been able to find.\n\nAFAICS, the point of this patch is to make to_date symmetrical\nwith the definition of WW that the other patch wants for to_char.\nBut the other patch is wrong, for the reasons I explained upthread,\nso I doubt that we want this one either.\n\nI still think that it'd be 
necessary to invent at least one new\nformat field code in order to get to a sane version of this feature.\nAs they stand, 'WW' and 'D' do not agree on what a week is, and\nchanging the behavior of either one in order to make them agree\nis just not going to happen.\n\nBTW, I went to check on what Oracle thinks about this, since these\nfunctions are allegedly Oracle-compatible. On PG, I get this\nfor the WW and D values for the next few days:\n\nselect to_char(current_date+n, 'YYYY-MM-DD YYYY-WW-D Day')\nfrom generate_series(0,10) n;\n to_char \n--------------------------------\n 2020-01-31 2020-05-6 Friday \n 2020-02-01 2020-05-7 Saturday \n 2020-02-02 2020-05-1 Sunday \n 2020-02-03 2020-05-2 Monday \n 2020-02-04 2020-05-3 Tuesday \n 2020-02-05 2020-06-4 Wednesday\n 2020-02-06 2020-06-5 Thursday \n 2020-02-07 2020-06-6 Friday \n 2020-02-08 2020-06-7 Saturday \n 2020-02-09 2020-06-1 Sunday \n 2020-02-10 2020-06-2 Monday \n(11 rows)\n\nI did the same calculations using Oracle 11g R2 on sqlfiddle.com\nand got the same results. Interestingly, though, I also tried it on\n\nhttps://rextester.com/l/oracle_online_compiler\n\nand here's what I get there:\n\n2020-01-31 2020-05-5 Freitag\n2020-02-01 2020-05-6 Samstag\n2020-02-02 2020-05-7 Sonntag\n2020-02-03 2020-05-1 Montag\n2020-02-04 2020-05-2 Dienstag\n2020-02-05 2020-06-3 Mittwoch\n2020-02-06 2020-06-4 Donnerstag\n2020-02-07 2020-06-5 Freitag\n2020-02-08 2020-06-6 Samstag\n2020-02-09 2020-06-7 Sonntag\n2020-02-10 2020-06-1 Montag\n\n(I don't know how to switch locales on these sites, so I don't have\nany way to check what happens in other locales.)\n\nSo we agree with Oracle on what WW means, but they count D as 1-7\nstarting on either Sunday or Monday according to locale. I wonder\nwhether we should change to match that? Maybe \"TMD\" should act that\nway? 
It's already the case that their \"Day\" acts like our \"TMDay\",\nevidently.\n\nEither way, though, the WW weeks don't line up with the D weeks,\nand we're not likely to make them do so.\n\nSo I think an acceptable version of this feature has to involve\ndefining at least one new format code and maybe as many as three,\nto produce year, week and day values that agree on whichever\ndefinition of \"a week\" you want to use, and then to_date has to\nenforce that input uses matching year/week/day field types,\nvery much like it already does for ISO versus Gregorian dates.\n\nI also notice that neither patch touches the documentation.\nA minimum requirement here is defining what you think the underlying\n\"week\" is, if it's neither ISO nor the existing WW definition.\nAs I said before, it'd also be a good idea to provide some\nevidence that there are other people using that same week definition.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 31 Jan 2020 19:42:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "I wrote:\n> Either way, though, the WW weeks don't line up with the D weeks,\n> and we're not likely to make them do so.\n> So I think an acceptable version of this feature has to involve\n> defining at least one new format code and maybe as many as three,\n> to produce year, week and day values that agree on whichever\n> definition of \"a week\" you want to use, and then to_date has to\n> enforce that input uses matching year/week/day field types,\n> very much like it already does for ISO versus Gregorian dates.\n\nA different line of thought could be to accept the current to_char()\nbehavior for WW and D, and go ahead and teach to_date() to invert that.\nThat is, take YYYY plus WW as specifying a seven-day interval, and then\nD chooses the matching day within that interval. 
This would still have\nthe property you complained about originally that WW-plus-D don't form\na monotonically increasing sequence, but I think that ship has sailed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 01 Feb 2020 15:00:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "I just noticed that this patch has been classified under \"bug fixes\",\nbut per Tom's comments, this is not a bug fix -- it seems we would need\na new format code to implement some different week numbering mechanism.\nThat seems a new feature, not a bug fix.\n\nTherefore I propose to move this in Commitfest from \"Bug fixes\" to\n\"Server features\". This has implications such as not automatically\nmoving to next commitfest if no update appears during this one.\n\n\nI've never personally had to write calendaring applications, so I don't\nhave an opinion on whether this is useful. Why isn't it sufficient to\nrely on ISO week/day numbering (IW/ID), which appears to be more\nconsistent? I think we should consider adding more codes only if\nreal-world use cases exist for them.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 12:57:09 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Hi Tom,\n\nwith a bit space to this issue, I re-read your comments. I am beginning \nto understand what you mean or - better - what's wrong with my thoughts. \nWhen I understand you correctly, you say, the WW can start at any \nweekday, and is not fixed to Sunday, right? In your opinion the WW \nstarts with the weekday of Jan, 1st? 
That's what could be my problem: I \nalways thought (maybe triggered through the D pattern), that WW has to \nstart sundays. But, now I agree with you, the docs fit better to your \ninterpretation:\n\n\"the first week starts on the first day of the year\"\n\nI interpreted it with: It starts on the week, which includes the first \nof the year, but the Sunday before.\n\nDid I understand you correctly? In that case, I accept, that my patch is \nno bugfix (I think, it would be one, if my interpretion would be the \nexpected behaviour.).\n\nBut, nevertheless, what about adding the function to accept the DAY, D \n(and maybe the Q) patterns for to_date() - in this case, of course, in \nthe uncorrelated version? to_char() handles them properly. And, from my \npoint of view, there is no reason why they should give \"1\" instead the \nreal day number. What do you think?\n\n\n", "msg_date": "Mon, 23 Mar 2020 12:36:59 +0100", "msg_from": "Mark Lorenz <postgres@four-two.de>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "Mark Lorenz <postgres@four-two.de> writes:\n> with a bit space to this issue, I re-read your comments. I am beginning \n> to understand what you mean or - better - what's wrong with my thoughts. \n> When I understand you correctly, you say, the WW can start at any \n> weekday, and is not fixed to Sunday, right? In your opinion the WW \n> starts with the weekday of Jan, 1st? That's what could be my problem: I \n> always thought (maybe triggered through the D pattern), that WW has to \n> start sundays. 
But, now I agree with you, the docs fit better to your \n> interpretation:\n> \"the first week starts on the first day of the year\"\n\nYes, that's clearly what our code, and what Oracle's does too, given\nthe tests I showed upthread.\n\n> But, nevertheless, what about adding the function to accept the DAY, D \n> (and maybe the Q) patterns for to_date() - in this case, of course, in \n> the uncorrelated version? to_char() handles them properly. And, from my \n> point of view, there is no reason why they should give \"1\" instead the \n> real day number. What do you think?\n\nThe trick is to produce something sane. I think that a reasonable\nprecedent for this would be what to_date does with ISO-week fields:\nyou can ask it to parse IYYY-IW-ID but you can't mix that with regular\nmonth/day/year fields. So for example, it seems like it'd be possible\nto reconstruct a date from YYYY-WW-D, because that's enough to uniquely\nidentify a day. The D field isn't monotonically increasing within a\nweek, but nonetheless there's exactly one day in each YYYY-WW week that\nhas a particular D value. However you probably don't want to allow\ninconsistent mixtures like YYYY-WW-ID, because that's just a mess (and\nmore than likely, it's a mistake). And I would not be in favor of\nallowing YYYY-Q either, because that would not be enough to uniquely\nidentify a day, so there's really no point in allowing Q to enter into\nto_date's considerations at all.\n\nWhether there is actually any field demand for such a feature is\nnot clear to me. 
AFAICT Oracle doesn't support it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Apr 2020 12:39:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" }, { "msg_contents": "I wrote:\n> Mark Lorenz <postgres@four-two.de> writes:\n>> But, nevertheless, what about adding the function to accept the DAY, D \n>> (and maybe the Q) patterns for to_date() - in this case, of course, in \n>> the uncorrelated version? to_char() handles them properly. And, from my \n>> point of view, there is no reason why they should give \"1\" instead the \n>> real day number. What do you think?\n\n> The trick is to produce something sane. I think that a reasonable\n> precedent for this would be what to_date does with ISO-week fields:\n> you can ask it to parse IYYY-IW-ID but you can't mix that with regular\n> month/day/year fields. So for example, it seems like it'd be possible\n> to reconstruct a date from YYYY-WW-D, because that's enough to uniquely\n> identify a day. The D field isn't monotonically increasing within a\n> week, but nonetheless there's exactly one day in each YYYY-WW week that\n> has a particular D value. However you probably don't want to allow\n> inconsistent mixtures like YYYY-WW-ID, because that's just a mess (and\n> more than likely, it's a mistake). And I would not be in favor of\n> allowing YYYY-Q either, because that would not be enough to uniquely\n> identify a day, so there's really no point in allowing Q to enter into\n> to_date's considerations at all.\n> Whether there is actually any field demand for such a feature is\n> not clear to me. AFAICT Oracle doesn't support it.\n\nSince we're certainly not going to commit these patches as-presented,\nand nothing has happened on this thread since early April, I've marked\nboth the CF entries as Returned With Feedback. 
If you do write a patch\nto make to_date work as above, please file a new CF entry.\n\n(BTW, having two CF entries pointing at the same email thread is\npretty confusing to our not-that-bright tools. It's probably\nbetter to have just one entry per thread in the future.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 16:10:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Created feature for to_date() conversion using patterns\n 'YYYY-WW', 'YYYY-WW-D', 'YYYY-MM-W' and 'YYYY-MM-W-D'" } ]
[ { "msg_contents": "I want to read pg_database from pg_init...\n\nIs using heap_open() is possible? or else any other way is there ?", "msg_date": "Tue, 8 Oct 2019 22:03:03 +0530", "msg_from": "Natarajan R <nataraj3098@gmail.com>", "msg_from_op": true, "msg_subject": "pg_init" }, { "msg_contents": "On Tue, Oct 08, 2019 at 10:03:03PM +0530, Natarajan R wrote:\n>I want to read pg_database from pg_init...\n>\n>Is using heap_open() is possible? or else any other way is there ?\n\nThis is way too vague question - I have no idea what you mean by\npg_init, for example. And it's probably a good idea to explain what\nyou're trying to achieve.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 8 Oct 2019 18:47:59 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_init" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n\n> On Tue, Oct 08, 2019 at 10:03:03PM +0530, Natarajan R wrote:\n> >I want to read pg_database from pg_init...\n> >\n> >Is using heap_open() is possible? or else any other way is there ?\n> \n> This is way too vague question - I have no idea what you mean by\n> pg_init, for example. And it's probably a good idea to explain what\n> you're trying to achieve.\n\nThis question was familiar to me so I searched the archives. 
It seems related\nto\n\nhttps://www.postgresql.org/message-id/17058.1570166272%40sss.pgh.pa.us\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 08 Oct 2019 19:14:30 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pg_init" }, { "msg_contents": "On Wed, 9 Oct 2019 at 00:33, Natarajan R <nataraj3098@gmail.com> wrote:\n\n> I want to read pg_database from pg_init...\n>\n> Is using heap_open() is possible? or else any other way is there ?\n>\n\nIt's not possible from _PG_init .\n\nI replied to a similar thread with details on how bgworkers can access\ndifferent databases; look at the archives.\n\nThe gist is that you have to register a bgworker that attaches to shared\nmemory and to a database (or use InvalidOid if you only want shared catalog\naccess), then do your work from there.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Thu, 10 Oct 2019 07:52:17 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_init" } ]
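The bgworker pattern Craig describes in the thread above can be sketched roughly as follows. This is a schematic, not compile-tested fragment (treat it as pseudocode in C style): the library name "my_extension" and the entry point `my_worker_main` are placeholders, and error handling, PGDLLEXPORT declarations, and the worker's normal latch loop are elided. The API names themselves (RegisterBackgroundWorker, BackgroundWorkerInitializeConnectionByOid with InvalidOid for shared-catalog-only access) are the real PostgreSQL 12-era interfaces, but check the documentation for your version before relying on exact signatures.

```c
/*
 * Schematic sketch only -- not compile-tested.
 *
 * _PG_init() runs while the library is loaded (e.g. via
 * shared_preload_libraries), before catalog access is possible, so all it
 * can safely do is register a worker; the catalog reads happen later, in
 * the worker process itself.
 */
#include "postgres.h"
#include "miscadmin.h"
#include "postmaster/bgworker.h"
#include "access/table.h"       /* heap_open() was renamed table_open() in v12 */
#include "access/heapam.h"
#include "access/xact.h"
#include "catalog/pg_database.h"

void
_PG_init(void)
{
    BackgroundWorker worker;

    memset(&worker, 0, sizeof(worker));
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
                       BGWORKER_BACKEND_DATABASE_CONNECTION;
    worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
    snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");
    snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");
    snprintf(worker.bgw_name, BGW_MAXLEN, "pg_database reader");
    RegisterBackgroundWorker(&worker);
}

void
my_worker_main(Datum main_arg)
{
    Relation        rel;
    TableScanDesc   scan;
    HeapTuple       tup;

    BackgroundWorkerUnblockSignals();

    /* InvalidOid: attach to no particular database, shared catalogs only */
    BackgroundWorkerInitializeConnectionByOid(InvalidOid, InvalidOid, 0);

    StartTransactionCommand();
    rel = table_open(DatabaseRelationId, AccessShareLock);
    scan = table_beginscan_catalog(rel, 0, NULL);
    while ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)
    {
        Form_pg_database db = (Form_pg_database) GETSTRUCT(tup);

        elog(LOG, "saw database: %s", NameStr(db->datname));
    }
    table_endscan(scan);
    table_close(rel, AccessShareLock);
    CommitTransactionCommand();
}
```

To connect to one specific database instead, the worker would call BackgroundWorkerInitializeConnection with a database name rather than InvalidOid; either way, the point of the thread stands: the catalog access belongs in the worker, never in _PG_init() itself.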
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16045\nLogged by: Hans Buschmann\nEmail address: buschmann@nidsa.net\nPostgreSQL version: 12.0\nOperating system: Windows 10 64bit\nDescription: \n\nI just did a pg_upgrade from pg 11.5 to pg 12.0 on my development machine\nunder Windows 64bit (both distributions from EDB).\r\n\r\ncpsdb=# select version ();\r\n version\r\n------------------------------------------------------------\r\n PostgreSQL 12.0, compiled by Visual C++ build 1914, 64-bit\r\n(1 row)\r\n\r\nThe pg_upgrade with --link went flawlessly, I started (only!) the new server\n12.0 and could connect and access individual databases.\r\n\r\nAs recommended by the resulting analyze_new_cluster.bat I tried a full\nvacuumdb with:\r\n\r\n\"N:/pgsql/bin/vacuumdb\" -U postgres --all --analyze-only\r\n\r\nwhich crashed with\r\nvacuumdb: vacuuming database \"cpsdb\"\r\nvacuumdb: error: vacuuming of table \"admin.q_tbl_archiv\" in database \"cpsdb\"\nfailed: ERROR: compressed data is corrupted\r\n\r\nI connected to the database through psql and looked at the table\n\"admin.q_tbl_archiv\"\r\n\r\ncpsdb=# \\d+ q_tbl_archiv;\r\n Table\n\"admin.q_tbl_archiv\"\r\n Column | Type | Collation |\nNullable | Default | Storage | Stats target | Description\r\n------------------+------------------------------------+-----------+----------+---------+----------+--------------+-------------\r\n table_name | information_schema.sql_identifier | | \n | | plain | |\r\n column_name | information_schema.sql_identifier | | \n | | plain | |\r\n ordinal_position | information_schema.cardinal_number | | \n | | plain | |\r\n col_qualifier | text | | \n | | extended | |\r\n id_column | information_schema.sql_identifier | | \n | | plain | |\r\n id_default | information_schema.character_data | | \n | | extended | |\r\nAccess method: heap\r\n\r\nWhen trying to select * from q_tbl_archiv I got:\r\n\r\ncpsdb=# select * from q_tbl_archiv;\r\nERROR: invalid memory 
alloc request size 18446744073709551613\r\n\r\nThis table was created a long time back under 9.5 or 9.6 with the (here\ntruncated) following command:\r\n\r\n\r\ncreate table q_tbl_archiv as\r\nwith\r\nqseason as (\r\nselect table_name,column_name, ordinal_position \r\n,replace(column_name,'_season','') as col_qualifier\r\n-- ,'id_'||replace(column_name,'_season','') as id_column\r\nfrom information_schema.columns \r\nwhere \r\ncolumn_name like '%_season'\r\nand ordinal_position < 10\r\nand table_name in (\r\n 'table1'\r\n,'table2'\r\n-- here truncated:\r\n-- ... (here where all table of mine having columns like xxx_season)\r\n-- to reproduce, change to your own tablenames in a test database\r\n)\r\norder by table_name\r\n)\r\nselect qs.*,c.column_name as id_column, c.column_default as id_default\r\nfrom \r\n\tqseason qs\r\n\tleft join information_schema.columns c on c.table_name=qs.table_name and\nc.column_name like 'id_%'\r\n;\r\n\r\nUntil now this table was always restored without error by migrating to a new\nmajor version through pg_dump/initdb/pg_restore.\r\n\r\nTo verify the integrity of the table I restored the dump taken under pg_dump\nfrom pg 11.5 just before the pg_upgrade to another machine.\r\n\r\nThe restore and analyze went OK and select * from q_tbl_archiv showed all\ntuples, eg (edited):\r\n\r\ncpsdb_dev=# select * from q_tbl_archiv;\r\n table_name | column_name | ordinal_position | col_qualifier\n| id_column | id_default\r\n--------------------------+--------------+------------------+---------------+-----------+----------------------------------------------------------\r\n table1 | chm_season | 2 | chm \n| |\r\n table2 | cs_season | 2 | cs \n| id_cs | nextval('table2_id_cs_seq'::regclass)\r\n...\r\n\r\nIn conclusion, this seems to me like an error/omission of pg_upgrade.\r\n\r\nIt seems to handle these specially derived tables from information_schema\nnot correctly, resulting in failures of the upgraded database.\r\n\r\nFor me, this error is not so 
crucial, because this table is only used for\nadministrative purposes and can easily be restored from backup.\r\n\r\nBut I want to share my findings for the sake of other users of pg_upgrade.\r\n\r\nThanks for investigating!\r\n\r\nHans Buschmann", "msg_date": "Tue, 08 Oct 2019 17:08:53 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #16045: vacuum_db crash and illegal memory alloc after pg_upgrade\n from PG11 to PG12" }, { "msg_contents": "On Tue, Oct 08, 2019 at 05:08:53PM +0000, PG Bug reporting form wrote:\n>The following bug has been logged on the website:\n>\n>Bug reference: 16045\n>Logged by: Hans Buschmann\n>Email address: buschmann@nidsa.net\n>PostgreSQL version: 12.0\n>Operating system: Windows 10 64bit\n>Description:\n>\n>I just did a pg_upgrade from pg 11.5 to pg 12.0 on my development machine\n>under Windows 64bit (both distributions from EDB).\n>\n>cpsdb=# select version ();\n> version\n>------------------------------------------------------------\n> PostgreSQL 12.0, compiled by Visual C++ build 1914, 64-bit\n>(1 row)\n>\n>The pg_upgrade with --link went flawlessly, I started (only!) 
the new server\n>12.0 and could connect and access individual databases.\n>\n>As recommended by the resulting analyze_new_cluster.bat I tried a full\n>vacuumdb with:\n>\n>\"N:/pgsql/bin/vacuumdb\" -U postgres --all --analyze-only\n>\n>which crashed with\n>vacuumdb: vacuuming database \"cpsdb\"\n>vacuumdb: error: vacuuming of table \"admin.q_tbl_archiv\" in database \"cpsdb\"\n>failed: ERROR: compressed data is corrupted\n>\n>I connected to the database through pgsql and looked at the table\n>\"admin.q_tbl_archiv\"\n>\n>cpsdb=# \\d+ q_tbl_archiv;\n> Table\n>\"admin.q_tbl_archiv\"\n> Column | Type | Collation |\n>Nullable | Default | Storage | Stats target | Description\n>------------------+------------------------------------+-----------+----------+---------+----------+--------------+-------------\n> table_name | information_schema.sql_identifier | |\n> | | plain | |\n> column_name | information_schema.sql_identifier | |\n> | | plain | |\n> ordinal_position | information_schema.cardinal_number | |\n> | | plain | |\n> col_qualifier | text | |\n> | | extended | |\n> id_column | information_schema.sql_identifier | |\n> | | plain | |\n> id_default | information_schema.character_data | |\n> | | extended | |\n>Access method: heap\n>\n>When trying to select * from q_tbl_archiv I got:\n>\n>cpsdb=# select * from q_tbl_archiv;\n>ERROR: invalid memory alloc request size 18446744073709551613\n>\n>This table was created a long time back under 9.5 or 9.6 with the (here\n>truncated) following command:\n>\n>\n>create table q_tbl_archiv as\n>with\n>qseason as (\n>select table_name,column_name, ordinal_position\n>,replace(column_name,'_season','') as col_qualifier\n>-- ,'id_'||replace(column_name,'_season','') as id_column\n>from information_schema.columns\n>where\n>column_name like '%_season'\n>and ordinal_position < 10\n>and table_name in (\n> 'table1'\n>,'table2'\n>-- here truncated:\n>-- ... 
(here where all table of mine having columns like xxx_season)\n>-- to reproduce, change to your own tablenames in a test database\n>)\n>order by table_name\n>)\n>select qs.*,c.column_name as id_column, c.column_default as id_default\n>from\n>\tqseason qs\n>\tleft join information_schema.columns c on c.table_name=qs.table_name and\n>c.column_name like 'id_%'\n>;\n>\n>Until now this table was always restored without error by migrating to a new\n>major version through pg_dump/initdb/pr_restore.\n>\n>To verify the integrity of the table I restored the dump taken under pg_dump\n>from pg 11.5 just before the pg_upgrade to another machine.\n>\n>The restore and analyze went OK and select * from q_tbl_archiv showed all\n>tuples, eg (edited):\n>\n>cpsdb_dev=# select * from q_tbl_archiv;\n> table_name | column_name | ordinal_position | col_qualifier\n>| id_column | id_default\n>--------------------------+--------------+------------------+---------------+-----------+----------------------------------------------------------\n> table1 | chm_season | 2 | chm\n>| |\n> table2 | cs_season | 2 | cs\n>| id_cs | nextval('table2_id_cs_seq'::regclass)\n>...\n>\n>In conclusion, this seems to me like an error/omission of pg_upgrade.\n>\n\nThere's clearly something bad happening. It's a bit strange, though. Had\nthis been a data corruption issue, I'd expect the pg_dump to fail too,\nbut it succeeds.\n\n>It seems to handle these specially derived tables from information_schema\n>not correctly, resulting in failures of the upgraded database.\n>\n\nWell, I don't see how that should make any difference. It's a CTAS and\nthat should create a regular table, that's not an issue. 
I wonder if\nthere were some changes to the data types involved, but that would be\nessentially a break in on-disk format and we're careful about not doing\nthat ...\n\n>For me, this error is not so crucial, because this table is only used for\n>administrative purposes and can easily be restored from backup.\n>\n>But I want to share my findings for the sake of other users of pg_upgrade.\n>\n\nOK, thanks. Could you maybe set\n\n log_error_verbosity = verbose\n\nbefore invoking the vacuum (you can set that in that session)? That\nshould give us more details about where exactly the error is triggered.\nEven better, if you could attach a debugger to the session, set\nbreakpoints on locations triggering 'invalid memory alloc request size'\nand then show the backtrace (obviously, that's more complicated).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 9 Oct 2019 15:24:36 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "FWIW I can reproduce this - it's enough to do this on the 11 cluster\n\ncreate table q_tbl_archiv as\nwith\nqseason as (\nselect table_name,column_name, ordinal_position\n,replace(column_name,'_season','') as col_qualifier\n-- ,'id_'||replace(column_name,'_season','') as id_column\nfrom information_schema.columns\norder by table_name\n)\nselect qs.*,c.column_name as id_column, c.column_default as id_default\nfrom\n qseason qs\n left join information_schema.columns c on c.table_name=qs.table_name and\nc.column_name like 'id_%';\n\n\nand then\n\n analyze q_tbl_archiv\n\nwhich produces backtrace like this:\n\nNo symbol \"stats\" in current context.\n(gdb) bt\n#0 0x0000746095262951 in __memmove_avx_unaligned_erms () from /lib64/libc.so.6\n#1 0x0000000000890a8e in 
varstrfastcmp_locale (a1p=0x17716b4 \"per_language\\a\", len1=<optimized out>, a2p=0x176af28 '\\177' <repeats 136 times>, \"\\021\\004\", len2=-4, ssup=<optimized out>, ssup=<optimized out>) at varlena.c:2320\n#2 0x0000000000890cb1 in varlenafastcmp_locale (x=24581808, y=24555300, ssup=0x7ffc649463f0) at varlena.c:2219\n#3 0x00000000005b73b4 in ApplySortComparator (ssup=0x7ffc649463f0, isNull2=false, datum2=<optimized out>, isNull1=false, datum1=<optimized out>) at ../../../src/include/utils/sortsupport.h:224\n#4 compare_scalars (a=<optimized out>, b=<optimized out>, arg=0x7ffc649463e0) at analyze.c:2700\n#5 0x00000000008f9953 in qsort_arg (a=a@entry=0x178fdc0, n=<optimized out>, n@entry=2158, es=es@entry=16, cmp=cmp@entry=0x5b7390 <compare_scalars>, arg=arg@entry=0x7ffc649463e0) at qsort_arg.c:140\n#6 0x00000000005b86a6 in compute_scalar_stats (stats=0x176a208, fetchfunc=<optimized out>, samplerows=<optimized out>, totalrows=2158) at analyze.c:2273\n#7 0x00000000005b9d95 in do_analyze_rel (onerel=onerel@entry=0x74608c00d3e8, params=params@entry=0x7ffc64946970, va_cols=va_cols@entry=0x0, acquirefunc=<optimized out>, relpages=22, inh=inh@entry=false, in_outer_xact=false, elevel=13)\n at analyze.c:529\n#8 0x00000000005bb2c9 in analyze_rel (relid=<optimized out>, relation=<optimized out>, params=params@entry=0x7ffc64946970, va_cols=0x0, in_outer_xact=<optimized out>, bstrategy=<optimized out>) at analyze.c:260\n#9 0x000000000062c7b0 in vacuum (relations=0x1727120, params=params@entry=0x7ffc64946970, bstrategy=<optimized out>, bstrategy@entry=0x0, isTopLevel=isTopLevel@entry=true) at vacuum.c:413\n#10 0x000000000062cd49 in ExecVacuum (pstate=pstate@entry=0x16c9518, vacstmt=vacstmt@entry=0x16a82b8, isTopLevel=isTopLevel@entry=true) at vacuum.c:199\n#11 0x00000000007a6d64 in standard_ProcessUtility (pstmt=0x16a8618, queryString=0x16a77a8 \"\", context=<optimized out>, params=0x0, queryEnv=0x0, dest=0x16a8710, completionTag=0x7ffc64946cb0 \"\") at utility.c:670\n#12 
0x00000000007a4006 in PortalRunUtility (portal=0x170f368, pstmt=0x16a8618, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=0x16a8710, completionTag=0x7ffc64946cb0 \"\") at pquery.c:1175\n#13 0x00000000007a4b61 in PortalRunMulti (portal=portal@entry=0x170f368, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x16a8710, altdest=altdest@entry=0x16a8710,\n completionTag=completionTag@entry=0x7ffc64946cb0 \"\") at pquery.c:1321\n#14 0x00000000007a5864 in PortalRun (portal=portal@entry=0x170f368, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x16a8710, altdest=altdest@entry=0x16a8710,\n completionTag=0x7ffc64946cb0 \"\") at pquery.c:796\n#15 0x00000000007a174e in exec_simple_query (query_string=0x16a77a8 \"\") at postgres.c:1215\n\nLooking at compute_scalar_stats, the \"stats\" parameter does not seem\nparticularly healthy:\n\n(gdb) p *stats\n$3 = {attr = 0x10, attrtypid = 12, attrtypmod = 0, attrtype = 0x1762e00, attrcollid = 356, anl_context = 0x7f7f7f7e00000000, compute_stats = 0x100, minrows = 144, extra_data = 0x1762e00, stats_valid = false, stanullfrac = 0,\n stawidth = 0, stadistinct = 0, stakind = {0, 0, 0, 0, 0}, staop = {0, 0, 0, 0, 0}, stacoll = {0, 0, 0, 0, 0}, numnumbers = {0, 0, 0, 0, 0}, stanumbers = {0x0, 0x0, 0x0, 0x0, 0x0}, numvalues = {0, 0, 0, 0, 2139062142}, stavalues = {\n 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f}, statypid = {2139062143, 2139062143, 2139062143, 2139062143, 2139062143}, statyplen = {32639, 32639, 32639, 32639, 32639},\n statypbyval = {127, 127, 127, 127, 127}, statypalign = \"\\177\\177\\177\\177\\177\", tupattnum = 2139062143, rows = 0x7f7f7f7f7f7f7f7f, tupDesc = 0x7f7f7f7f7f7f7f7f, exprvals = 0x8, exprnulls = 0x4, rowstride = 24522240}\n\nNot sure about the root cause yet.\n\nregards\n\n-- \nTomas Vondra 
http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 9 Oct 2019 15:59:07 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> FWIW I can reproduce this - it's enough to do this on the 11 cluster\n\nI failed to reproduce any problem from your example, but I was trying\nin C locale on a Linux machine. What environment are you testing?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Oct 2019 10:07:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Wed, Oct 09, 2019 at 03:59:07PM +0200, Tomas Vondra wrote:\n>FWIW I can reproduce this - it's enough to do this on the 11 cluster\n>\n>create table q_tbl_archiv as\n>with\n>qseason as (\n>select table_name,column_name, ordinal_position\n>,replace(column_name,'_season','') as col_qualifier\n>-- ,'id_'||replace(column_name,'_season','') as id_column\n>from information_schema.columns\n>order by table_name\n>)\n>select qs.*,c.column_name as id_column, c.column_default as id_default\n>from\n> qseason qs\n> left join information_schema.columns c on c.table_name=qs.table_name and\n>c.column_name like 'id_%';\n>\n>\n>and then\n>\n> analyze q_tbl_archiv\n>\n>which produces backtrace like this:\n>\n>No symbol \"stats\" in current context.\n>(gdb) bt\n>#0 0x0000746095262951 in __memmove_avx_unaligned_erms () from /lib64/libc.so.6\n>#1 0x0000000000890a8e in varstrfastcmp_locale (a1p=0x17716b4 \"per_language\\a\", len1=<optimized out>, a2p=0x176af28 '\\177' <repeats 136 times>, \"\\021\\004\", len2=-4, ssup=<optimized out>, ssup=<optimized out>) at varlena.c:2320\n>#2 
0x0000000000890cb1 in varlenafastcmp_locale (x=24581808, y=24555300, ssup=0x7ffc649463f0) at varlena.c:2219\n>#3 0x00000000005b73b4 in ApplySortComparator (ssup=0x7ffc649463f0, isNull2=false, datum2=<optimized out>, isNull1=false, datum1=<optimized out>) at ../../../src/include/utils/sortsupport.h:224\n>#4 compare_scalars (a=<optimized out>, b=<optimized out>, arg=0x7ffc649463e0) at analyze.c:2700\n>#5 0x00000000008f9953 in qsort_arg (a=a@entry=0x178fdc0, n=<optimized out>, n@entry=2158, es=es@entry=16, cmp=cmp@entry=0x5b7390 <compare_scalars>, arg=arg@entry=0x7ffc649463e0) at qsort_arg.c:140\n>#6 0x00000000005b86a6 in compute_scalar_stats (stats=0x176a208, fetchfunc=<optimized out>, samplerows=<optimized out>, totalrows=2158) at analyze.c:2273\n>#7 0x00000000005b9d95 in do_analyze_rel (onerel=onerel@entry=0x74608c00d3e8, params=params@entry=0x7ffc64946970, va_cols=va_cols@entry=0x0, acquirefunc=<optimized out>, relpages=22, inh=inh@entry=false, in_outer_xact=false, elevel=13)\n> at analyze.c:529\n>#8 0x00000000005bb2c9 in analyze_rel (relid=<optimized out>, relation=<optimized out>, params=params@entry=0x7ffc64946970, va_cols=0x0, in_outer_xact=<optimized out>, bstrategy=<optimized out>) at analyze.c:260\n>#9 0x000000000062c7b0 in vacuum (relations=0x1727120, params=params@entry=0x7ffc64946970, bstrategy=<optimized out>, bstrategy@entry=0x0, isTopLevel=isTopLevel@entry=true) at vacuum.c:413\n>#10 0x000000000062cd49 in ExecVacuum (pstate=pstate@entry=0x16c9518, vacstmt=vacstmt@entry=0x16a82b8, isTopLevel=isTopLevel@entry=true) at vacuum.c:199\n>#11 0x00000000007a6d64 in standard_ProcessUtility (pstmt=0x16a8618, queryString=0x16a77a8 \"\", context=<optimized out>, params=0x0, queryEnv=0x0, dest=0x16a8710, completionTag=0x7ffc64946cb0 \"\") at utility.c:670\n>#12 0x00000000007a4006 in PortalRunUtility (portal=0x170f368, pstmt=0x16a8618, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=0x16a8710, completionTag=0x7ffc64946cb0 \"\") at 
pquery.c:1175\n>#13 0x00000000007a4b61 in PortalRunMulti (portal=portal@entry=0x170f368, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x16a8710, altdest=altdest@entry=0x16a8710,\n> completionTag=completionTag@entry=0x7ffc64946cb0 \"\") at pquery.c:1321\n>#14 0x00000000007a5864 in PortalRun (portal=portal@entry=0x170f368, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x16a8710, altdest=altdest@entry=0x16a8710,\n> completionTag=0x7ffc64946cb0 \"\") at pquery.c:796\n>#15 0x00000000007a174e in exec_simple_query (query_string=0x16a77a8 \"\") at postgres.c:1215\n>\n>Looking at compute_scalar_stats, the \"stats\" parameter does not seem\n>particularly healthy:\n>\n>(gdb) p *stats\n>$3 = {attr = 0x10, attrtypid = 12, attrtypmod = 0, attrtype = 0x1762e00, attrcollid = 356, anl_context = 0x7f7f7f7e00000000, compute_stats = 0x100, minrows = 144, extra_data = 0x1762e00, stats_valid = false, stanullfrac = 0,\n> stawidth = 0, stadistinct = 0, stakind = {0, 0, 0, 0, 0}, staop = {0, 0, 0, 0, 0}, stacoll = {0, 0, 0, 0, 0}, numnumbers = {0, 0, 0, 0, 0}, stanumbers = {0x0, 0x0, 0x0, 0x0, 0x0}, numvalues = {0, 0, 0, 0, 2139062142}, stavalues = {\n> 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f}, statypid = {2139062143, 2139062143, 2139062143, 2139062143, 2139062143}, statyplen = {32639, 32639, 32639, 32639, 32639},\n> statypbyval = {127, 127, 127, 127, 127}, statypalign = \"\\177\\177\\177\\177\\177\", tupattnum = 2139062143, rows = 0x7f7f7f7f7f7f7f7f, tupDesc = 0x7f7f7f7f7f7f7f7f, exprvals = 0x8, exprnulls = 0x4, rowstride = 24522240}\n>\n>Not sure about the root cause yet.\n>\n\nOK, a couple more observations - the table schema looks like this:\n\n Table \"public.q_tbl_archiv\"\n Column | Type | Collation | Nullable | Default 
\n------------------+------------------------------------+-----------+----------+---------\n table_name | information_schema.sql_identifier | | | \n column_name | information_schema.sql_identifier | | | \n ordinal_position | information_schema.cardinal_number | | | \n col_qualifier | text | | | \n id_column | information_schema.sql_identifier | | | \n id_default | information_schema.character_data | | | \n\nand I can succesfully do this:\n\n test=# analyze q_tbl_archiv (table_name, column_name, ordinal_position, id_column, id_default);\n ANALYZE\n\nbut as soon as I include the col_qualifier column, it fails:\n\n test=# analyze q_tbl_archiv (table_name, column_name, ordinal_position, id_column, id_default, col_qualifier);\n ERROR: compressed data is corrupted\n\nBut it fails differently (with the segfault) when analyzing just the one\ncolumn:\n\n test=# analyze q_tbl_archiv (col_qualifier);\n server closed the connection unexpectedly\n \tThis probably means the server terminated abnormally\n \tbefore or while processing the request.\n The connection to the server was lost. Attempting reset: Succeeded.\n\nMoreover, there are some other interesting failures - I can do\n\n select max(table_name) from q_tbl_archiv;\n select max(column_name) from q_tbl_archiv;\n select max(ordinal_position) from q_tbl_archiv;\n\nbut as soon as I try doing that with col_qualifier, it crashes and\nburns:\n\n select max(col_qualifier) from q_tbl_archiv;\n\nThe backtrace is rather strange in this case (a lot of missing calls,\netc.). 
However, when called for the next two columns, it still crashes,\nbut the backtraces look somewhat saner:\n\n select max(id_column) from q_tbl_archiv;\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x00007db3186c6617 in __strlen_avx2 () from /lib64/libc.so.6\n(gdb) bt\n#0 0x00007db3186c6617 in __strlen_avx2 () from /lib64/libc.so.6\n#1 0x0000000000894ced in cstring_to_text (s=0x7db32ce38935 <error: Cannot access memory at address 0x7db32ce38935>) at varlena.c:173\n#2 name_text (fcinfo=<optimized out>) at varlena.c:3573\n#3 0x000000000063860d in ExecInterpExpr (state=0x1136900, econtext=0x1135128, isnull=<optimized out>) at execExprInterp.c:649\n#4 0x000000000064f699 in ExecEvalExprSwitchContext (isNull=0x7ffcfd8f3b2f, econtext=<optimized out>, state=<optimized out>) at ../../../src/include/executor/executor.h:307\n#5 advance_aggregates (aggstate=0x1134ef0, aggstate=0x1134ef0) at nodeAgg.c:679\n#6 agg_retrieve_direct (aggstate=0x1134ef0) at nodeAgg.c:1847\n#7 ExecAgg (pstate=0x1134ef0) at nodeAgg.c:1572\n#8 0x000000000063b58b in ExecProcNode (node=0x1134ef0) at ../../../src/include/executor/executor.h:239\n#9 ExecutePlan (execute_once=<optimized out>, dest=0x1144248, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1134ef0, estate=0x1134c98)\n at execMain.c:1646\n#10 standard_ExecutorRun (queryDesc=0x1094f18, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#11 0x00000000007a43cc in PortalRunSelect (portal=0x10da368, forward=<optimized out>, count=0, dest=<optimized out>) at pquery.c:929\n#12 0x00000000007a5958 in PortalRun (portal=portal@entry=0x10da368, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x1144248, altdest=altdest@entry=0x1144248,\n completionTag=0x7ffcfd8f3db0 \"\") at pquery.c:770\n#13 0x00000000007a177e in exec_simple_query 
(query_string=0x10727a8 \"select max(id_column) from q_tbl_archiv ;\") at postgres.c:1215\n#14 0x00000000007a2f3f in PostgresMain (argc=<optimized out>, argv=argv@entry=0x109e400, dbname=<optimized out>, username=<optimized out>) at postgres.c:4236\n#15 0x00000000007237ce in BackendRun (port=0x1097c30, port=0x1097c30) at postmaster.c:4437\n#16 BackendStartup (port=0x1097c30) at postmaster.c:4128\n#17 ServerLoop () at postmaster.c:1704\n#18 0x000000000072458e in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x106c350) at postmaster.c:1377\n#19 0x000000000047d101 in main (argc=3, argv=0x106c350) at main.c:228\n\n select max(id_default) from q_tbl_archiv;\n\nProgram received signal SIGABRT, Aborted.\n0x00007db3185a1e35 in raise () from /lib64/libc.so.6\n(gdb) bt\n#0 0x00007db3185a1e35 in raise () from /lib64/libc.so.6\n#1 0x00007db31858c895 in abort () from /lib64/libc.so.6\n#2 0x00000000008b4470 in ExceptionalCondition (conditionName=conditionName@entry=0xabe49e \"1\", errorType=errorType@entry=0x907128 \"unrecognized TOAST vartag\", fileName=fileName@entry=0xa4965b \"execTuples.c\",\n lineNumber=lineNumber@entry=971) at assert.c:54\n#3 0x00000000006466d3 in slot_deform_heap_tuple (natts=6, offp=0x1135170, tuple=<optimized out>, slot=0x1135128) at execTuples.c:985\n#4 tts_buffer_heap_getsomeattrs (slot=0x1135128, natts=<optimized out>) at execTuples.c:676\n#5 0x00000000006489fc in slot_getsomeattrs_int (slot=slot@entry=0x1135128, attnum=6) at execTuples.c:1877\n#6 0x00000000006379a3 in slot_getsomeattrs (attnum=<optimized out>, slot=0x1135128) at ../../../src/include/executor/tuptable.h:345\n#7 ExecInterpExpr (state=0x11364b0, econtext=0x1134cd8, isnull=<optimized out>) at execExprInterp.c:441\n#8 0x000000000064f699 in ExecEvalExprSwitchContext (isNull=0x7ffcfd8f3b2f, econtext=<optimized out>, state=<optimized out>) at ../../../src/include/executor/executor.h:307\n#9 advance_aggregates (aggstate=0x1134aa0, aggstate=0x1134aa0) at nodeAgg.c:679\n#10 
agg_retrieve_direct (aggstate=0x1134aa0) at nodeAgg.c:1847\n#11 ExecAgg (pstate=0x1134aa0) at nodeAgg.c:1572\n#12 0x000000000063b58b in ExecProcNode (node=0x1134aa0) at ../../../src/include/executor/executor.h:239\n#13 ExecutePlan (execute_once=<optimized out>, dest=0x11439d8, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1134aa0, estate=0x1134848)\n at execMain.c:1646\n#14 standard_ExecutorRun (queryDesc=0x1094f18, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#15 0x00000000007a43cc in PortalRunSelect (portal=0x10da368, forward=<optimized out>, count=0, dest=<optimized out>) at pquery.c:929\n#16 0x00000000007a5958 in PortalRun (portal=portal@entry=0x10da368, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x11439d8, altdest=altdest@entry=0x11439d8,\n completionTag=0x7ffcfd8f3db0 \"\") at pquery.c:770\n#17 0x00000000007a177e in exec_simple_query (query_string=0x10727a8 \"select max(id_default) from q_tbl_archiv ;\") at postgres.c:1215\n#18 0x00000000007a2f3f in PostgresMain (argc=<optimized out>, argv=argv@entry=0x109e4f0, dbname=<optimized out>, username=<optimized out>) at postgres.c:4236\n#19 0x00000000007237ce in BackendRun (port=0x10976f0, port=0x10976f0) at postmaster.c:4437\n#20 BackendStartup (port=0x10976f0) at postmaster.c:4128\n#21 ServerLoop () at postmaster.c:1704\n#22 0x000000000072458e in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x106c350) at postmaster.c:1377\n#23 0x000000000047d101 in main (argc=3, argv=0x106c350) at main.c:228\n\n\nIt's quite puzzling, though. 
If I had to guess, I'd say it's some sort\nof memory management issue (either we're corrupting it somehow, or\nperhaps using it after pfree).\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 9 Oct 2019 16:18:41 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Wed, Oct 09, 2019 at 10:07:01AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> FWIW I can reproduce this - it's enough to do this on the 11 cluster\n>\n>I failed to reproduce any problem from your example, but I was trying\n>in C locale on a Linux machine. What environment are you testing?\n>\n>\t\t\tregards, tom lane\n\ntest=# show lc_collate ;\n lc_collate \n------------\n C.UTF-8\n(1 row)\n\n\nI can reproduce this pretty easily like this:\n\n1) build 11\n\ngit checkout REL_11_STABLE\n./configure --prefix=/home/user/pg-11 --enable-debug --enable-cassert && make -s clean && make -s -j4 install\n\n2) build 12\n\ngit checkout REL_12_STABLE\n./configure --prefix=/home/user/pg-12 --enable-debug --enable-cassert && make -s clean && make -s -j4 install\n\n3) create the 11 cluster\n\n/home/user/pg-11/bin/pg_ctl -D /tmp/data-11 init\n/home/user/pg-11/bin/pg_ctl -D /tmp/data-11 -l /tmp/pg-11.log start\n/home/user/pg-11/bin/createdb test\n/home/user/pg-11/bin/psql test\n\n4) create the table\n\n create table q_tbl_archiv as\n with\n qseason as (\n select table_name,column_name, ordinal_position\n ,replace(column_name,'_season','') as col_qualifier\n -- ,'id_'||replace(column_name,'_season','') as id_column\n from information_schema.columns\n order by table_name\n )\n select qs.*,c.column_name as id_column, c.column_default as id_default\n from\n qseason qs\n left join information_schema.columns c on 
c.table_name=qs.table_name and\n c.column_name like 'id_%';\n\n5) shutdown the 11 cluster\n\n /home/user/pg-11/bin/pg_ctl -D /tmp/data-11 stop\n\n6) init 12 cluster\n\n /home/user/pg-12/bin/pg_ctl -D /tmp/data-12 init\n\n7) do the pg_upgrade thing\n\n /home/user/pg-12/bin/pg_upgrade -b /home/user/pg-11/bin -B /home/user/pg-12/bin -d /tmp/data-11 -D /tmp/data-12 -k\n\n8) start 12 cluster\n\n /home/user/pg-12/bin/pg_ctl -D /tmp/data-12 -l /tmp/pg-12.log start\n\n9) kabooom\n\n /home/user/pg-12/bin/psql test -c \"analyze q_tbl_archiv\"\n\n\nOn my system (Fedora 30 in x86_64) this reliably results a crash (and\nvarious other crashes as demonstrated in my previous message).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 9 Oct 2019 16:28:27 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "Hi Tomas,\n\nNice that you could reproduce it.\nThis was just the way I followed.\n\nFor your Info, here are my no-standard config params:\n\n name | current_setting \n------------------------------------+---------------------------------\n application_name | psql \n auto_explain.log_analyze | on \n auto_explain.log_min_duration | 0 \n auto_explain.log_nested_statements | on \n client_encoding | WIN1252 \n cluster_name | HB_DEV \n data_checksums | on \n DateStyle | ISO, DMY \n default_text_search_config | pg_catalog.german \n dynamic_shared_memory_type | windows \n effective_cache_size | 8GB \n lc_collate | C \n lc_ctype | German_Germany.1252 \n lc_messages | C \n lc_monetary | German_Germany.1252 \n lc_numeric | German_Germany.1252 \n lc_time | German_Germany.1252 \n log_destination | stderr \n log_directory | N:/ZZ_log/pg_log_hbdev \n log_error_verbosity | verbose \n log_file_mode | 0640 \n log_line_prefix | 
WHB %a %t %i %e %2l:> \n log_statement | mod \n log_temp_files | 0 \n log_timezone | CET \n logging_collector | on \n maintenance_work_mem | 128MB \n max_connections | 100 \n max_stack_depth | 2MB \n max_wal_size | 1GB \n min_wal_size | 80MB \n pg_stat_statements.max | 5000 \n pg_stat_statements.track | all \n random_page_cost | 1 \n search_path | public, archiv, ablage, admin \n server_encoding | UTF8 \n server_version | 12.0 \n shared_buffers | 1GB \n shared_preload_libraries | auto_explain,pg_stat_statements \n temp_buffers | 32MB \n TimeZone | CET \n transaction_deferrable | off \n transaction_isolation | read committed \n transaction_read_only | off \n update_process_title | off \n wal_buffers | 16MB \n wal_segment_size | 16MB \n work_mem | 32MB \n(48 rows)\n\nIndeed, the database has UTF8 Encoding.\n\nThe Extended error-log (i have set auto_explain):\n\n\n\nWHB psql 2019-10-09 15:45:03 CEST XX000 7:> ERROR: XX000: invalid memory alloc request size 18446744073709551613\nWHB psql 2019-10-09 15:45:03 CEST XX000 8:> LOCATION: palloc, d:\\pginstaller_12.auto\\postgres.windows-x64\\src\\backend\\utils\\mmgr\\mcxt.c:934\nWHB psql 2019-10-09 15:45:03 CEST XX000 9:> STATEMENT: select * from q_tbl_archiv;\nWHB vacuumdb 2019-10-09 15:46:42 CEST 00000 1:> LOG: 00000: duration: 0.022 ms plan:\n\tQuery Text: SELECT pg_catalog.set_config('search_path', '', false);\n\tResult (cost=0.00..0.01 rows=1 width=32) (actual time=0.014..0.015 rows=1 loops=1)\nWHB vacuumdb 2019-10-09 15:46:42 CEST 00000 2:> LOCATION: explain_ExecutorEnd, d:\\pginstaller_12.auto\\postgres.windows-x64\\contrib\\auto_explain\\auto_explain.c:415\nWHB vacuumdb 2019-10-09 15:46:42 CEST 00000 3:> LOG: 00000: duration: 0.072 ms plan:\n\tQuery Text: SELECT datname FROM pg_database WHERE datallowconn ORDER BY 1;\n\tSort (cost=1.16..1.16 rows=1 width=64) (actual time=0.063..0.064 rows=14 loops=1)\n\t Sort Key: datname\n\t Sort Method: quicksort Memory: 26kB\n\t -> Seq Scan on pg_database (cost=0.00..1.15 rows=1 
width=64) (actual time=0.018..0.022 rows=14 loops=1)\n\t Filter: datallowconn\n\t Rows Removed by Filter: 1\nWHB vacuumdb 2019-10-09 15:46:42 CEST 00000 4:> LOCATION: explain_ExecutorEnd, d:\\pginstaller_12.auto\\postgres.windows-x64\\contrib\\auto_explain\\auto_explain.c:415\nWHB vacuumdb 2019-10-09 15:46:43 CEST 00000 1:> LOG: 00000: duration: 0.027 ms plan:\n\tQuery Text: SELECT pg_catalog.set_config('search_path', '', false);\n\tResult (cost=0.00..0.01 rows=1 width=32) (actual time=0.012..0.013 rows=1 loops=1)\nWHB vacuumdb 2019-10-09 15:46:43 CEST 00000 2:> LOCATION: explain_ExecutorEnd, d:\\pginstaller_12.auto\\postgres.windows-x64\\contrib\\auto_explain\\auto_explain.c:415\nWHB vacuumdb 2019-10-09 15:46:43 CEST 00000 3:> LOG: 00000: duration: 1.036 ms plan:\n\tQuery Text: SELECT c.relname, ns.nspname FROM pg_catalog.pg_class c\n\t JOIN pg_catalog.pg_namespace ns ON c.relnamespace OPERATOR(pg_catalog.=) ns.oid\n\t LEFT JOIN pg_catalog.pg_class t ON c.reltoastrelid OPERATOR(pg_catalog.=) t.oid\n\t WHERE c.relkind OPERATOR(pg_catalog.=) ANY (array['r', 'm'])\n\t ORDER BY c.relpages DESC;\n\tSort (cost=56.56..56.59 rows=13 width=132) (actual time=0.843..0.854 rows=320 loops=1)\n\t Sort Key: c.relpages DESC\n\t Sort Method: quicksort Memory: 110kB\n\t -> Hash Join (cost=1.23..56.32 rows=13 width=132) (actual time=0.082..0.649 rows=320 loops=1)\n\t Hash Cond: (c.relnamespace = ns.oid)\n\t -> Seq Scan on pg_class c (cost=0.00..55.05 rows=13 width=76) (actual time=0.034..0.545 rows=320 loops=1)\n\t Filter: ((relkind)::text = ANY ('{r,m}'::text[]))\n\t Rows Removed by Filter: 950\n\t -> Hash (cost=1.10..1.10 rows=10 width=68) (actual time=0.022..0.022 rows=10 loops=1)\n\t Buckets: 1024 Batches: 1 Memory Usage: 9kB\n\t -> Seq Scan on pg_namespace ns (cost=0.00..1.10 rows=10 width=68) (actual time=0.010..0.011 rows=10 loops=1)\nWHB vacuumdb 2019-10-09 15:46:43 CEST 00000 4:> LOCATION: explain_ExecutorEnd, 
d:\\pginstaller_12.auto\\postgres.windows-x64\\contrib\\auto_explain\\auto_explain.c:415\nWHB vacuumdb 2019-10-09 15:46:43 CEST 00000 5:> LOG: 00000: duration: 0.011 ms plan:\n\tQuery Text: SELECT pg_catalog.set_config('search_path', '', false);\n\tResult (cost=0.00..0.01 rows=1 width=32) (actual time=0.008..0.008 rows=1 loops=1)\nWHB vacuumdb 2019-10-09 15:46:43 CEST 00000 6:> LOCATION: explain_ExecutorEnd, d:\\pginstaller_12.auto\\postgres.windows-x64\\contrib\\auto_explain\\auto_explain.c:415\nWHB 2019-10-09 15:47:01 CEST 00000 22:> LOG: 00000: server process (PID 4708) was terminated by exception 0xC0000005\nWHB 2019-10-09 15:47:01 CEST 00000 23:> DETAIL: Failed process was running: ANALYZE admin.q_tbl_archiv;\nWHB 2019-10-09 15:47:01 CEST 00000 24:> HINT: See C include file \"ntstatus.h\" for a description of the hexadecimal value.\nWHB 2019-10-09 15:47:01 CEST 00000 25:> LOCATION: LogChildExit, d:\\pginstaller_12.auto\\postgres.windows-x64\\src\\backend\\postmaster\\postmaster.c:3670\nWHB 2019-10-09 15:47:01 CEST 00000 26:> LOG: 00000: terminating any other active server processes\nWHB 2019-10-09 15:47:01 CEST 00000 27:> LOCATION: HandleChildCrash, d:\\pginstaller_12.auto\\postgres.windows-x64\\src\\backend\\postmaster\\postmaster.c:3400\nWHB psql 2019-10-09 15:47:01 CEST 57P02 10:> WARNING: 57P02: terminating connection because of crash of another server process\nWHB psql 2019-10-09 15:47:01 CEST 57P02 11:> DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\nWHB psql 2019-10-09 15:47:01 CEST 57P02 12:> HINT: In a moment you should be able to reconnect to the database and repeat your command.\nWHB psql 2019-10-09 15:47:01 CEST 57P02 13:> LOCATION: quickdie, d:\\pginstaller_12.auto\\postgres.windows-x64\\src\\backend\\tcop\\postgres.c:2717\nWHB 2019-10-09 15:47:02 CEST 57P02 3:> WARNING: 57P02: terminating connection 
because of crash of another server process\nWHB 2019-10-09 15:47:02 CEST 57P02 4:> DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\nWHB 2019-10-09 15:47:02 CEST 57P02 5:> HINT: In a moment you should be able to reconnect to the database and repeat your command.\nWHB 2019-10-09 15:47:02 CEST 57P02 6:> LOCATION: quickdie, d:\\pginstaller_12.auto\\postgres.windows-x64\\src\\backend\\tcop\\postgres.c:2717\nWHB 2019-10-09 15:47:02 CEST 00000 28:> LOG: 00000: all server processes terminated; reinitializing\nWHB 2019-10-09 15:47:02 CEST 00000 29:> LOCATION: PostmasterStateMachine, d:\\pginstaller_12.auto\\postgres.windows-x64\\src\\backend\\postmaster\\postmaster.c:3912\nWHB 2019-10-09 15:47:02 CEST 00000 1:> LOG: 00000: database system was interrupted; last known up at 2019-10-09 15:46:03 CEST\nWHB 2019-10-09 15:47:02 CEST 00000 2:> LOCATION: StartupXLOG, d:\\pginstaller_12.auto\\postgres.windows-x64\\src\\backend\\access\\transam\\xlog.c:6277\n\n\nThe table was imported successively by pg_dump/pg_restore from the previous versions into pg11.\n\nThis was the same what I did on the other machine (pg 11.5). 
On this test machine I could successfully export the table with pg_dump -t.\n\nOn the erroneous PG12 cluster I succeeded in recreating a similar table with the original create table statements: no errors.\n\nUnder the upgraded PG12, I tried to select only the first column (select table_name from q_tbl_archiv) and got erroneous results (shown first 2 entries):\n\ncpsdb=# select table_name from q_tbl_archiv;\n                 table_name\n---------------------------------------------\n \\x11chemmat\\x17chm_season\n !collectionsheet\\x15cs_season\n\nIt seems that the length bytes are present in the output.\n\nHope this information helps.\n\nHans Buschmann\n\n\n\n", "msg_date": "Wed, 9 Oct 2019 14:50:52 +0000", "msg_from": "Hans Buschmann <buschmann@nidsa.net>", "msg_from_op": false, "msg_subject": "AW: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "Well, I think I found the root cause. It's because of 7c15cef86d, which\nchanged the definition of sql_identifier so that it's a domain over name\ninstead of varchar. So we now have this:\n\n SELECT typname, typlen FROM pg_type WHERE typname = 'sql_identifier';\n\n -[ RECORD 1 ]--+---------------\n typname        | sql_identifier\n typlen         | 64\n\ninstead of this\n\n -[ RECORD 1 ]--+---------------\n typname        | sql_identifier\n typlen         | -1\n\nUnfortunately, that seems very much like a break of the on-disk format, and\nafter pg_upgrade any table containing sql_identifier columns is pretty\nmuch guaranteed to be badly mangled. 
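As an illustrative aside, the mismatch can be reproduced in miniature. The sketch below is a toy model, not PostgreSQL's actual tuple-reading code (real varlena headers, alignment and padding are more involved, and all helper names are invented), but it shows how values written with a length-prefixed varlena header under the old varchar-based definition come out garbled when reread as fixed-width, NUL-padded name values:

```python
def short_varlena(s: bytes) -> bytes:
    # 1-byte "short varlena" header: (total_length << 1) | 1,
    # where total_length counts the header byte itself
    total = len(s) + 1
    assert total < 0x80
    return bytes([(total << 1) | 1]) + s

def read_varlena(buf: bytes, off: int):
    # correct reader for the old, varlena-based layout
    total = buf[off] >> 1
    return buf[off + 1:off + total], off + total

def read_name(buf: bytes, off: int):
    # reader for the new layout: up to 64 fixed bytes, NUL-terminated
    return buf[off:off + 64].split(b"\x00")[0], off + 64

# two adjacent identifier values, as the old cluster stored them
row = short_varlena(b"chemmat") + short_varlena(b"chm_season")

ok1, off = read_varlena(row, 0)   # b"chemmat"
ok2, _ = read_varlena(row, off)   # b"chm_season"
garbled, _ = read_name(row, 0)    # b"\x11chemmat\x17chm_season"
```

The fixed-width reader swallows the header bytes and the neighbouring field into a single value, which is exactly the shape of the \x11chemmat\x17chm_season output shown earlier; and once some stray data byte is later interpreted as a varlena length, absurd allocation requests such as 18446744073709551613 follow.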
For example, the first row from the\ntable used in the original report looks like this on PostgreSQL 11:\n\n test=# select ctid, * from q_tbl_archiv limit 1;\n -[ RECORD 1 ]----+--------------------------\n ctid             | (0,1)\n table_name       | _pg_foreign_data_wrappers\n column_name      | foreign_data_wrapper_name\n ordinal_position | 5\n col_qualifier    | foreign_data_wrapper_name\n id_column        | \n id_default       | \n\nwhile on PostgreSQL 12 after pg_upgrade it looks like this:\n\n test=# select ctid, table_name, column_name, ordinal_position from q_tbl_archiv limit 1;\n -[ RECORD 1 ]----+---------------------------------------------------------\n ctid             | (0,1)\n table_name       | 5_pg_foreign_data_wrappers5foreign_data_wrapper_name\\x05\n column_name      | _data_wrapper_name\n ordinal_position | 0\n\nNot sure what to do about this :-(\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 10 Oct 2019 01:07:59 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Well, I think I found the root cause. It's because of 7c15cef86d, which\n> changed the definition of sql_identifier so that it's a domain over name\n> instead of varchar.\n\nAh...\n\n> Not sure what to do about this :-(\n\nFortunately, there should be close to zero people with user tables\ndepending on sql_identifier. 
I think we should just add a test in\npg_upgrade that refuses to upgrade if there are any such columns.\nIt won't be the first such restriction.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Oct 2019 19:18:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Wed, Oct 09, 2019 at 07:18:45PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> Well, I think I found the root cause. It's because of 7c15cef86d, which\n>> changed the definition of sql_identifier so that it's a domain over name\n>> instead of varchar.\n>\n>Ah...\n>\n>> Not sure what to do about this :-(\n>\n>Fortunately, there should be close to zero people with user tables\n>depending on sql_identifier. I think we should just add a test in\n>pg_upgrade that refuses to upgrade if there are any such columns.\n>It won't be the first such restriction.\n>\n\nHmmm, yeah. I agree the number of people using sql_identifier in user\ntables is low, but OTOH we got this report within a week after release,\nso maybe it's higher than we think.\n\nAnother option would be to teach pg_upgrade to switch the columns to\n'text' or 'varchar', not sure if that's possible or how much work would\nthat be.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 10 Oct 2019 01:28:36 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Wed, Oct 09, 2019 at 07:18:45PM -0400, Tom Lane wrote:\n>> Fortunately, there should be close to zero people with user tables\n>> depending on sql_identifier. 
I think we should just add a test in\n>> pg_upgrade that refuses to upgrade if there are any such columns.\n>> It won't be the first such restriction.\n\n> Hmmm, yeah. I agree the number of people using sql_identifier in user\n> tables is low, but OTOH we got this report within a week after release,\n> so maybe it's higher than we think.\n\nTrue.\n\n> Another option would be to teach pg_upgrade to switch the columns to\n> 'text' or 'varchar', not sure if that's possible or how much work would\n> that be.\n\nI think it'd be a mess --- the actual hacking would have to happen in\npg_dump, I think, and it'd be a kluge because pg_dump doesn't normally\nunderstand what server version its output is going to. So we'd more\nor less have to control it through a new pg_dump switch that pg_upgrade\nwould use. Ick.\n\nAlso, even if we did try to silently convert such columns that way,\nI bet we'd get other bug reports about \"why'd my columns suddenly\nchange type?\". So I'd rather force the user to be involved.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Oct 2019 19:41:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On 2019-10-09 19:41:54 -0400, Tom Lane wrote:\n> Also, even if we did try to silently convert such columns that way,\n> I bet we'd get other bug reports about \"why'd my columns suddenly\n> change type?\". 
So I'd rather force the user to be involved.\n\n+1\n\n\n", "msg_date": "Wed, 9 Oct 2019 18:48:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Wed, Oct 09, 2019 at 06:48:13PM -0700, Andres Freund wrote:\n>On 2019-10-09 19:41:54 -0400, Tom Lane wrote:\n>> Also, even if we did try to silently convert such columns that way,\n>> I bet we'd get other bug reports about \"why'd my columns suddenly\n>> change type?\". So I'd rather force the user to be involved.\n>\n>+1\n\nFair enough, attached is a patch doing that, I think. Maybe the file\nshould be named differently, as it contains other objects than just\ntables.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 10 Oct 2019 15:43:03 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Fair enough, attached is a patch doing that, I think. Maybe the file\n> should be named differently, as it contains other objects than just\n> tables.\n\nSeems about right, though I notice it will not detect domains over\nsql_identifier. How much do we care about that?\n\nTo identify such domains, I think we'd need something like\nWHERE attypid IN (recursive-WITH-query), which makes me nervous.\nWe did support those starting with 8.4, which is as far back as\npg_upgrade will go, so in theory it should work. But I think we\nhad bugs with such cases in old releases. Do we want to assume\nthat the source server has been updated enough to avoid any such\nbugs? 
The expense of such a query might be daunting, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Oct 2019 10:19:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Thu, Oct 10, 2019 at 10:19:12AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> Fair enough, attached is a patch doing that, I think. Maybe the file\n>> should be named differently, as it contains other objects than just\n>> tables.\n>\n>Seems about right, though I notice it will not detect domains over\n>sql_identifier. How much do we care about that?\n>\n>To identify such domains, I think we'd need something like\n>WHERE attypid IN (recursive-WITH-query), which makes me nervous.\n>We did support those starting with 8.4, which is as far back as\n>pg_upgrade will go, so in theory it should work. But I think we\n>had bugs with such cases in old releases. Do we want to assume\n>that the source server has been updated enough to avoid any such\n>bugs? The expense of such a query might be daunting, too.\n>\n\nNot sure.\n\nRegarding bugs, I think it's fine to assume the users are running\nsufficiently recent version - they may not, but then they're probably\nsubject to various other bugs (data corruption, queries). If they're\nnot, then they'll either get false positives (in which case they'll be\nforced to update) or false negatives (which is just as if we did\nnothing).\n\nFor the query cost, I think we can assume the domain hierarchies are not\nparticularly deep (in practice I'd expect just domains directly on the\nsql_identifier type). And I doubt people are using that very widely,\nit's probably more like this report - ad-hoc CTAS, so just a couple of\nitems. So I wouldn't expect it to be a huge deal in most cases. 
But even\nif it takes a second or two, it's a one-time cost.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 10 Oct 2019 18:33:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Thu, Oct 10, 2019 at 10:19:12AM -0400, Tom Lane wrote:\n>> To identify such domains, I think we'd need something like\n>> WHERE attypid IN (recursive-WITH-query), which makes me nervous.\n>> We did support those starting with 8.4, which is as far back as\n>> pg_upgrade will go, so in theory it should work. But I think we\n>> had bugs with such cases in old releases. Do we want to assume\n>> that the source server has been updated enough to avoid any such\n>> bugs? The expense of such a query might be daunting, too.\n\n> For the query cost, I think we can assume the domain hierarchies are not\n> particularly deep (in practice I'd expect just domains directly on the\n> sql_identifier type). And I doubt people are using that very widely,\n> it's probably more like this report - ad-hoc CTAS, so just a couple of\n> items. So I wouldn't expect it to be a huge deal in most cases. But even\n> if it takes a second or two, it's a one-time cost.\n\nWhat I was worried about was the planner possibly trying to apply the\natttypid restriction as a scan qual using a subplan, which might be rather\nawful. But it doesn't look like that happens. 
I get a hash semijoin to\nthe CTE output, in all branches back to 8.4, on this trial query:\n\nexplain\nwith recursive sqlidoids(toid) as (\nselect 'information_schema.sql_identifier'::pg_catalog.regtype as toid\nunion\nselect oid from pg_catalog.pg_type, sqlidoids\n where typtype = 'd' and typbasetype = sqlidoids.toid\n) \nSELECT n.nspname, c.relname, a.attname \nFROM pg_catalog.pg_class c, \n pg_catalog.pg_namespace n, \n pg_catalog.pg_attribute a \nWHERE c.oid = a.attrelid AND \n NOT a.attisdropped AND \n a.atttypid in (select toid from sqlidoids) AND\n c.relkind IN ('r','v','i') and\n c.relnamespace = n.oid AND \n n.nspname !~ '^pg_temp_' AND \n n.nspname !~ '^pg_toast_temp_' AND \n n.nspname NOT IN ('pg_catalog', 'information_schema');\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Oct 2019 16:14:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Thu, Oct 10, 2019 at 04:14:20PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Thu, Oct 10, 2019 at 10:19:12AM -0400, Tom Lane wrote:\n>>> To identify such domains, I think we'd need something like\n>>> WHERE attypid IN (recursive-WITH-query), which makes me nervous.\n>>> We did support those starting with 8.4, which is as far back as\n>>> pg_upgrade will go, so in theory it should work. But I think we\n>>> had bugs with such cases in old releases. Do we want to assume\n>>> that the source server has been updated enough to avoid any such\n>>> bugs? The expense of such a query might be daunting, too.\n>\n>> For the query cost, I think we can assume the domain hierarchies are not\n>> particularly deep (in practice I'd expect just domains directly on the\n>> sql_identifier type). And I doubt people are using that very widely,\n>> it's probably more like this report - ad-hoc CTAS, so just a couple of\n>> items. 
So I wouldn't expect it to be a huge deal in most cases. But even\n>> if it takes a second or two, it's a one-time cost.\n>\n>What I was worried about was the planner possibly trying to apply the\n>atttypid restriction as a scan qual using a subplan, which might be rather\n>awful. But it doesn't look like that happens. \n\nOK.\n\n> I get a hash semijoin to\n>the CTE output, in all branches back to 8.4, on this trial query:\n>\n>explain\n>with recursive sqlidoids(toid) as (\n>select 'information_schema.sql_identifier'::pg_catalog.regtype as toid\n>union\n>select oid from pg_catalog.pg_type, sqlidoids\n> where typtype = 'd' and typbasetype = sqlidoids.toid\n>)\n>SELECT n.nspname, c.relname, a.attname\n>FROM pg_catalog.pg_class c,\n> pg_catalog.pg_namespace n,\n> pg_catalog.pg_attribute a\n>WHERE c.oid = a.attrelid AND\n> NOT a.attisdropped AND\n> a.atttypid in (select toid from sqlidoids) AND\n> c.relkind IN ('r','v','i') and\n> c.relnamespace = n.oid AND\n> n.nspname !~ '^pg_temp_' AND\n> n.nspname !~ '^pg_toast_temp_' AND\n> n.nspname NOT IN ('pg_catalog', 'information_schema');\n>\n\nI think that's not quite sufficient - the problem is that we can have\ndomains and composite types on sql_identifier, in some arbitrary order.\nAnd the recursive CTE won't handle that the way it's written - it will\nmiss domains on composite types containing sql_identifier. 
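The closure that needs computing here - start from sql_identifier, then keep adding domains over anything already found and composite types containing anything already found, until nothing changes - can be sketched as a toy fixpoint loop. The OIDs and dictionaries below are invented stand-ins for the pg_type/pg_attribute joins:

```python
def type_closure(seed, domain_base, composite_members):
    # domain_base: domain oid -> base type oid
    # composite_members: composite type oid -> set of member column type oids
    found = {seed}
    changed = True
    while changed:  # iterate to a fixpoint, as the recursive CTE must
        changed = False
        for dom, base in domain_base.items():
            if base in found and dom not in found:
                found.add(dom)
                changed = True
        for comp, members in composite_members.items():
            if comp not in found and members & found:
                found.add(comp)
                changed = True
    return found

# 1 = sql_identifier, 2 = domain over it, 3 = some unrelated type,
# 4 = composite containing domain 2, 5 = domain over composite 4
closure = type_closure(1, {2: 1, 5: 4}, {4: {2, 3}})
```

Following only domains would stop at {1, 2}; the fixpoint also reaches composite 4 and the domain 5 layered on top of it, which is the "arbitrary order" problem described above.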
And we have\nquite a few of them in the information schema, so maybe someone created\na domain on one of those (however unlikely it may seem).\n\nI think this recursive CTE does it correctly:\n\nWITH RECURSIVE oids AS (\n -- type itself\n SELECT 'information_schema.sql_identifier'::regtype AS oid\n UNION ALL\n SELECT * FROM (\n -- domains on the type\n WITH x AS (SELECT oid FROM oids)\n SELECT t.oid FROM pg_catalog.pg_type t, x WHERE typbasetype = x.oid AND typtype = 'd'\n UNION\n -- composite types containing the type\n SELECT t.oid FROM pg_catalog.pg_type t, pg_catalog.pg_class c, pg_catalog.pg_attribute a, x\n WHERE t.typtype = 'c' AND\n t.oid = c.reltype AND\n c.oid = a.attrelid AND\n a.atttypid = x.oid\n ) foo\n) \n\nI had to use CTE within CTE, because the 'oids' can be referenced only\nonce, but we have two subqueries there. Maybe there's a better solution.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 10 Oct 2019 22:40:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "OK,\n\nhere is an updated patch, with the recursive CTE. I've done a fair\namount of testing on it on older versions (up to 9.4), and it seems to\nwork just fine.\n\nAnother thing that I noticed is that the query does not need to look at\nRELKIND_COMPOSITE_TYPE, because we only really care about cases when\nsql_identifier is stored on-disk. Composite type alone does not do that,\nand the CTE includes OIDs of composite types that we then check against\nrelations and matviews.\n\nBarring objections, I'll push this early next week.\n\n\nBTW the query (including the RELKIND_COMPSITE_TYPE) was essentially just\na lightly-massaged copy of old_9_6_check_for_unknown_data_type_usage, so\nthat seems wrong too. 
The comment explicitly says:\n\n * Also check composite types, in case they are used for table columns.\n\nbut even a simple \"create type c as (a unknown, b int)\" without any\ntable using it is enough to trigger the failure. But maybe it's\nintentional, not sure.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 13 Oct 2019 02:10:32 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> here is an updated patch, with the recursive CTE. I've done a fair\n> amount of testing on it on older versions (up to 9.4), and it seems to\n> work just fine.\n\nMight be a good idea to exclude attisdropped columns in the part of the\nrecursive query that's looking for sql_identifier columns of composite\ntypes. I'm not sure if composites can have dropped columns today,\nbut even if they can't it seems like a wise bit of future-proofing.\n(We'll no doubt have occasion to use this logic again...)\n\nLooks good other than that nit.\n\n> BTW the query (including the RELKIND_COMPSITE_TYPE) was essentially just\n> a lightly-massaged copy of old_9_6_check_for_unknown_data_type_usage, so\n> that seems wrong too.\n\nYeah, we should back-port this logic into that check too, IMO.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Oct 2019 14:26:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Sun, Oct 13, 2019 at 02:26:48PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> here is an updated patch, with the recursive CTE. 
I've done a fair\n>> amount of testing on it on older versions (up to 9.4), and it seems to\n>> work just fine.\n>\n>Might be a good idea to exclude attisdropped columns in the part of the\n>recursive query that's looking for sql_identifier columns of composite\n>types. I'm not sure if composites can have dropped columns today,\n>but even if they can't it seems like a wise bit of future-proofing.\n>(We'll no doubt have occasion to use this logic again...)\n>\n\nHmm? How could that be safe? Let's say we have a composite type with a\nsql_identifier column, it's used in a table with data, and we drop the\ncolumn. We need the pg_type information to parse the existing, so how\ncould we skip attisdropped columns?\n\n>Looks good other than that nit.\n>\n>> BTW the query (including the RELKIND_COMPSITE_TYPE) was essentially just\n>> a lightly-massaged copy of old_9_6_check_for_unknown_data_type_usage, so\n>> that seems wrong too.\n>\n>Yeah, we should back-port this logic into that check too, IMO.\n>\n\nYou mean the recursive CTE, removal of RELKIND_COMPOSITE_TYPE or the\nproposed change w.r.t. dropped columns?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 13 Oct 2019 22:38:48 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sun, Oct 13, 2019 at 02:26:48PM -0400, Tom Lane wrote:\n>> Might be a good idea to exclude attisdropped columns in the part of the\n>> recursive query that's looking for sql_identifier columns of composite\n>> types. 
I'm not sure if composites can have dropped columns today,\n\n[ I checked this, they can ]\n\n>> but even if they can't it seems like a wise bit of future-proofing.\n>> (We'll no doubt have occasion to use this logic again...)\n\n> Hmm? How could that be safe? Let's say we have a composite type with a\n> sql_identifier column, it's used in a table with data, and we drop the\n> column. We need the pg_type information to parse the existing, so how\n> could we skip attisdropped columns?\n\nIt works exactly like it does for table rowtypes.\n\nregression=# create type cfoo as (f1 int, f2 int, f3 int);\nCREATE TYPE\nregression=# alter type cfoo drop attribute f2;\nALTER TYPE\nregression=# select attname,atttypid,attisdropped,attlen,attalign from pg_attribute where attrelid = 'cfoo'::regclass;\n attname | atttypid | attisdropped | attlen | attalign \n------------------------------+----------+--------------+--------+----------\n f1 | 23 | f | 4 | i\n ........pg.dropped.2........ | 0 | t | 4 | i\n f3 | 23 | f | 4 | i\n(3 rows)\n\nAll we need to skip over the dead data is attlen/attalign, which are\npreserved in pg_attribute even if the pg_type row is gone.\n\nAs this example shows, you don't really *have* to check attisdropped\nbecause atttypid will be set to zero. 
But the latter is just a\ndefense measure in case somebody forgets to check attisdropped;\nyou're not supposed to forget that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Oct 2019 10:16:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Mon, Oct 14, 2019 at 10:16:40AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Sun, Oct 13, 2019 at 02:26:48PM -0400, Tom Lane wrote:\n>>> Might be a good idea to exclude attisdropped columns in the part of the\n>>> recursive query that's looking for sql_identifier columns of composite\n>>> types. I'm not sure if composites can have dropped columns today,\n>\n>[ I checked this, they can ]\n>\n>>> but even if they can't it seems like a wise bit of future-proofing.\n>>> (We'll no doubt have occasion to use this logic again...)\n>\n>> Hmm? How could that be safe? Let's say we have a composite type with a\n>> sql_identifier column, it's used in a table with data, and we drop the\n>> column. We need the pg_type information to parse the existing, so how\n>> could we skip attisdropped columns?\n>\n>It works exactly like it does for table rowtypes.\n>\n>regression=# create type cfoo as (f1 int, f2 int, f3 int);\n>CREATE TYPE\n>regression=# alter type cfoo drop attribute f2;\n>ALTER TYPE\n>regression=# select attname,atttypid,attisdropped,attlen,attalign from pg_attribute where attrelid = 'cfoo'::regclass;\n> attname | atttypid | attisdropped | attlen | attalign\n>------------------------------+----------+--------------+--------+----------\n> f1 | 23 | f | 4 | i\n> ........pg.dropped.2........ 
| 0 | t | 4 | i\n> f3 | 23 | f | 4 | i\n>(3 rows)\n>\n>All we need to skip over the dead data is attlen/attalign, which are\n>preserved in pg_attribute even if the pg_type row is gone.\n>\n>As this example shows, you don't really *have* to check attisdropped\n>because atttypid will be set to zero. But the latter is just a\n>defense measure in case somebody forgets to check attisdropped;\n>you're not supposed to forget that.\n>\n\nAha! I forgot we copy the necessary stuff into pg_attribute. Thanks for\nclarifying, I'll polish and push the fix shortly.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 14 Oct 2019 18:35:38 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Mon, Oct 14, 2019 at 06:35:38PM +0200, Tomas Vondra wrote:\n> ...\n>\n>Aha! I forgot we copy the necessary stuff into pg_attribute. Thanks for\n>clarifying, I'll polish and push the fix shortly.\n>\n\nI've pushed and backpatched the fix. 
Attached are similar fixes for the\nexisting pg_upgrade checks for pg_catalog.line and pg_catalog.unknown\ntypes, which have the same issues with composite types and domains.\n\nThere are some additional details & examples in the commit messages.\n\nI've kept this in two patches primarily because of backpatching - the\nline fix should go back up to 9.4, the unknown is for 10.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 15 Oct 2019 02:18:17 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Tue, Oct 15, 2019 at 02:18:17AM +0200, Tomas Vondra wrote:\n> On Mon, Oct 14, 2019 at 06:35:38PM +0200, Tomas Vondra wrote:\n> >...\n> >\n> >Aha! I forgot we copy the necessary stuff into pg_attribute. Thanks for\n> >clarifying, I'll polish and push the fix shortly.\n\nPerhaps it'd be worth creating a test for on-disk format ?\n\nLike a table with a column for each core type, which is either SELECTed from\nafter pg_upgrade, or pg_dump output compared before and after.\n\nJustin\n\n\n", "msg_date": "Mon, 14 Oct 2019 23:41:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Mon, Oct 14, 2019 at 11:41:18PM -0500, Justin Pryzby wrote:\n>On Tue, Oct 15, 2019 at 02:18:17AM +0200, Tomas Vondra wrote:\n>> On Mon, Oct 14, 2019 at 06:35:38PM +0200, Tomas Vondra wrote:\n>> >...\n>> >\n>> >Aha! I forgot we copy the necessary stuff into pg_attribute. 
Thanks for\n>> >clarifying, I'll polish and push the fix shortly.\n>\n>Perhaps it'd be worth creating a test for on-disk format ?\n>\n>Like a table with a column for each core type, which is either SELECTed from\n>after pg_upgrade, or pg_dump output compared before and after.\n>\n\nIMO that would be useful - we now have a couple of these checks for\ndifferent data types (line, unknown, sql_identifier), with a couple of\ncombinations each. And I've been looking if we do similar pg_upgrade\ntests, but I haven't found anything. I mean, we do pg_upgrade the\ncluster used for regression tests, but here we need to test a number of\ncases that are meant to abort the pg_upgrade. So we'd need a number of\npg_upgrade runs, to test that.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 15 Oct 2019 09:07:25 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Tue, Oct 15, 2019 at 02:18:17AM +0200, Tomas Vondra wrote:\n>On Mon, Oct 14, 2019 at 06:35:38PM +0200, Tomas Vondra wrote:\n>>...\n>>\n>>Aha! I forgot we copy the necessary stuff into pg_attribute. Thanks for\n>>clarifying, I'll polish and push the fix shortly.\n>>\n>\n>I've pushed and backpatched the fix. 
Attached are similar fixes for the\n>existing pg_upgrade checks for pg_catalog.line and pg_catalog.unknown\n>types, which have the same issues with composite types and domains.\n>\n>There are some additional details & examples in the commit messages.\n>\n>I've kept this in two patches primarily because of backpatching - the\n>line fix should go back up to 9.4, the unknown is for 10.\n>\n\nI've just committed and pushed both fixes after some minor corrections.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 16 Oct 2019 13:33:44 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I've just committed and pushed both fixes after some minor corrections.\n\nNot quite right in 9.6 and before, according to crake. Looks like\nsome issue with the CppAsString2'd constants? Did we even have\nCppAsString2 that far back?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Oct 2019 14:27:57 +0200", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "I wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I've just committed and pushed both fixes after some minor corrections.\n\n> Not quite right in 9.6 and before, according to crake. Looks like\n> some issue with the CppAsString2'd constants? Did we even have\n> CppAsString2 that far back?\n\nYeah, we did. 
On closer inspection I suspect that we need to #include\nsome other file to get the RELKIND_ constants in the old branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Oct 2019 15:26:42 +0200", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Wed, Oct 16, 2019 at 03:26:42PM +0200, Tom Lane wrote:\n>I wrote:\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>> I've just committed and pushed both fixes after some minor corrections.\n>\n>> Not quite right in 9.6 and before, according to crake. Looks like\n>> some issue with the CppAsString2'd constants? Did we even have\n>> CppAsString2 that far back?\n>\n>Yeah, we did. On closer inspection I suspect that we need to #include\n>some other file to get the RELKIND_ constants in the old branches.\n>\n\nOh! Looking.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 16 Oct 2019 15:41:17 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Wed, Oct 16, 2019 at 03:26:42PM +0200, Tom Lane wrote:\n>I wrote:\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>> I've just committed and pushed both fixes after some minor corrections.\n>\n>> Not quite right in 9.6 and before, according to crake. Looks like\n>> some issue with the CppAsString2'd constants? Did we even have\n>> CppAsString2 that far back?\n>\n>Yeah, we did. On closer inspection I suspect that we need to #include\n>some other file to get the RELKIND_ constants in the old branches.\n>\n\nYeah, the pg_class.h catalog was missing on pre-10 releases. 
It compiled\njust fine, so I didn't notice that during the backpatching :-(\n\nFixed, let's see if the buildfarm is happy with that.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 16 Oct 2019 16:33:43 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "On Tue, Oct 15, 2019 at 02:18:17AM +0200, Tomas Vondra wrote:\n> On Mon, Oct 14, 2019 at 06:35:38PM +0200, Tomas Vondra wrote:\n> > ...\n> > \n> > Aha! I forgot we copy the necessary stuff into pg_attribute. Thanks for\n> > clarifying, I'll polish and push the fix shortly.\n> > \n> \n> I've pushed and backpatched the fix. Attached are similar fixes for the\n> existing pg_upgrade checks for pg_catalog.line and pg_catalog.unknown\n> types, which have the same issues with composite types and domains.\n\nThis commit added old_11_check_for_sql_identifier_data_type_usage(), but\nit did not use the clearer database error list format added to the\nmaster branch in commit 1634d36157. Attached is a patch to fix this,\nwhich I have committed.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +", "msg_date": "Wed, 23 Oct 2019 18:08:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16045: vacuum_db crash and illegal memory alloc after\n pg_upgrade from PG11 to PG12" }, { "msg_contents": "I'm finally returning to this 14 month old thread:\n(was: Re: BUG #16045: vacuum_db crash and illegal memory alloc after pg_upgrade from PG11 to PG12)\n\nOn Tue, Oct 15, 2019 at 09:07:25AM +0200, Tomas Vondra wrote:\n> On Mon, Oct 14, 2019 at 11:41:18PM -0500, Justin Pryzby wrote:\n> > \n> > Perhaps it'd be worth creating a test for on-disk format ?\n> > \n> > Like a table with a column for each core type, which is either SELECTed from\n> > after pg_upgrade, or pg_dump output compared before and after.\n> \n> IMO that would be useful - we now have a couple of these checks for\n> different data types (line, unknown, sql_identifier), with a couple of\n> combinations each. And I've been looking if we do similar pg_upgrade\n> tests, but I haven't found anything. I mean, we do pg_upgrade the\n> cluster used for regression tests, but here we need to test a number of\n> cases that are meant to abort the pg_upgrade. 
So we'd need a number of\n> pg_upgrade runs, to test that.\n\nI meant to notice if the binary format is accidentally changed again, which was\nwhat happened here:\n7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.\n\nI added a table to the regression tests so it's processed by pg_upgrade tests,\nrun like:\n\n| time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin\n\nI checked that if I cherry-pick 0002 to v11, and comment out\nold_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh\ndetects the original problem:\npg_dump: error: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613\n\nI understand the buildfarm has its own cross-version-upgrade test, which I\nthink would catch this on its own.\n\nThese all seem to complicate use of pg_upgrade/test.sh, so 0001 is needed to\nallow testing upgrade from older releases.\n\ne78900afd217fa3eaa77c51e23a94c1466af421c Create by default sql/ and expected/ for output directory in pg_regress\n40b132c1afbb4b1494aa8e48cc35ec98d2b90777 In the pg_upgrade test suite, don't write to src/test/regress.\nfc49e24fa69a15efacd5b8958115ed9c43c48f9a Make WAL segment size configurable at initdb time.\nc37b3d08ca6873f9d4eaf24c72a90a550970cbb8 Allow group access on PGDATA\nda9b580d89903fee871cf54845ffa2b26bda2e11 Refactor dir/file permissions\n\n-- \nJustin", "msg_date": "Sun, 6 Dec 2020 12:02:48 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:\n> I meant to notice if the binary format is accidentally changed again, which was\n> what happened here:\n> 7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.\n> \n> I added a table to the regression tests so it's processed by pg_upgrade tests,\n> run 
like:\n> \n> | time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin\n\nPer cfbot, this avoids testing ::xml (support for which may not be enabled)\nAnd also now tests oid types.\n\nI think the per-version hacks should be grouped by logical change, rather than\nby version. Which I've started doing here.\n\n-- \nJustin", "msg_date": "Wed, 16 Dec 2020 11:22:23 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Wed, Dec 16, 2020 at 11:22:23AM -0600, Justin Pryzby wrote:\n> On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:\n> > I meant to notice if the binary format is accidentally changed again, which was\n> > what happened here:\n> > 7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.\n> > \n> > I added a table to the regression tests so it's processed by pg_upgrade tests,\n> > run like:\n> > \n> > | time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin\n> \n> Per cfbot, this avoids testing ::xml (support for which may not be enabled)\n> And also now tests oid types.\n> \n> I think the per-version hacks should be grouped by logical change, rather than\n> by version. 
Which I've started doing here.\n\nrebased on 6df7a9698bb036610c1e8c6d375e1be38cb26d5f\n\n-- \nJustin", "msg_date": "Sun, 27 Dec 2020 13:07:29 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On 2020-12-27 20:07, Justin Pryzby wrote:\n> On Wed, Dec 16, 2020 at 11:22:23AM -0600, Justin Pryzby wrote:\n>> On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:\n>>> I meant to notice if the binary format is accidentally changed again, which was\n>>> what happened here:\n>>> 7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.\n>>>\n>>> I added a table to the regression tests so it's processed by pg_upgrade tests,\n>>> run like:\n>>>\n>>> | time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin\n>>\n>> Per cfbot, this avoids testing ::xml (support for which may not be enabled)\n>> And also now tests oid types.\n>>\n>> I think the per-version hacks should be grouped by logical change, rather than\n>> by version. Which I've started doing here.\n> \n> rebased on 6df7a9698bb036610c1e8c6d375e1be38cb26d5f\n\nI think these patches could use some in-place documentation of what they \nare trying to achieve and how they do it. The required information is \nspread over a lengthy thread. No one wants to read that. 
Add commit \nmessages to the patches.\n\n\n", "msg_date": "Mon, 11 Jan 2021 15:28:08 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Mon, Jan 11, 2021 at 03:28:08PM +0100, Peter Eisentraut wrote:\n> On 2020-12-27 20:07, Justin Pryzby wrote:\n> > rebased on 6df7a9698bb036610c1e8c6d375e1be38cb26d5f\n> \n> I think these patches could use some in-place documentation of what they are\n> trying to achieve and how they do it. The required information is spread\n> over a lengthy thread. No one wants to read that. Add commit messages to\n> the patches.\n\nOh, I see that now, and agree that you need to explain each item with a\ncomment. pg_upgrade is doing some odd things, so documenting everything\nit does is a big win.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 11 Jan 2021 10:21:36 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Mon, Jan 11, 2021 at 03:28:08PM +0100, Peter Eisentraut wrote:\n> On 2020-12-27 20:07, Justin Pryzby wrote:\n> > On Wed, Dec 16, 2020 at 11:22:23AM -0600, Justin Pryzby wrote:\n> > > On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:\n> > > > I meant to notice if the binary format is accidentally changed again, which was\n> > > > what happened here:\n> > > > 7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.\n> > > > \n> > > > I added a table to the regression tests so it's processed by pg_upgrade tests,\n> > > > run like:\n> > > > \n> > > > | time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin\n> > > \n> > > Per cfbot, this 
avoids testing ::xml (support for which may not be enabled)\n> > > And also now tests oid types.\n> > > \n> > > I think the per-version hacks should be grouped by logical change, rather than\n> > > by version. Which I've started doing here.\n> > \n> > rebased on 6df7a9698bb036610c1e8c6d375e1be38cb26d5f\n> \n> I think these patches could use some in-place documentation of what they are\n> trying to achieve and how they do it. The required information is spread\n> over a lengthy thread. No one wants to read that. Add commit messages to\n> the patches.\n\n0001 patch fixes pg_upgrade/test.sh, which was dysfunctional.\nPortions of the first patch were independently handled by commits 52202bb39,\nfa744697c, 091866724. So this is rebased on those.\nI guess updating this script should be a part of a beta-checklist somewhere,\nsince I guess nobody will want to backpatch changes for testing older releases.\n\n0002 allows detecting the information_schema problem that was introduced at:\n7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.\n\n+-- Create a table with different data types, to exercise binary compatibility\n+-- during pg_upgrade test\n\nIf binary compatibility is changed I expect this will error, crash, or at least\nreturn wrong data, and thereby fail tests.\n\n-- \nJustin\n\nOn Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:\n> I checked that if I cherry-pick 0002 to v11, and comment out\n> old_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh\n> detects the original problem:\n> pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613\n> \n> I understand the buildfarm has its own cross-version-upgrade test, which I\n> think would catch this on its own.\n> \n> These all seem to complicate use of pg_upgrade/test.sh, so 0001 is needed to\n> allow testing upgrade from older releases.\n> \n> e78900afd217fa3eaa77c51e23a94c1466af421c Create by default sql/ and 
expected/ for output directory in pg_regress\n> 40b132c1afbb4b1494aa8e48cc35ec98d2b90777 In the pg_upgrade test suite, don't write to src/test/regress.\n> fc49e24fa69a15efacd5b8958115ed9c43c48f9a Make WAL segment size configurable at initdb time.\n> c37b3d08ca6873f9d4eaf24c72a90a550970cbb8 Allow group access on PGDATA\n> da9b580d89903fee871cf54845ffa2b26bda2e11 Refactor dir/file permissions", "msg_date": "Mon, 11 Jan 2021 22:13:52 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Mon, Jan 11, 2021 at 10:13:52PM -0600, Justin Pryzby wrote:\n> On Mon, Jan 11, 2021 at 03:28:08PM +0100, Peter Eisentraut wrote:\n> > I think these patches could use some in-place documentation of what they are\n> > trying to achieve and how they do it. The required information is spread\n> > over a lengthy thread. No one wants to read that. Add commit messages to\n> > the patches.\n> \n> 0001 patch fixes pg_upgrade/test.sh, which was dysfunctional.\n> Portions of the first patch were independently handled by commits 52202bb39,\n> fa744697c, 091866724. So this is rebased on those.\n> I guess updating this script should be a part of a beta-checklist somewhere,\n> since I guess nobody will want to backpatch changes for testing older releases.\n\nUh, what exactly is missing from the beta checklist? 
I read the patch\nand commit message but don't understand it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 12 Jan 2021 12:15:59 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Tue, Jan 12, 2021 at 12:15:59PM -0500, Bruce Momjian wrote:\n> On Mon, Jan 11, 2021 at 10:13:52PM -0600, Justin Pryzby wrote:\n> > On Mon, Jan 11, 2021 at 03:28:08PM +0100, Peter Eisentraut wrote:\n> > > I think these patches could use some in-place documentation of what they are\n> > > trying to achieve and how they do it. The required information is spread\n> > > over a lengthy thread. No one wants to read that. Add commit messages to\n> > > the patches.\n> > \n> > 0001 patch fixes pg_upgrade/test.sh, which was dysfunctional.\n> > Portions of the first patch were independently handled by commits 52202bb39,\n> > fa744697c, 091866724. So this is rebased on those.\n> > I guess updating this script should be a part of a beta-checklist somewhere,\n> > since I guess nobody will want to backpatch changes for testing older releases.\n> \n> Uh, what exactly is missing from the beta checklist? 
I read the patch\n> and commit message but don't understand it.\n\nDid you try to use test.sh to upgrade from a prior release ?\n\nEvidently it's frequently forgotten, as evidenced by all the \"deferred\nmaintenance\" I had to do to allow testing the main patch (currently 0003).\n\nSee also:\n\ncommit 5bab1985dfc25eecf4b098145789955c0b246160\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Thu Jun 8 13:48:27 2017 -0400\n\n Fix bit-rot in pg_upgrade's test.sh, and improve documentation.\n \n Doing a cross-version upgrade test with test.sh evidently hasn't been\n tested since circa 9.2, because the script lacked case branches for\n old-version servers newer than 9.1. Future-proof that a bit, and\n clean up breakage induced by our recent drop of V0 function call\n protocol (namely that oldstyle_length() isn't in the regression\n suite anymore).\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 12 Jan 2021 11:27:53 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Tue, Jan 12, 2021 at 11:27:53AM -0600, Justin Pryzby wrote:\n> On Tue, Jan 12, 2021 at 12:15:59PM -0500, Bruce Momjian wrote:\n> > Uh, what exactly is missing from the beta checklist? I read the patch\n> > and commit message but don't understand it.\n> \n> Did you try to use test.sh to upgrade from a prior release ?\n> \n> Evidently it's frequently forgotten, as evidenced by all the \"deferred\n> maintenance\" I had to do to allow testing the main patch (currently 0003).\n> \n> See also:\n> \n> commit 5bab1985dfc25eecf4b098145789955c0b246160\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Thu Jun 8 13:48:27 2017 -0400\n> \n> Fix bit-rot in pg_upgrade's test.sh, and improve documentation.\n> \n> Doing a cross-version upgrade test with test.sh evidently hasn't been\n> tested since circa 9.2, because the script lacked case branches for\n> old-version servers newer than 9.1. 
Future-proof that a bit, and\n> clean up breakage induced by our recent drop of V0 function call\n> protocol (namely that oldstyle_length() isn't in the regression\n> suite anymore).\n\nOh, that is odd. I thought that was regularly run. I have my own test\ninfrastructure that I run for every major release so I never have run\nthe built-in one, except for make check-world.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 12 Jan 2021 12:53:56 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "\nOn 1/12/21 12:53 PM, Bruce Momjian wrote:\n> On Tue, Jan 12, 2021 at 11:27:53AM -0600, Justin Pryzby wrote:\n>> On Tue, Jan 12, 2021 at 12:15:59PM -0500, Bruce Momjian wrote:\n>>> Uh, what exactly is missing from the beta checklist? I read the patch\n>>> and commit message but don't understand it.\n>> Did you try to use test.sh to upgrade from a prior release ?\n>>\n>> Evidently it's frequently forgotten, as evidenced by all the \"deferred\n>> maintenance\" I had to do to allow testing the main patch (currently 0003).\n>>\n>> See also:\n>>\n>> commit 5bab1985dfc25eecf4b098145789955c0b246160\n>> Author: Tom Lane <tgl@sss.pgh.pa.us>\n>> Date: Thu Jun 8 13:48:27 2017 -0400\n>>\n>> Fix bit-rot in pg_upgrade's test.sh, and improve documentation.\n>> \n>> Doing a cross-version upgrade test with test.sh evidently hasn't been\n>> tested since circa 9.2, because the script lacked case branches for\n>> old-version servers newer than 9.1. Future-proof that a bit, and\n>> clean up breakage induced by our recent drop of V0 function call\n>> protocol (namely that oldstyle_length() isn't in the regression\n>> suite anymore).\n> Oh, that is odd. I thought that was regularly run. 
I have my own test\n> infrastructure that I run for every major release so I never have run\n> the built-in one, except for make check-world.\n>\n\nCross version pg_upgrade is tested regularly in the buildfarm, but not\nusing test.sh. Instead it uses the saved data repository from a previous\nrun of the buildfarm client for the source branch, and tries to upgrade\nthat to the target branch.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 12 Jan 2021 16:44:28 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On 2021-01-12 22:44, Andrew Dunstan wrote:\n> Cross version pg_upgrade is tested regularly in the buildfarm, but not\n> using test.sh. Instead it uses the saved data repository from a previous\n> run of the buildfarm client for the source branch, and tries to upgrade\n> that to the target branch.\n\nDoes it maintain a set of fixups similar to what is in test.sh? Are \nthose two sets the same?\n\n\n", "msg_date": "Fri, 15 Jan 2021 09:00:31 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 2021-01-12 22:44, Andrew Dunstan wrote:\n>> Cross version pg_upgrade is tested regularly in the buildfarm, but not\n>> using test.sh. Instead it uses the saved data repository from a previous\n>> run of the buildfarm client for the source branch, and tries to upgrade\n>> that to the target branch.\n\n> Does it maintain a set of fixups similar to what is in test.sh? 
Are \n> those two sets the same?\n\nResponding to Peter: the first answer is yes, the second is I didn't\ncheck, but certainly Justin's patch makes them closer.\n\nI spent some time poking through this set of patches. I agree that\nthere's problem(s) here that we need to solve, but it feels like this\nisn't a great way to solve them. What I see in the patchset is:\n\nv4-0001 mostly teaches test.sh about specific changes that have to be\nmade to historic versions of the regression database to allow them\nto be reloaded into current servers. As already discussed, this is\nreally duplicative of knowledge that's been embedded into the buildfarm\nclient over time. It'd be better if we could refactor that so that\nthe buildfarm shares a common database of these actions with test.sh.\nAnd said database ought to be in our git tree, so committers could\nfix problems without having to get Andrew involved every time.\nI think this could be represented as a psql script, at least in\nversions that have psql \\if (but that came in in v10, so maybe\nwe're there already).\n\n(Taking a step back, maybe the regression database isn't an ideal\ntestbed for this in the first place. But it does have the advantage of\nnot being a narrow-minded test that is going to miss things we haven't\nexplicitly thought of.)\n\nv4-0002 is a bunch of random changes that mostly seem to revert hacky\nadjustments previously made to improve test coverage. I don't really\nagree with any of these, nor see why they're necessary. If they\nare necessary then we need to restore the coverage somewhere else.\nAdmittedly, the previous changes were a bit hacky, but deleting them\n(without even bothering to adjust the relevant comments) isn't the\nanswer.\n\nv4-0003 is really the heart of the matter: it adds a table with some\npreviously-not-covered datatypes plus a query that purports to make sure\nthat we are covering all types of interest. But I'm not sure I believe\nthat query. 
It's got hard-wired assumptions about which typtype values\nneed to be covered. Why is it okay to exclude range and multirange?\nAre we sure that all composites are okay to exclude? Likewise, the\nrestriction to pg_catalog and information_schema schemas seems likely to\nbite us someday. There are some very random exclusions based on name\npatterns, which seem unsafe (let's list the specific type OIDs), and\nagain the nearby comments don't match the code. But the biggest issue\nis that this can only cover core datatypes, not any contrib stuff.\n\nI don't know what we could do about contrib types. Maybe we should\nfigure that covering core types is already a step forward, and be\nhappy with getting that done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Mar 2021 15:01:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Sat, Mar 06, 2021 at 03:01:43PM -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 2021-01-12 22:44, Andrew Dunstan wrote:\n> >> Cross version pg_upgrade is tested regularly in the buildfarm, but not\n> >> using test.sh. Instead it uses the saved data repository from a previous\n> >> run of the buildfarm client for the source branch, and tries to upgrade\n> >> that to the target branch.\n> \n> > Does it maintain a set of fixups similar to what is in test.sh? 
Are \n> > those two sets the same?\n> \n> Responding to Peter: the first answer is yes, the second is I didn't\n> check, but certainly Justin's patch makes them closer.\n\nRight - I had meant to send this.\n\nhttps://github.com/PGBuildFarm/client-code/blob/master/PGBuild/Modules/TestUpgradeXversion.pm\n\n $opsql = 'drop operator if exists public.=> (bigint, NONE)';\n..\n my $missing_funcs = q{drop function if exists public.boxarea(box);\n drop function if exists public.funny_dup17();\n..\n my $prstmt = join(';',\n 'drop operator if exists #@# (bigint,NONE)',\n 'drop operator if exists #%# (bigint,NONE)',\n 'drop operator if exists !=- (bigint,NONE)',\n..\n $prstmt = join(';',\n 'drop operator @#@ (NONE, bigint)',\n..\n 'drop aggregate if exists public.array_cat_accum(anyarray)',\n\n> I spent some time poking through this set of patches. I agree that\n> there's problem(s) here that we need to solve, but it feels like this\n> isn't a great way to solve them. What I see in the patchset is:\n\nFor starters, is there a \"release beta checklist\" ?\nTesting test.sh should be on it.\nSo should fuzz testing.\n\n> v4-0001 mostly teaches test.sh about specific changes that have to be\n> made to historic versions of the regression database to allow them\n> to be reloaded into current servers. As already discussed, this is\n> really duplicative of knowledge that's been embedded into the buildfarm\n> client over time. It'd be better if we could refactor that so that\n> the buildfarm shares a common database of these actions with test.sh.\n> And said database ought to be in our git tree, so committers could\n> fix problems without having to get Andrew involved every time.\n> I think this could be represented as a psql script, at least in\n> versions that have psql \\if (but that came in in v10, so maybe\n> we're there already).\n\nI started this. 
I don't know if it's compatible with the buildfarm client, but\nI think any issues maybe can be avoided by using \"IF EXISTS\".\n\n> v4-0002 is a bunch of random changes that mostly seem to revert hacky\n> adjustments previously made to improve test coverage. I don't really\n> agree with any of these, nor see why they're necessary. If they\n> are necessary then we need to restore the coverage somewhere else.\n> Admittedly, the previous changes were a bit hacky, but deleting them\n> (without even bothering to adjust the relevant comments) isn't the\n> answer.\n\nIt was necessary to avoid --wal-segsize and -g to allow testing upgrades from\nversions which don't support those options. I think test.sh should be portable\nback to all supported versions.\n\nWhen those options were added, it broke test.sh upgrading from old versions.\nI changed this to a shell conditional for the \"new\" features:\n| \"$1\" -N -A trust ${oldsrc:+--wal-segsize 1 -g}\nIdeally it would check the version.\n\n> v4-0003 is really the heart of the matter: it adds a table with some\n> previously-not-covered datatypes plus a query that purports to make sure\n> that we are covering all types of interest.\n\nActually the 'manytypes' table intends to include *all* core datatypes itself,\nnot just those that aren't included somewhere else. I think \"included\nsomewhere else\" depends on the order of the regression tests, and type_sanity\nruns early, so the table might need to include many types that are created\nlater, to avoid \"false positives\" in the associated test.\n\n> But I'm not sure I believe\n> that query. It's got hard-wired assumptions about which typtype values\n> need to be covered. Why is it okay to exclude range and multirange?\n> Are we sure that all composites are okay to exclude? Likewise, the\n> restriction to pg_catalog and information_schema schemas seems likely to\n> bite us someday. 
There are some very random exclusions based on name\n> patterns, which seem unsafe (let's list the specific type OIDs), and\n> again the nearby comments don't match the code. But the biggest issue\n> is that this can only cover core datatypes, not any contrib stuff.\n\nI changed to use regtype/OIDs, included range/multirange and stopped including\nonly pg_catalog/information_schema. But didn't yet handle composites.\n\n> I don't know what we could do about contrib types. Maybe we should\n> figure that covering core types is already a step forward, and be\n> happy with getting that done.\n\nRight .. this is meant to at least handle the lowest hanging fruit.\n\n-- \nJustin", "msg_date": "Fri, 30 Apr 2021 13:33:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Fri, 2021-04-30 at 13:33 -0500, Justin Pryzby wrote:\r\n> On Sat, Mar 06, 2021 at 03:01:43PM -0500, Tom Lane wrote:\r\n> > v4-0001 mostly teaches test.sh about specific changes that have to be\r\n> > made to historic versions of the regression database to allow them\r\n> > to be reloaded into current servers. As already discussed, this is\r\n> > really duplicative of knowledge that's been embedded into the buildfarm\r\n> > client over time. It'd be better if we could refactor that so that\r\n> > the buildfarm shares a common database of these actions with test.sh.\r\n> > And said database ought to be in our git tree, so committers could\r\n> > fix problems without having to get Andrew involved every time.\r\n> > I think this could be represented as a psql script, at least in\r\n> > versions that have psql \\if (but that came in in v10, so maybe\r\n> > we're there already).\r\n> \r\n> I started this. 
I don't know if it's compatible with the buildfarm client, but\r\n> I think any issues maybe can be avoided by using \"IF EXISTS\".\r\n\r\nI'm going to try pulling this into a psql script today and see how far\r\nI get.\r\n\r\n> > But I'm not sure I believe\r\n> > that query. It's got hard-wired assumptions about which typtype values\r\n> > need to be covered. Why is it okay to exclude range and multirange?\r\n> > Are we sure that all composites are okay to exclude? Likewise, the\r\n> > restriction to pg_catalog and information_schema schemas seems likely to\r\n> > bite us someday. There are some very random exclusions based on name\r\n> > patterns, which seem unsafe (let's list the specific type OIDs), and\r\n> > again the nearby comments don't match the code. But the biggest issue\r\n> > is that this can only cover core datatypes, not any contrib stuff.\r\n> \r\n> I changed to use regtype/OIDs, included range/multirange and stopped including\r\n> only pg_catalog/information_schema. But didn't yet handle composites.\r\n\r\nPer cfbot, this test needs to be taught about the new\r\npg_brin_bloom_summary and pg_brin_minmax_multi_summary types.\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 16 Jul 2021 16:21:07 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Fri, 2021-07-16 at 16:21 +0000, Jacob Champion wrote:\r\n> On Fri, 2021-04-30 at 13:33 -0500, Justin Pryzby wrote:\r\n> > On Sat, Mar 06, 2021 at 03:01:43PM -0500, Tom Lane wrote:\r\n> > > v4-0001 mostly teaches test.sh about specific changes that have to be\r\n> > > made to historic versions of the regression database to allow them\r\n> > > to be reloaded into current servers. As already discussed, this is\r\n> > > really duplicative of knowledge that's been embedded into the buildfarm\r\n> > > client over time. 
It'd be better if we could refactor that so that\r\n> > > the buildfarm shares a common database of these actions with test.sh.\r\n> > > And said database ought to be in our git tree, so committers could\r\n> > > fix problems without having to get Andrew involved every time.\r\n> > > I think this could be represented as a psql script, at least in\r\n> > > versions that have psql \\if (but that came in in v10, so maybe\r\n> > > we're there already).\r\n> > \r\n> > I started this. I don't know if it's compatible with the buildfarm client, but\r\n> > I think any issues maybe can be avoided by using \"IF EXISTS\".\r\n> \r\n> I'm going to try pulling this into a psql script today and see how far\r\n> I get.\r\n\r\nI completely misread this exchange -- you already did this in 0004.\r\nSorry for the noise.\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 16 Jul 2021 16:40:34 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Fri, 2021-04-30 at 13:33 -0500, Justin Pryzby wrote:\r\n> On Sat, Mar 06, 2021 at 03:01:43PM -0500, Tom Lane wrote:\r\n> > v4-0001 mostly teaches test.sh about specific changes that have to be\r\n> > made to historic versions of the regression database to allow them\r\n> > to be reloaded into current servers. As already discussed, this is\r\n> > really duplicative of knowledge that's been embedded into the buildfarm\r\n> > client over time. It'd be better if we could refactor that so that\r\n> > the buildfarm shares a common database of these actions with test.sh.\r\n> > And said database ought to be in our git tree, so committers could\r\n> > fix problems without having to get Andrew involved every time.\r\n> > I think this could be represented as a psql script, at least in\r\n> > versions that have psql \\if (but that came in in v10, so maybe\r\n> > we're there already).\r\n> \r\n> I started this. 
I don't know if it's compatible with the buildfarm client, but\r\n> I think any issues maybe can be avoided by using \"IF EXISTS\".\r\n\r\nHere are the differences I see on a first pass (without putting too\r\nmuch thought into how significant the differences are). Buildfarm code\r\nI'm comparing against is at [1].\r\n\r\n- Both versions drop @#@ and array_cat_accum, but the buildfarm\r\nadditionally replaces them with a new operator and aggregate,\r\nrespectively.\r\n\r\n- The buildfarm's dropping of table OIDs is probably more resilient,\r\nsince it loops over pg_class looking for relhasoids.\r\n\r\n- The buildfarm handles (or drops) several contrib databases in\r\naddition to the core regression DB.\r\n\r\n- The psql script drops the first_el_agg_any aggregate and a `TRANSFORM\r\nFOR integer`; I don't see any corresponding code in the buildfarm.\r\n\r\n- Some version ranges are different between the two. For example,\r\nabstime_/reltime_/tinterval_tbl are dropped by the buildfarm if the old\r\nversion is < 9.3, while the psql script drops them for old versions <=\r\n10.\r\n\r\n- The buildfarm drops the public.=> operator for much older versions of\r\nPostgres. I assume we don't need that here.\r\n\r\n- The buildfarm adjusts pg_proc for the location of regress.so; I see\r\nthere's a commented placeholder for this at the end of the psql script\r\nbut it's not yet implemented.\r\n\r\nAs an aside, I think the \"fromv10\" naming scheme for the \"old version\r\n<= 10\" condition is unintuitive. If the old version is e.g. 
9.6, we're\r\nnot upgrading \"from 10\".\r\n\r\n--Jacob\r\n\r\n[1] https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/Modules/TestUpgradeXversion.pm\r\n", "msg_date": "Fri, 16 Jul 2021 18:02:18 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> On Fri, 2021-04-30 at 13:33 -0500, Justin Pryzby wrote:\n>> I started this. I don't know if it's compatible with the buildfarm client, but\n>> I think any issues maybe can be avoided by using \"IF EXISTS\".\n\n> Here are the differences I see on a first pass (without putting too\n> much thought into how significant the differences are). Buildfarm code\n> I'm comparing against is at [1].\n\nI switched the CF entry for this to \"Waiting on Author\". It's\nbeen failing in the cfbot for a couple of months, and Jacob's\nprovided some review-ish comments here, so I think there's\nplenty of reason to deem the ball to be in Justin's court.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Sep 2021 14:19:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Fri, Jul 16, 2021 at 06:02:18PM +0000, Jacob Champion wrote:\n> On Fri, 2021-04-30 at 13:33 -0500, Justin Pryzby wrote:\n> > On Sat, Mar 06, 2021 at 03:01:43PM -0500, Tom Lane wrote:\n> > > v4-0001 mostly teaches test.sh about specific changes that have to be\n> > > made to historic versions of the regression database to allow them\n> > > to be reloaded into current servers. As already discussed, this is\n> > > really duplicative of knowledge that's been embedded into the buildfarm\n> > > client over time. 
It'd be better if we could refactor that so that\n> > > the buildfarm shares a common database of these actions with test.sh.\n> > > And said database ought to be in our git tree, so committers could\n> > > fix problems without having to get Andrew involved every time.\n> > > I think this could be represented as a psql script, at least in\n> > > versions that have psql \\if (but that came in in v10, so maybe\n> > > we're there already).\n> > \n> > I started this. I don't know if it's compatible with the buildfarm client, but\n> > I think any issues maybe can be avoided by using \"IF EXISTS\".\n> \n> Here are the differences I see on a first pass (without putting too\n> much thought into how significant the differences are). Buildfarm code\n> I'm comparing against is at [1].\n> \n> - Both versions drop @#@ and array_cat_accum, but the buildfarm\n> additionally replaces them with a new operator and aggregate,\n> respectively.\n> \n> - The buildfarm's dropping of table OIDs is probably more resilient,\n> since it loops over pg_class looking for relhasoids.\n\nThese are all \"translated\" from test.sh, so follow its logic.\nMaybe it should be improved, but that's separate from this patch - which is\nalready doing a few unrelated things.\n\n> - The buildfarm adjusts pg_proc for the location of regress.so; I see\n> there's a commented placeholder for this at the end of the psql script\n> but it's not yet implemented.\n\nI didn't understand why this was done here, but it turns out it has to be done\n*after* calling pg_dump. So it has to stay where it is.\n\n> - Some version ranges are different between the two. For example,\n> abstime_/reltime_/tinterval_tbl are dropped by the buildfarm if the old\n> version is < 9.3, while the psql script drops them for old versions <=\n> 10.\n\nThis was an error. Thanks.\n\n> - The buildfarm drops the public.=> operator for much older versions of\n> Postgres. 
I assume we don't need that here.\n\n> As an aside, I think the \"fromv10\" naming scheme for the \"old version\n> <= 10\" condition is unintuitive. If the old version is e.g. 9.6, we're\n> not upgrading \"from 10\".\n\nI renamed the version vars - feel free to suggest something better.\n\nI'll solicit suggestions on what else to do to progress these.\n\n@Andrew: did you have any comment on this part ?\n\n|Subject: buildfarm xversion diff\n|Forking https://www.postgresql.org/message-id/20210328231433.GI15100@telsasoft.com\n|\n|I gave suggestion how to reduce the \"lines of diff\" metric almost to nothing,\n|allowing a very small \"fudge factor\", and which I think makes this a pretty\n|good metric rather than a passable one.\n\n-- \nJustin", "msg_date": "Sat, 11 Sep 2021 19:51:16 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "\nOn 9/11/21 8:51 PM, Justin Pryzby wrote:\n>\n> @Andrew: did you have any comment on this part ?\n>\n> |Subject: buildfarm xversion diff\n> |Forking https://www.postgresql.org/message-id/20210328231433.GI15100@telsasoft.com\n> |\n> |I gave suggestion how to reduce the \"lines of diff\" metric almost to nothing,\n> |allowing a very small \"fudge factor\", and which I think makes this a pretty\n> |good metric rather than a passable one.\n>\n\nSomehow I missed that. Looks like some good suggestions. I'll\nexperiment. 
(Note: we can't assume the presence of sed, especially on\nWindows).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 12 Sep 2021 14:41:23 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On 9/12/21 2:41 PM, Andrew Dunstan wrote:\n> On 9/11/21 8:51 PM, Justin Pryzby wrote:\n>> @Andrew: did you have any comment on this part ?\n>>\n>> |Subject: buildfarm xversion diff\n>> |Forking https://www.postgresql.org/message-id/20210328231433.GI15100@telsasoft.com\n>> |\n>> |I gave suggestion how to reduce the \"lines of diff\" metric almost to nothing,\n>> |allowing a very small \"fudge factor\", and which I think makes this a pretty\n>> |good metric rather than a passable one.\n>>\n> Somehow I missed that. Looks like some good suggestions. I'll\n> experiment. (Note: we can't assume the presence of sed, especially on\n> Windows).\n>\n>\n\nI tried with the attached patch on crake, which tests back as far as\n9.2. Here are the diff counts from HEAD:\n\n\nandrew@emma:HEAD $ grep -c '^[+-]' dumpdiff-REL9_* dumpdiff-REL_1*\ndumpdiff-HEAD\ndumpdiff-REL9_2_STABLE:514\ndumpdiff-REL9_3_STABLE:169\ndumpdiff-REL9_4_STABLE:185\ndumpdiff-REL9_5_STABLE:221\ndumpdiff-REL9_6_STABLE:11\ndumpdiff-REL_10_STABLE:11\ndumpdiff-REL_11_STABLE:73\ndumpdiff-REL_12_STABLE:73\ndumpdiff-REL_13_STABLE:73\ndumpdiff-REL_14_STABLE:0\ndumpdiff-HEAD:0\n\n\nI've also attached those non-empty dumpdiff files for information, since\nthey are quite small.\n\n\nThere is still work to do, but this is promising. 
Next step: try it on\nWindows.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 13 Sep 2021 09:20:37 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "\nOn 9/13/21 9:20 AM, Andrew Dunstan wrote:\n> On 9/12/21 2:41 PM, Andrew Dunstan wrote:\n>> On 9/11/21 8:51 PM, Justin Pryzby wrote:\n>>> @Andrew: did you have any comment on this part ?\n>>>\n>>> |Subject: buildfarm xversion diff\n>>> |Forking https://www.postgresql.org/message-id/20210328231433.GI15100@telsasoft.com\n>>> |\n>>> |I gave suggestion how to reduce the \"lines of diff\" metric almost to nothing,\n>>> |allowing a very small \"fudge factor\", and which I think makes this a pretty\n>>> |good metric rather than a passable one.\n>>>\n>> Somehow I missed that. Looks like some good suggestions. I'll\n>> experiment. (Note: we can't assume the presence of sed, especially on\n>> Windows).\n>>\n>>\n> I tried with the attached patch on crake, which tests back as far as\n> 9.2. Here are the diff counts from HEAD:\n>\n>\n> andrew@emma:HEAD $ grep -c '^[+-]' dumpdiff-REL9_* dumpdiff-REL_1*\n> dumpdiff-HEAD\n> dumpdiff-REL9_2_STABLE:514\n> dumpdiff-REL9_3_STABLE:169\n> dumpdiff-REL9_4_STABLE:185\n> dumpdiff-REL9_5_STABLE:221\n> dumpdiff-REL9_6_STABLE:11\n> dumpdiff-REL_10_STABLE:11\n> dumpdiff-REL_11_STABLE:73\n> dumpdiff-REL_12_STABLE:73\n> dumpdiff-REL_13_STABLE:73\n> dumpdiff-REL_14_STABLE:0\n> dumpdiff-HEAD:0\n>\n>\n> I've also attached those non-empty dumpdiff files for information, since\n> they are quite small.\n>\n>\n> There is still work to do, but this is promising. Next step: try it on\n> Windows.\n>\n>\n\nIt appears to do the right thing on Windows. yay!\n\n\nWe probably need to get smarter about the heuristics, though, e.g. by\ntaking into account the buildfarm options and the platform. 
It would\nalso help a lot if we could make vcregress.pl honor USE_MODULE_DB.\nThat's on my TODO list, but it just got a lot higher priority.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 15 Sep 2021 15:28:54 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "\nOn 9/15/21 3:28 PM, Andrew Dunstan wrote:\n> On 9/13/21 9:20 AM, Andrew Dunstan wrote:\n>> On 9/12/21 2:41 PM, Andrew Dunstan wrote:\n>>> On 9/11/21 8:51 PM, Justin Pryzby wrote:\n>>>> @Andrew: did you have any comment on this part ?\n>>>>\n>>>> |Subject: buildfarm xversion diff\n>>>> |Forking https://www.postgresql.org/message-id/20210328231433.GI15100@telsasoft.com\n>>>> |\n>>>> |I gave suggestion how to reduce the \"lines of diff\" metric almost to nothing,\n>>>> |allowing a very small \"fudge factor\", and which I think makes this a pretty\n>>>> |good metric rather than a passable one.\n>>>>\n>>> Somehow I missed that. Looks like some good suggestions. I'll\n>>> experiment. (Note: we can't assume the presence of sed, especially on\n>>> Windows).\n>>>\n>>>\n>> I tried with the attached patch on crake, which tests back as far as\n>> 9.2. Here are the diff counts from HEAD:\n>>\n>>\n>> andrew@emma:HEAD $ grep -c '^[+-]' dumpdiff-REL9_* dumpdiff-REL_1*\n>> dumpdiff-HEAD\n>> dumpdiff-REL9_2_STABLE:514\n>> dumpdiff-REL9_3_STABLE:169\n>> dumpdiff-REL9_4_STABLE:185\n>> dumpdiff-REL9_5_STABLE:221\n>> dumpdiff-REL9_6_STABLE:11\n>> dumpdiff-REL_10_STABLE:11\n>> dumpdiff-REL_11_STABLE:73\n>> dumpdiff-REL_12_STABLE:73\n>> dumpdiff-REL_13_STABLE:73\n>> dumpdiff-REL_14_STABLE:0\n>> dumpdiff-HEAD:0\n>>\n>>\n>> I've also attached those non-empty dumpdiff files for information, since\n>> they are quite small.\n>>\n>>\n>> There is still work to do, but this is promising. 
Next step: try it on\n>> Windows.\n>>\n>>\n> It appears to do the right thing on Windows. yay!\n>\n>\n> We probably need to get smarter about the heuristics, though, e.g. by\n> taking into account the buildfarm options and the platform. It would\n> also help a lot if we could make vcregress.pl honor USE_MODULE_DB.\n> That's on my TODO list, but it just got a lot higher priority.\n>\n>\n\n\nHere's what I've committed:\n<https://github.com/PGBuildFarm/client-code/commit/6317d82c0e897a29dabd57ed8159d13920401f96>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 24 Sep 2021 10:58:52 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Sat, Sep 11, 2021 at 07:51:16PM -0500, Justin Pryzby wrote:\n> These are all \"translated\" from test.sh, so follow its logic.\n> Maybe it should be improved, but that's separate from this patch - which is\n> already doing a few unrelated things.\n\nI was looking at this CF entry, and what you are doing in 0004 to move\nthe tweaks from pg_upgrade's test.sh to a separate SQL script that\nuses psql's meta-commands like \\if to check which version we are on is\nreally interesting. The patch does not apply anymore, so this needs a\nrebase. The entry has been switched as waiting on author by Tom, but\nyou did not update it after sending the new versions in [1]. 
I am\nwondering if we could have something cleaner than just a set of booleans\nas you do here for each check, as that does not help with the\nreadability of the tests.\n\n[1]: https://www.postgresql.org/message-id/20210912005116.GF26465@telsasoft.com\n--\nMichael", "msg_date": "Fri, 1 Oct 2021 16:58:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Fri, Oct 01, 2021 at 04:58:41PM +0900, Michael Paquier wrote:\n> I was looking at this CF entry, and what you are doing in 0004 to move\n> the tweaks from pg_upgrade's test.sh to a separate SQL script that\n> uses psql's meta-commands like \\if to check which version we are on is\n> really interesting. The patch does not apply anymore, so this needs a\n> rebase. The entry has been switched as waiting on author by Tom, but\n> you did not update it after sending the new versions in [1]. I am\n> wondering if we could have something cleaner than just a set of booleans\n> as you do here for each check, as that does not help with the\n> readability of the tests.\n\nAnd so, I am back at this thread, looking at the set of patches\nproposed from 0001 to 0004. The patches are rather messy and mix many\nthings and concepts, but there are basically four things that stand\nout:\n- test.sh is completely broken when using PG >= 14 as new version\nbecause of the removal of the test tablespace. Older versions of\npg_regress don't support --make-tablespacedir so I am fine to stick a\ncouple of extra mkdirs for testtablespace/, expected/ and sql/ to\nallow the script to work properly for major upgrades as a workaround,\nbut only if we use an old version. We need to do something here for\nHEAD and REL_14_STABLE.\n- The script would fail when using PG <= 11 as old version because of\nWITH OIDS relations. 
We need to do something down to REL_12_STABLE.\nI did not like much the approach taken to stick 4 ALTER TABLE queries\nthough (the patch was actually failing here for me), so instead I have\nborrowed what the buildfarm has been doing with a DO block. That\nworks fine, and that's more portable.\n- Not using --extra-float-digits with PG <= 11 as older version causes\na bunch of diffs in the dumps, making the whole unreadable. The patch\nwas doing that unconditionally for *all versions*, which is not good.\nWe should only do that on the versions that need it, and we know the\nold version number before taking any dumps so that's easy to check.\n- The addition of --wal-segsize and --allow-group-access breaks the\nscript when using PG < 10 at initdb time as these got added in 11.\nWith 10 getting EOL'd next year and per the lack of complaints, I am\nnot excited to do anything here and I'd rather leave this out so as we\nkeep coverage for those options across *all* major versions upgraded\nfrom 11~. The buildfarm has tests down to 9.2, but for devs my take\nis that this is enough for now.\n\nThis is for the basics in terms of fixing test.sh and what should be\nbackpatched. In this aspect patches 0001 and 0002 were a bit\nincorrect. I am not sure that 0003 is that interesting as designed as\nwe would miss any new core types introduced.\n\n0004 is something I'd like to get done on HEAD to ease the move of the\npg_upgrade tests to TAP, but it could be made a bit easier to read by\nnot having all those oldpgversion_XX_YY flags grouped together for\none. So I am going to rewrite portions of it once done with the\nabove.\n\nFor now, attached is a patch to address the issues with test.sh that I\nam planning to backpatch. This fixes the facility on HEAD, while\nminimizing the diffs between the dumps. We could do more, like a\ns/PROCEDURE/FUNCTION/ but that does not make the diffs really\nunreadable either. 
I have only tested that on HEAD as new version\ndown to 11 as the oldest version per the business with --wal-segsize.\nThis still needs tests with 12~ as new version though, which is boring\nbut not complicated at all :)\n--\nMichael", "msg_date": "Mon, 11 Oct 2021 14:38:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Mon, Oct 11, 2021 at 02:38:12PM +0900, Michael Paquier wrote:\n> For now, attached is a patch to address the issues with test.sh that I\n> am planning to backpatch. This fixes the facility on HEAD, while\n> minimizing the diffs between the dumps. We could do more, like a\n> s/PROCEDURE/FUNCTION/ but that does not make the diffs really\n> unreadable either. I have only tested that on HEAD as new version\n> down to 11 as the oldest version per the business with --wal-segsize.\n> This still needs tests with 12~ as new version though, which is boring\n> but not complicated at all :)\n\nOkay, tested and done as of fa66b6d.\n--\nMichael", "msg_date": "Wed, 13 Oct 2021 10:36:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Mon, Oct 11, 2021 at 02:38:12PM +0900, Michael Paquier wrote:\n> On Fri, Oct 01, 2021 at 04:58:41PM +0900, Michael Paquier wrote:\n> > I was looking at this CF entry, and what you are doing in 0004 to move\n> > the tweaks from pg_upgrade's test.sh to a separate SQL script that\n> > uses psql's meta-commands like \\if to check which version we are on is\n> > really interesting. The patch does not apply anymore, so this needs a\n> > rebase. The entry has been switched as waiting on author by Tom, but\n> > you did not update it after sending the new versions in [1]. 
I am\n> > wondering if we could have something cleaner than just a set of booleans\n> > as you do here for each check, as that does not help with the\n> > readability of the tests.\n> \n> And so, I am back at this thread, looking at the set of patches\n> proposed from 0001 to 0004. The patches are rather messy and mix many\n> things and concepts, but there are basically four things that stand\n> out:\n> - test.sh is completely broken when using PG >= 14 as new version\n> because of the removal of the test tablespace. Older versions of\n> pg_regress don't support --make-tablespacedir so I am fine to stick a\n> couple of extra mkdirs for testtablespace/, expected/ and sql/ to\n> allow the script to work properly for major upgrades as a workaround,\n> but only if we use an old version. We need to do something here for\n> HEAD and REL_14_STABLE.\n> - The script would fail when using PG <= 11 as old version because of\n> WITH OIDS relations. We need to do something down to REL_12_STABLE.\n> I did not like much the approach taken to stick 4 ALTER TABLE queries\n> though (the patch was actually failing here for me), so instead I have\n> borrowed what the buildfarm has been doing with a DO block. That\n> works fine, and that's more portable.\n> - Not using --extra-float-digits with PG <= 11 as older version causes\n> a bunch of diffs in the dumps, making the whole unreadable. The patch\n> was doing that unconditionally for *all versions*, which is not good.\n> We should only do that on the versions that need it, and we know the\n> old version number before taking any dumps so that's easy to check.\n> - The addition of --wal-segsize and --allow-group-access breaks the\n> script when using PG < 10 at initdb time as these got added in 11.\n> With 10 getting EOL'd next year and per the lack of complaints, I am\n> not excited to do anything here and I'd rather leave this out so as we\n> keep coverage for those options across *all* major versions upgraded\n> from 11~. 
The buildfarm has tests down to 9.2, but for devs my take\n> is that this is enough for now.\n\nMichael handled those in fa66b6d.\nNote that the patch assumes that the \"old version\" being pg_upgraded has\ncommit 97f73a978: \"Work around cross-version-upgrade issues created by commit 9e38c2bb5.\"\n\nThat may be good enough for test.sh, but if the kludges were moved to a .sql\nscript which was also run by the buildfarm (instead of its hardcoded kludges), then\nit might be necessary to handle the additional stuff my patch did, like:\n\n+ DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;\"\n+ DROP FUNCTION boxarea(box);\"\n+ DROP FUNCTION funny_dup17();\"\n+ DROP TABLE abstime_tbl;\"\n+ DROP TABLE reltime_tbl;\"\n+ DROP TABLE tinterval_tbl;\"\n+ DROP AGGREGATE first_el_agg_any(anyelement);\"\n+ DROP AGGREGATE array_cat_accum(anyarray);\"\n+ DROP OPERATOR @#@(NONE,bigint);\"\n\nOr, maybe it's guaranteed that the animals all run latest version of old\nbranches, in which case I think some of the BF's existing logic could be\ndropped, which would help to reconcile these two scripts:\n\n my $missing_funcs = q{drop function if exists public.boxarea(box); \n drop function if exists public.funny_dup17(); \n.. \n $prstmt = join(';', \n 'drop operator @#@ (NONE, bigint)', \n.. \n 'drop aggregate if exists public.array_cat_accum(anyarray)', \n\n> This is for the basics in terms of fixing test.sh and what should be\n> backpatched. In this aspect patches 0001 and 0002 were a bit\n> incorrect. 
I am not sure that 0003 is that interesting as designed as\n> we would miss any new core types introduced.\n\nWe wouldn't miss new core types, because of the 2nd part of type_sanity which\ntests that each core type was included in the \"manytypes\" table.\n\n+-- And now a test on the previous test, checking that all core types are \n+-- included in this table \n+-- XXX or some other non-catalog table processed by pg_upgrade \n+SELECT oid, typname, typtype, typelem, typarray, typarray FROM pg_type t \n+WHERE typtype NOT IN ('p', 'c') \n+-- reg* which cannot be pg_upgraded \n+AND oid != ALL(ARRAY['regproc', 'regprocedure', 'regoper', 'regoperator', 'regconfig', 'regdictionary', 'regnamespace', 'regcollation']::regtype[]) \n+-- XML might be disabled at compile-time \n+AND oid != ALL(ARRAY['xml', 'gtsvector', 'pg_node_tree', 'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list', 'pg_brin_bloom_summary', 'pg_brin_minmax_multi_summary']::regtype[]) \n+AND NOT EXISTS (SELECT 1 FROM pg_type u WHERE u.typarray=t.oid) -- exclude arrays \n+AND NOT EXISTS (SELECT 1 FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass); \n\n> 0004 is something I'd like to get done on HEAD to ease the move of the\n> pg_upgrade tests to TAP, but it could be made a bit easier to read by\n> not having all those oldpgversion_XX_YY flags grouped together for\n> one. 
So I am going to rewrite portions of it once done with the\n> above.\n\n\n-- \nJustin", "msg_date": "Sun, 7 Nov 2021 13:22:00 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Sun, Nov 07, 2021 at 01:22:00PM -0600, Justin Pryzby wrote:\n> That may be good enough for test.sh, but if the kludges were moved to a .sql\n> script which was also run by the buildfarm (in stead of its hardcoded kludges), then\n> it might be necessary to handle the additional stuff my patch did, like:\n\n> [...]\n>\n> Or, maybe it's guaranteed that the animals all run latest version of old\n> branches, in which case I think some of the BF's existing logic could be\n> dropped, which would help to reconcile these two scripts:\n\nI am pretty sure that it is safe to assume that all buildfarm animals\nrun the top of the stable branch they are testing, at least on the\ncommunity side. An advantage of moving all those SQLs to a script\nthat can be process with psql thanks to the \\if metacommands you have\nadded is that buildfarm clients are not required to immediately update\ntheir code to work properly. Considering as well that we should\nminimize the amount of duplication between all those things, I'd like\nto think that we'd better apply 0002 and consider a backpatch to allow\nthe buildfarm to catch up on it. It should at least allow to remove a\ngood chunk of the object cleanup done directly by the buildfarm.\n\n>> This is for the basics in terms of fixing test.sh and what should be\n>> backpatched. In this aspect patches 0001 and 0002 were a bit\n>> incorrect. 
I am not sure that 0003 is that interesting as designed as\n>> we would miss any new core types introduced.\n> \n> We wouldn't miss new core types, because of the 2nd part of type_sanity which\n> tests that each core type was included in the \"manytypes\" table.\n\n+-- XML might be disabled at compile-time\n+AND oid != ALL(ARRAY['xml', 'gtsvector', 'pg_node_tree',\n'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list',\n'pg_brin_bloom_summary', 'pg_brin_minmax_multi_summary']::regtype[])\n\nI believe that this comment is incomplete, applying only to the first\nelement listed in this array. I guess that this had better document\nwhy those catalogs are part of the list? Good to see that adding a\nreg* in core would immediately be noticed though, as far as I\nunderstand this SQL.\n--\nMichael", "msg_date": "Mon, 8 Nov 2021 12:53:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Sun, Nov 07, 2021 at 01:22:00PM -0600, Justin Pryzby wrote:\n> That may be good enough for test.sh, but if the kludges were moved to a .sql\n> script which was also run by the buildfarm (in stead of its hardcoded kludges), then\n> it might be necessary to handle the additional stuff my patch did, like:\n>\n> + DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;\"\n> + DROP FUNCTION boxarea(box);\"\n> + DROP FUNCTION funny_dup17();\"\n\nThese apply for an old version <= v10.\n\n> + DROP TABLE abstime_tbl;\"\n> + DROP TABLE reltime_tbl;\"\n> + DROP TABLE tinterval_tbl;\"\n\nold version <= 9.3.\n\n> + DROP AGGREGATE first_el_agg_any(anyelement);\"\n\nNot sure about this one.\n\n> + DROP AGGREGATE array_cat_accum(anyarray);\"\n> + DROP OPERATOR @#@(NONE,bigint);\"\n\nThese are on 9.4. It is worth noting that TestUpgradeXversion.pm\nrecreates those objects. 
I'd agree to close the gap completely rather\nthan just moving what test.sh does to wipe out a maximum client code\nfor the buildfarm.\n\n> Or, maybe it's guaranteed that the animals all run latest version of old\n> branches, in which case I think some of the BF's existing logic could be\n> dropped, which would help to reconcile these two scripts:\n\nThat seems like a worthy goal to reduce the amount of duplication with\nthe buildfarm code, while allowing tests from upgrades with older\nversions (the WAL segment size and group permission issue in test.sh\nhad better be addressed in a better way, perhaps once the pg_upgrade\ntests are moved to TAP). There are also things specific to contrib/\nmodules with older versions, but that may be too specific for this\nexercise.\n\n+\\if :oldpgversion_le84\n+DROP FUNCTION public.myfunc(integer);\n+\\endif\n\nThe oldest version tested by the buildfarm is 9.2, so we could ignore\nthis part I guess?\n\nAndrew, what do you think about this part? Based on my read of this\nthread, there is an agreement that this approach makes the buildfarm\ncode more manageable so as committers would not need to patch the\nbuildfarm code if their test fail. I agree with this conclusion, but\nI wanted to double-check with you first. This would need a backpatch\ndown to 10 so as we could clean up a maximum of code in\nTestUpgradeXversion.pm without waiting for an extra 5 years. Please\nnote that I am fine to send a patch for the buildfarm client.\n\n> We wouldn't miss new core types, because of the 2nd part of type_sanity which\n> tests that each core type was included in the \"manytypes\" table.\n\nThanks, I see your point now after a closer read.\n\nThere is still a pending question for contrib modules, but I think\nthat we need to think larger here with a better integration of\ncontrib/ modules in the upgrade testing process. Making that cheap\nwould require running the set of regression tests on the instance\nto-be-upgraded first. 
I think that one step in this direction would\nbe to have unique databases for each contrib/ modules, so as there is\nno overlap with objects dropped?\n\nHaving some checks with code types looks fine as a first step, so\nlet's do that. I have reviewed 0001, rewrote a couple of comments.\nAll the comments from upthread seem to be covered with that. So I'd\nlike to get that applied on HEAD. We could as well be less\nconservative and backpatch that down to 12 to follow on 7c15cef so we\nwould be more careful with 15~ already (a backpatch down to 14 would\nbe enough for this purpose, actually thanks to the 14->15 upgrade\npath).\n--\nMichael", "msg_date": "Wed, 17 Nov 2021 16:01:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "\nOn 11/17/21 02:01, Michael Paquier wrote:\n>\n> The oldest version tested by the buildfarm is 9.2, so we could ignore\n> this part I guess?\n>\n> Andrew, what do you think about this part? Based on my read of this\n> thread, there is an agreement that this approach makes the buildfarm\n> code more manageable so as committers would not need to patch the\n> buildfarm code if their test fail. I agree with this conclusion, but\n> I wanted to double-check with you first. This would need a backpatch\n> down to 10 so as we could clean up a maximum of code in\n> TestUpgradeXversion.pm without waiting for an extra 5 years. Please\n> note that I am fine to send a patch for the buildfarm client.\n>\n>\n\nIn general I'm in agreement with the direction here. 
If we can have a\nscript that applies to back branches to make them suitable for upgrade\ntesting instead of embedding this in the buildfarm client, so much the\nbetter.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 17 Nov 2021 10:07:17 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Wed, Nov 17, 2021 at 10:07:17AM -0500, Andrew Dunstan wrote:\n> In general I'm in agreement with the direction here. If we can have a\n> script that applies to back branches to make them suitable for upgrade\n> testing instead of embedding this in the buildfarm client, so much the\n> better.\n\nOkay. I have worked on 0001 to add the table to check after the\nbinary compatibilities and applied it. What remains on this thread is\n0002 to move all the SQL queries into a psql-able file with the set of\n\\if clauses to control which query is run depending on the backend\nversion. 
Justin, could you send a rebased version of that with all\nthe changes from the buildfarm client included?\n--\nMichael", "msg_date": "Thu, 18 Nov 2021 13:36:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Wed, Nov 17, 2021 at 04:01:19PM +0900, Michael Paquier wrote:\n> On Sun, Nov 07, 2021 at 01:22:00PM -0600, Justin Pryzby wrote:\n> > That may be good enough for test.sh, but if the kludges were moved to a .sql\n> > script which was also run by the buildfarm (in stead of its hardcoded kludges), then\n> > it might be necessary to handle the additional stuff my patch did, like:\n> >\n> > + DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;\"\n> > + DROP FUNCTION boxarea(box);\"\n> > + DROP FUNCTION funny_dup17();\"\n> \n> These apply for an old version <= v10.\n> \n> > + DROP TABLE abstime_tbl;\"\n> > + DROP TABLE reltime_tbl;\"\n> > + DROP TABLE tinterval_tbl;\"\n> \n> old version <= 9.3.\n> \n> > + DROP AGGREGATE first_el_agg_any(anyelement);\"\n> \n> Not sure about this one.\n\nSee 97f73a978fc1aca59c6ad765548ce0096d95a923\n\n> These are on 9.4. It is worth noting that TestUpgradeXversion.pm\n> recreates those objects. 
I'd agree to close the gap completely rather\n> than just moving what test.sh does to wipe out a maximum client code\n> for the buildfarm.\n\n>>Or, maybe it's guaranteed that the animals all run latest version of old\n>>branches, in which case I think some of the BF's existing logic could be\n>>dropped, which would help to reconcile these two scripts:\n>>\n>> my $missing_funcs = q{drop function if exists public.boxarea(box);\n>> drop function if exists public.funny_dup17();\n>>..\n>> $prstmt = join(';',\n>> 'drop operator @#@ (NONE, bigint)',\n>>..\n>> 'drop aggregate if exists public.array_cat_accum(anyarray)',\n>>\n\nI'm not sure if everything the buildfarm does is needed anymore, or if any of\nit could be removed now, rather than being implemented in test.sh.\n\nboxarea, funny_dup - see also db3af9feb19f39827e916145f88fa5eca3130cb2\nhttps://github.com/PGBuildFarm/client-code/commit/9ca42ac1783a8cf99c73b4f7c52bd05a6024669d\n\narray_larger_accum/array_cat_accum - see also 97f73a978fc1aca59c6ad765548ce0096d95a923\nhttps://github.com/PGBuildFarm/client-code/commit/a55c89869f30db894ab823df472e739cee2e8c91\n\n@#@ 76f412ab310554acb970a0b73c8d1f37f35548c6 ??\nhttps://github.com/PGBuildFarm/client-code/commit/b3fdb743d89dc91fcea47bd9651776c503f774ff\nhttps://github.com/PGBuildFarm/client-code/commit/b44e9390e2d8d904ff8cabd906a2d4b5c8bd300a\nhttps://github.com/PGBuildFarm/client-code/commit/3844503c8fde134f7cc29b3fb147d590b6d2fcc1\n\nabstime:\nhttps://github.com/PGBuildFarm/client-code/commit/f027d991d197036028ffa9070f4c9193076ed5ed\n\nputenv\nhttps://github.com/PGBuildFarm/client-code/commit/fa86d0b1bc7a8d7b9f15b1da8b8e43f4d3a08e2b", "msg_date": "Wed, 17 Nov 2021 22:47:28 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Okay. 
I have worked on 0001 to add the table to check after the\n> binary compatibilities and applied it.\n\nSomething funny about that on prion:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-11-18%2001%3A55%3A38\n\n@@ -747,6 +747,8 @@\n '{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tstzmultirange,\n arrayrange(ARRAY[1,2], ARRAY[2,1]),\n arraymultirange(arrayrange(ARRAY[1,2], ARRAY[2,1]));\n+ERROR: unrecognized key word: \"ec2\"\n+HINT: ACL key word must be \"group\" or \"user\".\n -- Sanity check on the previous table, checking that all core types are\n -- included in this table.\n SELECT oid, typname, typtype, typelem, typarray, typarray\n\nNot sure what's going on there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Nov 2021 23:57:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Wed, Nov 17, 2021 at 11:57:51PM -0500, Tom Lane wrote:\n> Something funny about that on prion:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-11-18%2001%3A55%3A38\n> Not sure what's going on there.\n\nYes, that was just some missing quoting in the aclitem of this new\ntable. 
prion uses a specific user name, \"ec2-user\", that caused the\nfailure.\n--\nMichael", "msg_date": "Thu, 18 Nov 2021 14:49:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Wed, Nov 17, 2021 at 10:47:28PM -0600, Justin Pryzby wrote:\n> I'm not sure if everything the buildfarm does is needed anymore, or if any of\n> it could be removed now, rather than being implemented in test.sh.\n\n+-- This file has a bunch of kludges needed for testing upgrades\nacross major versions\n+-- It supports testing the most recent version of an old release (not\nany arbitrary minor version).\n\nThis could be better-worded. Here is an idea:\n--\n-- SQL queries for major upgrade tests\n--\n-- This file includes a set of SQL queries to make a cluster to-be-upgraded\n-- compatible with the version this file is on. This requires psql,\n-- as per-version queries are controlled with a set of \\if clauses.\n\n+\\if :oldpgversion_le84\n+DROP FUNCTION public.myfunc(integer);\n+\\endif\nWe could retire this part for <= 8.4. The oldest version tested by\nthe buildfarm is 9.2.\n\n+ psql -X -d regression -f \"test-upgrade.sql\" || psql_fix_sql_status=$?\nShouldn't we use an absolute path here? I was testing a VPATH build\nand that was not working properly.\n\n+-- commit 9e38c2bb5 and 97f73a978\n+-- DROP AGGREGATE array_larger_accum(anyarray);\n+DROP AGGREGATE array_cat_accum(anyarray);\n+\n+-- commit 76f412ab3\n+-- DROP OPERATOR @#@(bigint,NONE);\n+DROP OPERATOR @#@(NONE,bigint);\n+\\endif\nThe buildfarm does \"CREATE OPERATOR @#@\" and \"CREATE AGGREGATE\narray_larger_accum\" when dealing with an old version between 9.5 and\n13. Shouldn't we do the same and create those objects rather than a\nplain DROP? 
What you are doing is not wrong, and it should allow\nupgrades to work, but that's a bit inconsistent with the buildfarm in\nterms of coverage.\n\n+ ver >= 905 AND ver <= 1300 AS oldpgversion_95_13,\n+ ver >= 906 AND ver <= 1300 AS oldpgversion_96_13,\n+ ver >= 906 AND ver <= 1000 AS oldpgversion_96_10,\nSo here, we have the choice between conditions that play with version\nranges or we could make those checks simpler but compensate with a set\nof IF EXISTS queries. I think that your choice is right. The\nbuildfarm mixes both styles to compensate with the cases where the\nobjects are created after a drop.\n\nThe list of objects and the version ranges look correct to me.\n--\nMichael", "msg_date": "Thu, 18 Nov 2021 15:58:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Thu, Nov 18, 2021 at 03:58:18PM +0900, Michael Paquier wrote:\n> + ver >= 905 AND ver <= 1300 AS oldpgversion_95_13,\n> + ver >= 906 AND ver <= 1300 AS oldpgversion_96_13,\n> + ver >= 906 AND ver <= 1000 AS oldpgversion_96_10,\n> So here, we have the choice between conditions that play with version\n> ranges or we could make those checks simpler but compensate with a set\n> of IF EXISTS queries. I think that your choice is right. The\n> buildfarm mixes both styles to compensate with the cases where the\n> objects are created after a drop.\n\nSo, I have come back to this part of the patch set, that moves the SQL\nqueries doing the pre-upgrade cleanups in the old version we upgrade\nfrom, and decided to go with what looks like the simplest approach,\nrelying on some IFEs depending on the object types if they don't\nexist for some cases.\n\nWhile checking the whole thing, I have noticed that some of the\noperations were not really necessary. 
The result is rather clean now,\nwith a linear organization of the version logic, so as it is a\nno-brainer to get that done in back-branches per the\nbackward-compatibility argument.\n\nI'll get that done down to 10 to maximize its influence, then I'll\nmove on with the buildfarm code and send a patch to plug this and\nreduce the dependencies between core and the buildfarm code.\n--\nMichael", "msg_date": "Wed, 1 Dec 2021 16:19:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" }, { "msg_contents": "On Wed, Dec 01, 2021 at 04:19:44PM +0900, Michael Paquier wrote:\n> I'll get that done down to 10 to maximize its influence, then I'll\n> move on with the buildfarm code and send a patch to plug this and\n> reduce the dependencies between core and the buildfarm code.\n\nOkay, I have checked this one this morning, and applied the split down\nto 10, so as we have a way to fix objects from the main regression\ntest suite. The buildfarm client gets a bit cleaned up after that (I\nhave a patch for that, but I am not 100% sure that it is right).\n\nStill, the global picture is larger than that because there is still\nnothing done for contrib/ modules included in cross-version checks of\npg_upgrade by the buildfarm. The core code tests don't do this much,\nbut if we were to do the same things as the buildfarm, then we would\nneed to run installcheck-world (roughly) on a deployed instance, then\npg_upgrade it. That's not going to be cheap, for sure.\n\nOne thing that we could do is to use unique names for the databases of\nthe contrib/ modules when running an installcheck, so as these are\npreserved for upgrades (the buildfarm client does that). 
This has the\neffect of increasing the number of databases for an instance\ninstallcheck'ed, so this had better be optional, at least.\n--\nMichael", "msg_date": "Thu, 2 Dec 2021 10:49:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade test for binary compatibility of core data types" } ]
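The thread above converges on gating the pre-upgrade cleanup SQL with psql \if branches driven by version-range flags such as `oldpgversion_95_13`. As an illustration only (this is not code from the patch or from the buildfarm client), the same gating logic can be sketched in a few lines; the `version_num` helper is a hypothetical stand-in for the `ver` column computed in the quoted queries, and the flag names and ranges are copied from them.

```python
# Illustrative sketch (not from the patch): reproduces the version gating
# used by test-upgrade.sql, where conditions such as
# "ver >= 905 AND ver <= 1300 AS oldpgversion_95_13" select which cleanup
# statements run against the old cluster before pg_upgrade.

def version_num(major, minor=0):
    """Encode a major version the way the patch's `ver` column does:
    9.5 -> 905, 9.6 -> 906, 10 -> 1000, 13 -> 1300."""
    return major * 100 + minor if major < 10 else major * 100

def upgrade_flags(ver):
    """Return the \\if gating flags for an old cluster of version `ver`,
    matching the branches quoted in this thread."""
    return {
        "oldpgversion_le84": ver <= 804,
        "oldpgversion_95_13": 905 <= ver <= 1300,
        "oldpgversion_96_13": 906 <= ver <= 1300,
        "oldpgversion_96_10": 906 <= ver <= 1000,
    }

# A 9.6 old cluster takes all three 9.5+/9.6+ cleanup branches;
# an 11 cluster no longer needs the 9.6-to-10 one.
flags_96 = upgrade_flags(version_num(9, 6))
flags_11 = upgrade_flags(version_num(11))
```

A single ranged flag per dropped object family keeps the script linear, which is the property the committed version of the cleanup file aims for.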
[ { "msg_contents": "Hi,\n\nCan someone let me know when you're doing a pg_dump, can you specify not to use the view rules so that the statement in the pg_dump file uses create view instead of create table/create rule? We are not using anything specific to 9.5 like jsonb columns, so the DDL should be compatible between versions when creating / defining objects, it just the way it's creating them that has changed which is causing us an issue.\n\nWe dump from 9.5.5 and restore to one 9.5.18 server and two 9.2 servers....we've been doing this for awhile and had no issues until recently with certain views that are trying to be restored with rule views (some views in the pg_dump file are created with create view and some by create table / create rule) I've read this: https://www.postgresql.org/docs/9.5/rules-views.html but haven't fully understood it yet as to when it applies the create view vs create table/rule syntax, as the pg_dump has a combination of both.\n\nOn the 9.5.18 server where it has the create table syntax for a view, it creates a table instead of a view.\n\nFor the 9.2.9 servers, it generates errors:\n\npg_restore: [archiver (db)] Error from TOC entry 11240; 1259 42703182 TABLE v_my_view postgres\nLINE 19: ...E ONLY v_my_view REPLICA ID...\n Command was: CREATE TABLE v_my_view(\n\npg_restore: [archiver (db)] Error from TOC entry 87613; 2618 42703185 RULE _RETURN postgres\n Command was: CREATE RULE \"_RETURN\" AS\n\nAside from an upgrade to all the servers, is there anyway in pg_dump to set a compatibility level when dumping the database? I checked here, and I don't think there is: https://www.postgresql.org/docs/9.5/app-pgdump.html\n\nMany thanks in advance.\n\nAlex\n\nOur setup is the following:\n1. Source Postgresql 9.5 server (pg_dump source)\nPostgreSQL 9.5.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17), 64-bit\n\n2. 
Two 9.2.9 servers (we restore to)\nPostgreSQL 9.2.9 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\n\n3. One 9.5 (we restore to)\nPostgreSQL 9.5.18 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313\n(Red Hat 4.4.7-23), 64-bit\n\nSent with [ProtonMail](https://protonmail.com) Secure Email.", "msg_date": "Wed, 09 Oct 2019 01:01:39 +0000", "msg_from": "Alex Williams <valenceshell@protonmail.com>", "msg_from_op": true, "msg_subject": "pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "One quick note, on the 9.5.18, while it created a table, it possibly didn't convert it into a view (that pg admin shows it as) as it probably didn't reach the end to apply the rule (I killed the restore manually when I was checking specifically for the view in question and noticed that it created a table instead of a view; unlike 9.2.9 which generated an error.)\n\nSent with [ProtonMail](https://protonmail.com) Secure Email.\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Tuesday, October 8, 2019 9:01 PM, Alex Williams <valenceshell@protonmail.com> wrote:\n\n> Hi,\n>\n> Can someone let me know when you're doing a pg_dump, can you specify not to use the view rules so that the statement in the pg_dump file uses create view instead of create table/create rule? 
We are not using anything specific to 9.5 like jsonb columns, so the DDL should be compatible between versions when creating / defining objects, it just the way it's creating them that has changed which is causing us an issue.\n>\n> We dump from 9.5.5 and restore to one 9.5.18 server and two 9.2 servers....we've been doing this for awhile and had no issues until recently with certain views that are trying to be restored with rule views (some views in the pg_dump file are created with create view and some by create table / create rule) I've read this: https://www.postgresql.org/docs/9.5/rules-views.html but haven't fully understood it yet as to when it applies the create view vs create table/rule syntax, as the pg_dump has a combination of both.\n>\n> On the 9.5.18 server where it has the create table syntax for a view, it creates a table instead of a view.\n>\n> For the 9.2.9 servers, it generates errors:\n>\n> pg_restore: [archiver (db)] Error from TOC entry 11240; 1259 42703182 TABLE v_my_view postgres\n> LINE 19: ...E ONLY v_my_view REPLICA ID...\n> Command was: CREATE TABLE v_my_view(\n>\n> pg_restore: [archiver (db)] Error from TOC entry 87613; 2618 42703185 RULE _RETURN postgres\n> Command was: CREATE RULE \"_RETURN\" AS\n>\n> Aside from an upgrade to all the servers, is there anyway in pg_dump to set a compatibility level when dumping the database? I checked here, and I don't think there is: https://www.postgresql.org/docs/9.5/app-pgdump.html\n>\n> Many thanks in advance.\n>\n> Alex\n>\n> Our setup is the following:\n> 1. Source Postgresql 9.5 server (pg_dump source)\n> PostgreSQL 9.5.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17), 64-bit\n>\n> 2. Two 9.2.9 servers (we restore to)\n> PostgreSQL 9.2.9 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\n>\n> 3. 
One 9.5 (we restore to)\n> PostgreSQL 9.5.18 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313\n> (Red Hat 4.4.7-23), 64-bit\n>\n> Sent with [ProtonMail](https://protonmail.com) Secure Email.", "msg_date": "Wed, 09 Oct 2019 01:11:42 +0000", "msg_from": "Alex Williams <valenceshell@protonmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "Alex Williams <valenceshell@protonmail.com> writes:\n> Can someone let me know when you're doing a pg_dump, can you specify not to use the view rules so that the statement in the pg_dump file uses create view instead of create table/create rule?\n\nNo.\n\n> We dump from 9.5.5 and restore to one 9.5.18 server and two 9.2 servers....we've been doing this for awhile and had no issues until recently with certain views that are trying to be restored with rule views (some views in the pg_dump file are created with create view and some by create table / create rule)\n\nIn general, we don't promise that pg_dump output from major version N\ncan be loaded into previous major versions. 
Having said that, 9.2\nshould not have a problem with either the CREATE VIEW or CREATE TABLE-\nplus-CREATE RULE approaches per se, so there's some critical detail\nthat you haven't told us about. You didn't show the actual error\nmessages, either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Oct 2019 01:01:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "Hi Tom,\n\nThanks for your reply, we appreciate it. This is a long reply, sorry about that, but if there's any specific I can provide you that helps, please let me know.\n\nOK, for the log, we do this when restoring:\n\npg_restore -d my_database -U postgres my_database.dump >restore_result.txt 2>&1\n\nbut our log file only records the following (I've added more detail below using the cmds below.) The view name/column names have changed for the mailing list:\n\n\"CREATE TABLE\" - cat restore_result.txt | grep -A 10 -B 10 -i \"CREATE TABLE\" | more\n\npg_restore: [archiver (db)] Error from TOC entry 11240; 1259 42703182 TABLE my_view postgres\npg_restore: [archiver (db)] could not execute query: ERROR: syntax error at or near \"REPLICA\"\nLINE 19: ...E ONLY my_view REPLICA ID..\n\npg_restore: [archiver (db)] could not execute query: ERROR: relation \"myschema.my_view \" does not exist\n\n\n\n\"CREATE RULE\" - cat restore_result.txt | grep -A 10 -B 10 -i \"CREATE RULE\" | more\npg_restore: [archiver (db)] Error from TOC entry 87618; 2618 42703185 RULE _RETURN postgres\npg_restore: [archiver (db)] could not execute query: ERROR: relation \"my_view\" does not exist\n Command was: CREATE RULE \"_RETURN\" AS\n ON SELECT TO my_view DO INSTEAD SELECT DISTINCT d.name AS p...\n\n\n\nWe assumed it was the create rule but also looked at \"REPLICA ID\" and couldn't find anything on the properties that it had such a property ... 
we used the query from here: https://stackoverflow.com/questions/55249431/find-replica-identity-for-a-postgres-table\n\nSELECT CASE relreplident\n WHEN 'd' THEN 'default'\n WHEN 'n' THEN 'nothing'\n WHEN 'f' THEN 'full'\n WHEN 'i' THEN 'index'\n END AS replica_identity\nFROM pg_class\nWHERE oid = 'my_view'::regclass;\n\nand it returned nothing. But I'm wondering could it be any of the tables that the view uses that may have that id; I'm not sure what REPLICA ID is used for, but our source DB for the dump has the the wal_level set to hot standby to sync with another server (same version) without using a dump (for failover/readonly report queries.)\n\nReading this:\nhttps://paquier.xyz/postgresql-2/postgres-9-4-feature-highlight-replica-identity-logical-replication/\n\nand this\n\nhttps://www.postgresql.org/docs/devel/sql-altertable.html\n\nI'm not sure what config param would set that other than the wal_level, which in our case is hot standby not logical, but it looks like 9.2 doesn't support that property and that could be causing the issue? Also, I see the replication settings in the conf file, but they are all defaulted to being commented out.\n\n\nSo I'm still not sure what it could be. 
I'm in process of restoring the db from 9.5.5 to 9.5.18 at the moment to see if it works (currently \"my_view\" is still a table, I'm waiting for the restore to complete to see if when the rule is applied, if it hasn't yet, that it shows as a view and returns records.\")\n\nI'll see if I can extract the statements from another dump that doesn't use the Fc switches that we normally use, and try running them manually.\n\nThanks again for your help!\n\nAlex\n\n\nSent with ProtonMail Secure Email.\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Wednesday, October 9, 2019 1:01 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alex Williams valenceshell@protonmail.com writes:\n>\n> > Can someone let me know when you're doing a pg_dump, can you specify not to use the view rules so that the statement in the pg_dump file uses create view instead of create table/create rule?\n>\n> No.\n>\n> > We dump from 9.5.5 and restore to one 9.5.18 server and two 9.2 servers....we've been doing this for awhile and had no issues until recently with certain views that are trying to be restored with rule views (some views in the pg_dump file are created with create view and some by create table / create rule)\n>\n> In general, we don't promise that pg_dump output from major version N\n> can be loaded into previous major versions. Having said that, 9.2\n> should not have a problem with either the CREATE VIEW or CREATE TABLE-\n> plus-CREATE RULE approaches per se, so there's some critical detail\n> that you haven't told us about. 
You didn't show the actual error\n> messages, either.\n>\n> regards, tom lane\n\n\n\n\n", "msg_date": "Wed, 09 Oct 2019 21:32:25 +0000", "msg_from": "Alex Williams <valenceshell@protonmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "\nSorry, there was a bit more that after reviewing again what I sent, I missed copying from the \"CREATE TABLE\" log:\n\npg_restore: [archiver (db)] Error from TOC entry 11240; 1259 42703182 TABLE my_view postgres\npg_restore: [archiver (db)] could not execute query: ERROR: syntax error at or near \"REPLICA\"\nLINE 19: ...E ONLY my_view REPLICA ID...\n ^\n Command was: CREATE TABLE my_view (\n product character varying(255),\n product_id integer,\n payer...\npg_restore: [archiver (db)] could not execute query: ERROR: relation \"myschema.my_view\" does not exist\n\n\nSent with ProtonMail Secure Email.\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Wednesday, October 9, 2019 5:32 PM, Alex Williams <valenceshell@protonmail.com> wrote:\n\n> Hi Tom,\n>\n> Thanks for your reply, we appreciate it. This is a long reply, sorry about that, but if there's any specific I can provide you that helps, please let me know.\n>\n> OK, for the log, we do this when restoring:\n>\n> pg_restore -d my_database -U postgres my_database.dump >restore_result.txt 2>&1\n>\n> but our log file only records the following (I've added more detail below using the cmds below.) 
The view name/column names have changed for the mailing list:\n>\n> \"CREATE TABLE\" - cat restore_result.txt | grep -A 10 -B 10 -i \"CREATE TABLE\" | more\n>\n> pg_restore: [archiver (db)] Error from TOC entry 11240; 1259 42703182 TABLE my_view postgres\n> pg_restore: [archiver (db)] could not execute query: ERROR: syntax error at or near \"REPLICA\"\n> LINE 19: ...E ONLY my_view REPLICA ID..\n>\n> pg_restore: [archiver (db)] could not execute query: ERROR: relation \"myschema.my_view \" does not exist\n>\n> \"CREATE RULE\" - cat restore_result.txt | grep -A 10 -B 10 -i \"CREATE RULE\" | more\n> pg_restore: [archiver (db)] Error from TOC entry 87618; 2618 42703185 RULE _RETURN postgres\n> pg_restore: [archiver (db)] could not execute query: ERROR: relation \"my_view\" does not exist\n> Command was: CREATE RULE \"_RETURN\" AS\n> ON SELECT TO my_view DO INSTEAD SELECT DISTINCT d.name AS p...\n>\n> We assumed it was the create rule but also looked at \"REPLICA ID\" and couldn't find anything on the properties that it had such a property ... we used the query from here: https://stackoverflow.com/questions/55249431/find-replica-identity-for-a-postgres-table\n>\n> SELECT CASE relreplident\n> WHEN 'd' THEN 'default'\n> WHEN 'n' THEN 'nothing'\n> WHEN 'f' THEN 'full'\n> WHEN 'i' THEN 'index'\n> END AS replica_identity\n> FROM pg_class\n> WHERE oid = 'my_view'::regclass;\n>\n> and it returned nothing. 
But I'm wondering could it be any of the tables that the view uses that may have that id; I'm not sure what REPLICA ID is used for, but our source DB for the dump has the the wal_level set to hot standby to sync with another server (same version) without using a dump (for failover/readonly report queries.)\n>\n> Reading this:\n> https://paquier.xyz/postgresql-2/postgres-9-4-feature-highlight-replica-identity-logical-replication/\n>\n> and this\n>\n> https://www.postgresql.org/docs/devel/sql-altertable.html\n>\n> I'm not sure what config param would set that other than the wal_level, which in our case is hot standby not logical, but it looks like 9.2 doesn't support that property and that could be causing the issue? Also, I see the replication settings in the conf file, but they are all defaulted to being commented out.\n>\n> So I'm still not sure what it could be. I'm in process of restoring the db from 9.5.5 to 9.5.18 at the moment to see if it works (currently \"my_view\" is still a table, I'm waiting for the restore to complete to see if when the rule is applied, if it hasn't yet, that it shows as a view and returns records.\")\n>\n> I'll see if I can extract the statements from another dump that doesn't use the Fc switches that we normally use, and try running them manually.\n>\n> Thanks again for your help!\n>\n> Alex\n>\n> Sent with ProtonMail Secure Email.\n>\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> On Wednesday, October 9, 2019 1:01 AM, Tom Lane tgl@sss.pgh.pa.us wrote:\n>\n> > Alex Williams valenceshell@protonmail.com writes:\n> >\n> > > Can someone let me know when you're doing a pg_dump, can you specify not to use the view rules so that the statement in the pg_dump file uses create view instead of create table/create rule?\n> >\n> > No.\n> >\n> > > We dump from 9.5.5 and restore to one 9.5.18 server and two 9.2 servers....we've been doing this for awhile and had no issues until recently with certain views that are trying to be restored with rule views 
(some views in the pg_dump file are created with create view and some by create table / create rule)\n> >\n> > In general, we don't promise that pg_dump output from major version N\n> > can be loaded into previous major versions. Having said that, 9.2\n> > should not have a problem with either the CREATE VIEW or CREATE TABLE-\n> > plus-CREATE RULE approaches per se, so there's some critical detail\n> > that you haven't told us about. You didn't show the actual error\n> > messages, either.\n> > regards, tom lane\n\n\n\n\n", "msg_date": "Wed, 09 Oct 2019 21:34:53 +0000", "msg_from": "Alex Williams <valenceshell@protonmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "Ugh, sorry again, missed one more part, here is the full error for the create table in the log:\n\npg_restore: [archiver (db)] Error from TOC entry 11240; 1259 42703182 TABLE my_view postgres\npg_restore: [archiver (db)] could not execute query: ERROR: syntax error at or near \"REPLICA\"\nLINE 19: ...E ONLY my_view REPLICA ID...\n ^\n Command was: CREATE TABLE my_view (\n product character varying(255),\n product_id integer,\n payer...\npg_restore: [archiver (db)] could not execute query: ERROR: relation \"myschema.my_view \" does not exist\n Command was: ALTER TABLE myschema.my_view OWNER TO postgres;\n\n\nBut you can see, it doesn't show the whole statement, it uses an ellipses after a certain amount of lines/chars.\n\nSent with ProtonMail Secure Email.\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Wednesday, October 9, 2019 5:34 PM, Alex Williams <valenceshell@protonmail.com> wrote:\n\n> Sorry, there was a bit more that after reviewing again what I sent, I missed copying from the \"CREATE TABLE\" log:\n>\n> pg_restore: [archiver (db)] Error from TOC entry 11240; 1259 42703182 TABLE my_view postgres\n> pg_restore: [archiver (db)] could not execute query: ERROR: syntax error at or near \"REPLICA\"\n> LINE 19: 
...E ONLY my_view REPLICA ID...\n> ^\n> Command was: CREATE TABLE my_view (\n> product character varying(255),\n> product_id integer,\n> payer...\n> pg_restore: [archiver (db)] could not execute query: ERROR: relation \"myschema.my_view\" does not exist\n>\n> Sent with ProtonMail Secure Email.\n>\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> On Wednesday, October 9, 2019 5:32 PM, Alex Williams valenceshell@protonmail.com wrote:\n>\n> > Hi Tom,\n> > Thanks for your reply, we appreciate it. This is a long reply, sorry about that, but if there's any specific I can provide you that helps, please let me know.\n> > OK, for the log, we do this when restoring:\n> > pg_restore -d my_database -U postgres my_database.dump >restore_result.txt 2>&1\n> > but our log file only records the following (I've added more detail below using the cmds below.) The view name/column names have changed for the mailing list:\n> > \"CREATE TABLE\" - cat restore_result.txt | grep -A 10 -B 10 -i \"CREATE TABLE\" | more\n> > pg_restore: [archiver (db)] Error from TOC entry 11240; 1259 42703182 TABLE my_view postgres\n> > pg_restore: [archiver (db)] could not execute query: ERROR: syntax error at or near \"REPLICA\"\n> > LINE 19: ...E ONLY my_view REPLICA ID..\n> > pg_restore: [archiver (db)] could not execute query: ERROR: relation \"myschema.my_view \" does not exist\n> > \"CREATE RULE\" - cat restore_result.txt | grep -A 10 -B 10 -i \"CREATE RULE\" | more\n> > pg_restore: [archiver (db)] Error from TOC entry 87618; 2618 42703185 RULE _RETURN postgres\n> > pg_restore: [archiver (db)] could not execute query: ERROR: relation \"my_view\" does not exist\n> > Command was: CREATE RULE \"_RETURN\" AS\n> > ON SELECT TO my_view DO INSTEAD SELECT DISTINCT d.name AS p...\n> > We assumed it was the create rule but also looked at \"REPLICA ID\" and couldn't find anything on the properties that it had such a property ... 
we used the query from here: https://stackoverflow.com/questions/55249431/find-replica-identity-for-a-postgres-table\n> > SELECT CASE relreplident\n> > WHEN 'd' THEN 'default'\n> > WHEN 'n' THEN 'nothing'\n> > WHEN 'f' THEN 'full'\n> > WHEN 'i' THEN 'index'\n> > END AS replica_identity\n> > FROM pg_class\n> > WHERE oid = 'my_view'::regclass;\n> > and it returned nothing. But I'm wondering could it be any of the tables that the view uses that may have that id; I'm not sure what REPLICA ID is used for, but our source DB for the dump has the the wal_level set to hot standby to sync with another server (same version) without using a dump (for failover/readonly report queries.)\n> > Reading this:\n> > https://paquier.xyz/postgresql-2/postgres-9-4-feature-highlight-replica-identity-logical-replication/\n> > and this\n> > https://www.postgresql.org/docs/devel/sql-altertable.html\n> > I'm not sure what config param would set that other than the wal_level, which in our case is hot standby not logical, but it looks like 9.2 doesn't support that property and that could be causing the issue? Also, I see the replication settings in the conf file, but they are all defaulted to being commented out.\n> > So I'm still not sure what it could be. 
I'm in process of restoring the db from 9.5.5 to 9.5.18 at the moment to see if it works (currently \"my_view\" is still a table, I'm waiting for the restore to complete to see if when the rule is applied, if it hasn't yet, that it shows as a view and returns records.\")\n> > I'll see if I can extract the statements from another dump that doesn't use the Fc switches that we normally use, and try running them manually.\n> > Thanks again for your help!\n> > Alex\n> > Sent with ProtonMail Secure Email.\n> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> > On Wednesday, October 9, 2019 1:01 AM, Tom Lane tgl@sss.pgh.pa.us wrote:\n> >\n> > > Alex Williams valenceshell@protonmail.com writes:\n> > >\n> > > > Can someone let me know when you're doing a pg_dump, can you specify not to use the view rules so that the statement in the pg_dump file uses create view instead of create table/create rule?\n> > >\n> > > No.\n> > >\n> > > > We dump from 9.5.5 and restore to one 9.5.18 server and two 9.2 servers....we've been doing this for awhile and had no issues until recently with certain views that are trying to be restored with rule views (some views in the pg_dump file are created with create view and some by create table / create rule)\n> > >\n> > > In general, we don't promise that pg_dump output from major version N\n> > > can be loaded into previous major versions. Having said that, 9.2\n> > > should not have a problem with either the CREATE VIEW or CREATE TABLE-\n> > > plus-CREATE RULE approaches per se, so there's some critical detail\n> > > that you haven't told us about. 
You didn't show the actual error\n> > > messages, either.\n> > > regards, tom lane\n\n\n\n\n", "msg_date": "Wed, 09 Oct 2019 21:39:52 +0000", "msg_from": "Alex Williams <valenceshell@protonmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "Alex Williams <valenceshell@protonmail.com> writes:\n> Ugh, sorry again, missed one more part, here is the full error for the create table in the log:\n> pg_restore: [archiver (db)] Error from TOC entry 11240; 1259 42703182 TABLE my_view postgres\n> pg_restore: [archiver (db)] could not execute query: ERROR: syntax error at or near \"REPLICA\"\n> LINE 19: ...E ONLY my_view REPLICA ID...\n> ^\n> Command was: CREATE TABLE my_view (\n> product character varying(255),\n> product_id integer,\n> payer...\n\nThis seems to be a chunk of a command like\n\nALTER TABLE ONLY my_view REPLICA IDENTITY FULL;\n\n(or possibly REPLICA IDENTITY NOTHING), which pg_dump will emit if the\ntable has a non-default relreplident setting. I do not, however,\nunderstand your statement that this is a view. AFAIK views should never\nhave non-default relreplident settings, and besides that, the TOC entry\ndescription says it's a table not a view. (If it's a materialized view,\nit could have relreplident, but its TOC entry still shouldn't say TABLE.)\n\nAnyway it's hardly surprising that 9.2 is choking on that syntax; it\ndoesn't have the REPLICA IDENTITY feature.\n\npg_dump actually is taking some pity on you here, in that it's emitting\nthis as a separate ALTER TABLE command, not as part of CREATE TABLE\ndirectly. This means you just need to get 9.2 to ignore the error\non the ALTER TABLE and keep plugging. 
I think what you need to do\nis something like pg_restore to stdout and then pipe stdout to psql,\nrather than connecting directly to the target server.\n\nAnother fix, if this table was only accidentally labeled with\na replica identity (which I'm suspecting because you don't seem\nto recognize the feature), is to get rid of the marking in the\nsource database:\n\nALTER TABLE ONLY my_view REPLICA IDENTITY DEFAULT;\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Oct 2019 18:46:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "Hi Tom,\n\nThanks again for your quick reply! I've attached three images of the view:\n\n1. The result of the alter table syntax you sent ( and it's definitely a view, actually, I created a few views in the past few weeks, and they all get the same error when trying to restore.)\n\n2. The View definition\n\n3. View info schema result\n\nIn text here, running this:\nALTER TABLE ONLY my_view REPLICA IDENTITY DEFAULT;\n\nReturns:\nERROR: \"myschema.my_view\" is not a table or materialized view\nSQL state: 42809\n\nAlso, it's been about three hours so far into the restore on the server that is 9.5.18, but the restore of that view is still a table, hasn't changed to a view yet. I assumed it would run the DDL statements first, then the data copy and possibly the rules last, so I'm still waiting for the restore to complete to see if it changes. 
The 9.2.9 Server just fails.\n\nAnd running this: select * from INFORMATION_SCHEMA.views where table_name = 'my_view' returns the expected result.\n\nI'll be away tomorrow, but will reply back on Friday with the result of your restore direction + the result of zgrepping the dump without the Fc switch that I have (sorry, didn't have a chance to do that yet.)\n\n\nThanks again,\n\nAlex\n\n\n\n\nSent with ProtonMail Secure Email.\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Wednesday, October 9, 2019 6:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alex Williams valenceshell@protonmail.com writes:\n>\n> > Ugh, sorry again, missed one more part, here is the full error for the create table in the log:\n> > pg_restore: [archiver (db)] Error from TOC entry 11240; 1259 42703182 TABLE my_view postgres\n> > pg_restore: [archiver (db)] could not execute query: ERROR: syntax error at or near \"REPLICA\"\n> > LINE 19: ...E ONLY my_view REPLICA ID...\n> > ^\n> > Command was: CREATE TABLE my_view (\n> > product character varying(255),\n> > product_id integer,\n> > payer...\n>\n> This seems to be a chunk of a command like\n>\n> ALTER TABLE ONLY my_view REPLICA IDENTITY FULL;\n>\n> (or possibly REPLICA IDENTITY NOTHING), which pg_dump will emit if the\n> table has a non-default relreplident setting. I do not, however,\n> understand your statement that this is a view. AFAIK views should never\n> have non-default relreplident settings, and besides that, the TOC entry\n> description says it's a table not a view. (If it's a materialized view,\n> it could have relreplident, but its TOC entry still shouldn't say TABLE.)\n>\n> Anyway it's hardly surprising that 9.2 is choking on that syntax; it\n> doesn't have the REPLICA IDENTITY feature.\n>\n> pg_dump actually is taking some pity on you here, in that it's emitting\n> this as a separate ALTER TABLE command, not as part of CREATE TABLE\n> directly. 
This means you just need to get 9.2 to ignore the error\n> on the ALTER TABLE and keep plugging. I think what you need to do\n> is something like pg_restore to stdout and then pipe stdout to psql,\n> rather than connecting directly to the target server.\n>\n> Another fix, if this table was only accidentally labeled with\n> a replica identity (which I'm suspecting because you don't seem\n> to recognize the feature), is to get rid of the marking in the\n> source database:\n>\n> ALTER TABLE ONLY my_view REPLICA IDENTITY DEFAULT;\n>\n> regards, tom lane", "msg_date": "Thu, 10 Oct 2019 00:24:12 +0000", "msg_from": "Alex Williams <valenceshell@protonmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "Alex Williams <valenceshell@protonmail.com> writes:\n> [ gripes about pg_dump printing REPLICA IDENTITY NOTHING for a view ]\n\nI spent a little bit of time trying to reproduce this, and indeed I can,\nin versions before v10.\n\nregression=# create table mytab (f1 int primary key, f2 text);\nCREATE TABLE\nregression=# create view myview as select * from mytab group by f1;\nCREATE VIEW\n\nThis situation is problematic for pg_dump because validity of the\nview depends on the existence of mytab's primary key constraint,\nand we don't create primary keys till late in the restore process.\nSo it has to break myview into two parts, one to emit during normal\ntable/view creation and one to emit after index creation.\n\nWith 9.5's pg_dump, what comes out is:\n\n--\n-- Name: myview; Type: TABLE; Schema: public; Owner: postgres\n--\n\nCREATE TABLE public.myview (\n f1 integer,\n f2 text\n);\n\nALTER TABLE ONLY public.myview REPLICA IDENTITY NOTHING;\n\n\nALTER TABLE public.myview OWNER TO postgres;\n\nand then later:\n\n--\n-- Name: myview _RETURN; Type: RULE; Schema: public; Owner: postgres\n--\n\nCREATE RULE \"_RETURN\" AS\n ON SELECT TO public.myview DO INSTEAD SELECT mytab.f1,\n 
mytab.f2\n FROM public.mytab\n GROUP BY mytab.f1;\n\nThe reason we get \"REPLICA IDENTITY NOTHING\" is that a view's relreplident\nis set to 'n' not 'd', which might not have been a great choice. But why\ndoes pg_dump print anything --- it knows perfectly well that it should not\nemit REPLICA IDENTITY for relkinds that don't have storage? The answer\nemerges from looking at the code that breaks the dependency loop:\n\n\t/* pretend view is a plain table and dump it that way */\n\tviewinfo->relkind = 'r';\t/* RELKIND_RELATION */\n\nAfter that, pg_dump *doesn't* know it's a view, which also explains\nwhy the comment says TABLE not VIEW.\n\nThis is fixed in v10 and up thanks to d8c05aff5. I was hesitant to\nback-patch that at the time, but now that it's survived in the field\nfor a couple years, I think a good case could be made for doing so.\nAfter a bit of looking around, the main argument I can find against\nit is that emitting 'CREATE OR REPLACE VIEW' in a dropStmt will\nbreak pg_restore versions preceding this commit:\n\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nBranch: master Release: REL_10_BR [ac888986f] 2016-11-17 14:59:13 -0500\nBranch: REL9_6_STABLE Release: REL9_6_2 [0eaa5118a] 2016-11-17 14:59:19 -0500\nBranch: REL9_5_STABLE Release: REL9_5_6 [a7864037d] 2016-11-17 14:59:23 -0500\nBranch: REL9_4_STABLE Release: REL9_4_11 [e69b532be] 2016-11-17 14:59:26 -0500\n\n Improve pg_dump/pg_restore --create --if-exists logic.\n \n Teach it not to complain if the dropStmt attached to an archive entry\n is actually spelled CREATE OR REPLACE VIEW, since that will happen due to\n an upcoming bug fix. Also, if it doesn't recognize a dropStmt, have it\n print a WARNING and then emit the dropStmt unmodified. 
That seems like a\n much saner behavior than Assert'ing or dumping core due to a null-pointer\n dereference, which is what would happen before :-(.\n \n Back-patch to 9.4 where this option was introduced.\n\nAFAIR, we have not had complaints about back-rev pg_restore failing\non archives made by v10 and up; but perhaps it's more likely that\nsomeone would try to use, say, 9.5.5 pg_restore with a dump made by\n9.5.20 pg_dump.\n\nAn alternative that just responds to Alex's issue without fixing the\nother problems d8c05aff5 fixed is to hack the dependency-loop code\nlike this:\n\n\t/* pretend view is a plain table and dump it that way */\n\tviewinfo->relkind = 'r';\t/* RELKIND_RELATION */\n\tviewinfo->relkind = 'r';\t/* RELKIND_RELATION */\n+\tviewinfo->relreplident = 'd';\t/* REPLICA_IDENTITY_DEFAULT */\n\nThat's mighty ugly but it doesn't seem to carry any particular\nrisk.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Oct 2019 11:20:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "I wrote:\n> Alex Williams <valenceshell@protonmail.com> writes:\n>> [ gripes about pg_dump printing REPLICA IDENTITY NOTHING for a view ]\n\n> This is fixed in v10 and up thanks to d8c05aff5. 
I was hesitant to\n> back-patch that at the time, but now that it's survived in the field\n> for a couple years, I think a good case could be made for doing so.\n> After a bit of looking around, the main argument I can find against\n> it is that emitting 'CREATE OR REPLACE VIEW' in a dropStmt will\n> break pg_restore versions preceding this commit:\n\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Branch: master Release: REL_10_BR [ac888986f] 2016-11-17 14:59:13 -0500\n> Branch: REL9_6_STABLE Release: REL9_6_2 [0eaa5118a] 2016-11-17 14:59:19 -0500\n> Branch: REL9_5_STABLE Release: REL9_5_6 [a7864037d] 2016-11-17 14:59:23 -0500\n> Branch: REL9_4_STABLE Release: REL9_4_11 [e69b532be] 2016-11-17 14:59:26 -0500\n\n> Improve pg_dump/pg_restore --create --if-exists logic.\n\nAfter further digging, I remembered that we bumped the archive file\nversion number in 3d2aed664 et al. to fix CVE-2018-1058. So current\nversions of pg_dump already emit archive files that will be rejected\nby pg_restore versions preceding the above fix, and so there should be\nno downside to emitting data that depends on it.\n\nI'll go see about backpatching d8c05aff5.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Oct 2019 15:03:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "Hi,\n\nOn 2019-10-10 11:20:14 -0400, Tom Lane wrote:\n> regression=# create table mytab (f1 int primary key, f2 text);\n> CREATE TABLE\n> regression=# create view myview as select * from mytab group by f1;\n> CREATE VIEW\n> \n> This situation is problematic for pg_dump because validity of the\n> view depends on the existence of mytab's primary key constraint,\n> and we don't create primary keys till late in the restore process.\n> So it has to break myview into two parts, one to emit during normal\n> table/view creation and one to emit after index creation.\n> \n> With 9.5's 
pg_dump, what comes out is:\n> \n> --\n> -- Name: myview; Type: TABLE; Schema: public; Owner: postgres\n> --\n> \n> CREATE TABLE public.myview (\n> f1 integer,\n> f2 text\n> );\n> \n> ALTER TABLE ONLY public.myview REPLICA IDENTITY NOTHING;\n\nIck.\n\n\n> The reason we get \"REPLICA IDENTITY NOTHING\" is that a view's relreplident\n> is set to 'n' not 'd', which might not have been a great choice.\n\nHm, yea. I wonder if we should add a REPLICA_IDENTITY_INVALID or such,\nfor non relation relkinds? I'm mildly inclined to think that setting it\nto REPLICA_IDENTITY_DEFAULT is at least as confusing as\nREPLICA_IDENTITY_NOTHING...\n\n\n> This is fixed in v10 and up thanks to d8c05aff5. I was hesitant to\n> back-patch that at the time, but now that it's survived in the field\n> for a couple years, I think a good case could be made for doing so.\n\n+1\n\n\n> \t/* pretend view is a plain table and dump it that way */\n> \tviewinfo->relkind = 'r';\t/* RELKIND_RELATION */\n> +\tviewinfo->relreplident = 'd';\t/* REPLICA_IDENTITY_DEFAULT */\n> \n> That's mighty ugly but it doesn't seem to carry any particular\n> risk.\n\nI also could live with this, given it'd only be in older back-branches.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 26 Oct 2019 13:56:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-10-10 11:20:14 -0400, Tom Lane wrote:\n>> The reason we get \"REPLICA IDENTITY NOTHING\" is that a view's relreplident\n>> is set to 'n' not 'd', which might not have been a great choice.\n\n> Hm, yea. I wonder if we should add a REPLICA_IDENTITY_INVALID or such,\n> for non relation relkinds? 
I'm mildly inclined to think that setting it\n> to REPLICA_IDENTITY_DEFAULT is at least as confusing as\n> REPLICA_IDENTITY_NOTHING...\n\nYeah, I'd be for that in HEAD probably. But of course we can't change\nthe 9.x branches like that.\n\n>> This is fixed in v10 and up thanks to d8c05aff5. I was hesitant to\n>> back-patch that at the time, but now that it's survived in the field\n>> for a couple years, I think a good case could be made for doing so.\n\n> +1\n\nJust finishing up the back-patch now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Oct 2019 17:18:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump compatibility level / use create view instead of create\n table/rule" } ]
[ { "msg_contents": "Hi all,\n\nALTER SYSTEM currently does not raise error upon invalid entry. Take for example:\n\nalter system set superuser_reserved_connections = 10;\n> ALTER SYSTEM\nalter system set max_connections = 5;\n> ALTER SYSTEM\n\nThe database will now fail to restart without manual intervention by way of editing the autoconf file (which says \"# Do not edit this file manually!\" on its first line).\n\nThe attached WIP patch is intended to raise an error on invalid ALTER SYSTEM commands before being written out to the filesystem. Hopefully this behavior is more intuitive.\n\nThanks\n--\nJordan Deitch\nhttps://id.rsa.pub/", "msg_date": "Tue, 08 Oct 2019 23:12:17 -0400", "msg_from": "\"Jordan Deitch\" <jd@rsa.pub>", "msg_from_op": true, "msg_subject": "WIP: raise error when submitting invalid ALTER SYSTEM command" }, { "msg_contents": "\"Jordan Deitch\" <jd@rsa.pub> writes:\n> ALTER SYSTEM currently does not raise error upon invalid entry.\n\nYou mean on invalid combinations of entries.\n\n Take for example:\n> alter system set superuser_reserved_connections = 10;\n> ALTER SYSTEM\n> alter system set max_connections = 5;\n> ALTER SYSTEM\n\n> The database will now fail to restart without manual intervention by way of editing the autoconf file (which says \"# Do not edit this file manually!\" on its first line).\n\nYeah. That's unfortunate, but ...\n\n> The attached WIP patch is intended to raise an error on invalid ALTER SYSTEM commands before being written out to the filesystem. Hopefully this behavior is more intuitive.\n\nThere's no chance that you can make this work. We've had unpleasant\nexperiences with previous attempts to implement cross-checks between\nGUC variables; in general, they created more problems than they fixed.\n\nA specific issue with what you've got here is that it only checks values\nthat are proposed to be put into postgresql.auto.conf, without regard to\nother value sources such as postgresql.conf or built-in defaults. 
You\nalso managed to break the system's defenses against invalid combinations\nthat arise from such other sources --- taking out those error checks in\nPostmasterMain is completely unsafe.\n\nAlso, even if you believe that O(N^2) behavior isn't a problem, this\nprogramming approach doesn't scale to cases where more than two variables\ncontribute to an issue. Somewhere around O(N^3) or O(N^4) there is\ndefinitely going to be a threshold of pain. This aspect doesn't seem\nthat hard to fix ... but it's just an efficiency issue, and doesn't\nspeak at all to the fundamental problem that you don't have enough\nvisibility into what the next postmaster run will be seeing.\n\nAlso, from a code maintenance standpoint, having code in\nAlterSystemSetConfigFile that tries to know all about not only specific\nGUCs, but every possible combination of specific GUCs, is just not going\nto be maintainable. (The real underlying problem there is that those\nchecks in PostmasterMain are merely the tip of the iceberg of error\nconditions that might cause a postmaster startup failure.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Oct 2019 01:31:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WIP: raise error when submitting invalid ALTER SYSTEM command" } ]
[ { "msg_contents": "Hi,\n\nThere is the following description in wal.sgml.\n\n Segment\n files are given ever-increasing numbers as names, starting at\n <filename>000000010000000000000000</filename>.\n\nBut the first WAL segment file that initdb creates is 000000010000000000000001\nnot 000000010000000000000000. This change was caused by old commit 8c843fff2d,\nbut seems the documentation had not been updated unfortunately a long time.\nAttached patch replaces 000000010000000000000000 with 000000010000000000000001\nin the above description.\n\nThis patch needs to be back-patched to all the supported versions.\n\nRegards,\n\n-- \nFujii Masao", "msg_date": "Wed, 9 Oct 2019 14:48:50 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "First WAL segment file that initdb creates" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\nThe issue has been verified using below steps:\r\n1. $ initdb -D /home/test/PG122DATA/data\r\n2. $ ls -l /home/test/PG122DATA/data/pg_wal/\r\ntotal 16388\r\n-rw------- 1 test test 16777216 Feb 18 12:07 000000010000000000000001\r\ndrwx------ 2 test test 4096 Feb 18 12:07 archive_status\r\n\r\nThe first WAL segment file created by initdb is \"000000010000000000000001\", not \"000000010000000000000000\".", "msg_date": "Tue, 18 Feb 2020 20:26:43 +0000", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: First WAL segment file that initdb creates" }, { "msg_contents": "\n\nOn 2020/02/19 5:26, David Zhang wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: not tested\n> Implements feature: not tested\n> Spec compliant: not tested\n> Documentation: tested, passed\n> \n> The issue has been verified using below steps:\n> 1. 
$ initdb -D /home/test/PG122DATA/data\n> 2. $ ls -l /home/test/PG122DATA/data/pg_wal/\n> total 16388\n> -rw------- 1 test test 16777216 Feb 18 12:07 000000010000000000000001\n> drwx------ 2 test test 4096 Feb 18 12:07 archive_status\n> \n> The first WAL segment file created by initdb is \"000000010000000000000001\", not \"000000010000000000000000\".\n\nThanks for the test! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 3 Mar 2020 12:29:12 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: First WAL segment file that initdb creates" } ]
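The doc fix above follows directly from how WAL segment file names are composed. The sketch below imitates PostgreSQL's XLogFileName() macro for the default 16 MB segment size; the constants reflect my reading of the server source and should be treated as illustrative rather than authoritative:

```python
WAL_SEGMENT_SIZE = 16 * 1024 * 1024                      # default 16 MB segments
SEGMENTS_PER_XLOGID = 0x1_0000_0000 // WAL_SEGMENT_SIZE  # 256 segments per 4 GB "xlogid"

def wal_file_name(timeline: int, segno: int) -> str:
    """Build the 24-hex-digit segment name: timeline, then high/low halves of segno."""
    return "%08X%08X%08X" % (timeline,
                             segno // SEGMENTS_PER_XLOGID,
                             segno % SEGMENTS_PER_XLOGID)

# initdb starts at segment number 1 on timeline 1, hence the documented name:
print(wal_file_name(1, 1))   # -> 000000010000000000000001
```

Segment number 0 would give the old documented name 000000010000000000000000, which is exactly the file initdb has not created since commit 8c843fff2d.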
[ { "msg_contents": "Hi all,\n\nAfter the set of issues discussed here, it seems to me that it would\nbe a good thing to have some safeguards against incorrect flags when\nopening a fd which would be used for fsync():\nhttps://www.postgresql.org/message-id/16039-196fc97cc05e141c@postgresql.org\n\nAttached is a patch aimed at doing that. Historically O_RDONLY is 0,\nso when looking at a directory we just need to make sure that no write\nflags are used. For files, that's the contrary, a write flag has to\nbe used.\n\nThoughts or better ideas?\n\nThanks,\n--\nMichael", "msg_date": "Wed, 9 Oct 2019 15:26:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Safeguards against incorrect fd flags for fsync()" }, { "msg_contents": "On 10/8/19 11:26 PM, Michael Paquier wrote:\n> Hi all,\n> \n> After the set of issues discussed here, it seems to me that it would\n> be a good thing to have some safeguards against incorrect flags when\n> opening a fd which would be used for fsync():\n> https://www.postgresql.org/message-id/16039-196fc97cc05e141c@postgresql.org\n> \n> Attached is a patch aimed at doing that. Historically O_RDONLY is 0,\n> so when looking at a directory we just need to make sure that no write\n> flags are used. For files, that's the contrary, a write flag has to\n> be used.\n> \n> Thoughts or better ideas?\n\nThe code and comments don't clearly indicate what you have said in the \nemail, that you are verifying directories are opened read-only and files \nare opened either read-write or write-only. 
I'd recommend changing the \ncomments a bit to make that clearer.\n\nI would also rearrange the code a little, as it is slightly clearer to read:\n\n\tif (x)\n\t\t/* directory stuff */\n\telse\n\t\t/* file stuff */\n\nthan as you have it:\n\n\tif (!x)\n\t\t/* file stuff */\n\telse\n\t\t/* directory stuff */\n\nbecause it takes slightly less time for somebody reading the code when \nthey don't have to think about the negation of x.\n\nI'm a little uncertain about ignoring fstat errors as you do, but left \nthat part of the logic alone. I understand that any fstat error will \nlikely be immediately followed by another error when the fsync is \nattempted, but relying on that seems vaguely similar to the security \nvulnerability of checking permissions and then opening a file as two \nseparate operations. Not sure the analogy actually holds for fstat \nbefore fsync, though.\n\nAttached is a revised version of the patch. Perhaps you can check what \nI've done and tell me if I've broken it.\n\n\n-- \nMark Dilger", "msg_date": "Thu, 7 Nov 2019 13:57:57 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Safeguards against incorrect fd flags for fsync()" }, { "msg_contents": "On Thu, Nov 07, 2019 at 01:57:57PM -0800, Mark Dilger wrote:\n> The code and comments don't clearly indicate what you have said in the\n> email, that you are verifying directories are opened read-only and files are\n> opened either read-write or write-only. 
I'd recommend changing the comments\n> a bit to make that clearer.\n\nThanks for the suggestions, sounds fine to me.\n\n> I would also rearrange the code a little, as it is slightly clearer to read:\n> \n> \tif (x)\n> \t\t/* directory stuff */\n> \telse\n> \t\t/* file stuff */\n> \n> than as you have it:\n> \n> \tif (!x)\n> \t\t/* file stuff */\n> \telse\n> \t\t/* directory stuff */\n\nThe check order in the former patch is consistent with what's done at\nthe top of fsync_fname_ext(), still I can see your point. So let's do\nas you suggest.\n\n> I'm a little uncertain about ignoring fstat errors as you do, but left that\n> part of the logic alone. I understand that any fstat error will likely be\n> immediately followed by another error when the fsync is attempted, but\n> relying on that seems vaguely similar to the security vulnerability of\n> checking permissions and then opening a file as two separate operations.\n> Not sure the analogy actually holds for fstat before fsync, though.\n\nThe only possible error which could be expected here would be a ENOENT\nso we could filter after that, but fsync() would most likely complain\nabout that so it sounds better to let it do its work with its own\nlogging, which would be more helpful for the user, if of course we\nhave fsync=on in postgresql.conf.\n\n> Attached is a revised version of the patch. Perhaps you can check what I've\n> done and tell me if I've broken it.\n\nThanks for the review. I was wondering why I did not do that as well\nfor file_utils.c, just to find out that fsync_fname() is the only\nentry point in file_utils.c. Anyway, the patch had a problem\nregarding fcntl() which is not available on Windows (see for example\npg_set_noblock in noblock.c). Performing the sanity check will allow\nto catch any problems for all platforms we support, so let's just skip\nit for Windows. For this reason it is better as well to update errno\nto 0 after the fstat() call. Who knows... 
Attached is an updated\nversion, with your changes included. How does that look?\n--\nMichael", "msg_date": "Mon, 25 Nov 2019 11:28:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Safeguards against incorrect fd flags for fsync()" }, { "msg_contents": "\n\nOn 11/24/19 6:28 PM, Michael Paquier wrote:\n> On Thu, Nov 07, 2019 at 01:57:57PM -0800, Mark Dilger wrote:\n>> The code and comments don't clearly indicate what you have said in the\n>> email, that you are verifying directories are opened read-only and files are\n>> opened either read-write or write-only. I'd recommend changing the comments\n>> a bit to make that clearer.\n> \n> Thanks for the suggestions, sounds fine to me.\n> \n>> I would also rearrange the code a little, as it is slightly clearer to read:\n>>\n>> \tif (x)\n>> \t\t/* directory stuff */\n>> \telse\n>> \t\t/* file stuff */\n>>\n>> than as you have it:\n>>\n>> \tif (!x)\n>> \t\t/* file stuff */\n>> \telse\n>> \t\t/* directory stuff */\n> \n> The check order in the former patch is consistent with what's done at\n> the top of fsync_fname_ext(), still I can see your point. So let's do\n> as you suggest.\n> \n>> I'm a little uncertain about ignoring fstat errors as you do, but left that\n>> part of the logic alone. 
I understand that any fstat error will likely be\n>> immediately followed by another error when the fsync is attempted, but\n>> relying on that seems vaguely similar to the security vulnerability of\n>> checking permissions and then opening a file as two separate operations.\n>> Not sure the analogy actually holds for fstat before fsync, though.\n> \n> The only possible error which could be expected here would be a ENOENT\n> so we could filter after that, but fsync() would most likely complain\n> about that so it sounds better to let it do its work with its own\n> logging, which would be more helpful for the user, if of course we\n> have fsync=on in postgresql.conf.\n> \n>> Attached is a revised version of the patch. Perhaps you can check what I've\n>> done and tell me if I've broken it.\n> \n> Thanks for the review. I was wondering why I did not do that as well\n> for file_utils.c, just to find out that fsync_fname() is the only\n> entry point in file_utils.c. Anyway, the patch had a problem\n> regarding fcntl() which is not available on Windows (see for example\n> pg_set_noblock in noblock.c). Performing the sanity check will allow\n> to catch any problems for all platforms we support, so let's just skip\n> it for Windows. For this reason it is better as well to update errno\n> to 0 after the fstat() call. Who knows... Attached is an updated\n> version, with your changes included. How does that look?\n\nThat looks great, thank you, but I have not tested it yet. 
I'll go do\nthat now....\n\n-- \nMark Dilger\n\n\n", "msg_date": "Sun, 24 Nov 2019 18:53:35 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Safeguards against incorrect fd flags for fsync()" }, { "msg_contents": "\n\nOn 11/24/19 6:53 PM, Mark Dilger wrote:\n> \n> \n> On 11/24/19 6:28 PM, Michael Paquier wrote:\n>> On Thu, Nov 07, 2019 at 01:57:57PM -0800, Mark Dilger wrote:\n>>> The code and comments don't clearly indicate what you have said in the\n>>> email, that you are verifying directories are opened read-only and \n>>> files are\n>>> opened either read-write or write-only. I'd recommend changing the \n>>> comments\n>>> a bit to make that clearer.\n>>\n>> Thanks for the suggestions, sounds fine to me.\n>>\n>>> I would also rearrange the code a little, as it is slightly clearer \n>>> to read:\n>>>\n>>> \tif (x)\n>>> \t\t/* directory stuff */\n>>> \telse\n>>> \t\t/* file stuff */\n>>>\n>>> than as you have it:\n>>>\n>>> \tif (!x)\n>>> \t\t/* file stuff */\n>>> \telse\n>>> \t\t/* directory stuff */\n>>\n>> The check order in the former patch is consistent with what's done at\n>> the top of fsync_fname_ext(), still I can see your point. So let's do\n>> as you suggest.\n>>\n>>> I'm a little uncertain about ignoring fstat errors as you do, but \n>>> left that\n>>> part of the logic alone. I understand that any fstat error will \n>>> likely be\n>>> immediately followed by another error when the fsync is attempted, but\n>>> relying on that seems vaguely similar to the security vulnerability of\n>>> checking permissions and then opening a file as two separate operations.\n>>> Not sure the analogy actually holds for fstat before fsync, though.\n>>\n>> The only possible error which could be expected here would be a ENOENT\n>> so we could filter after that, but fsync() would most likely complain\n>> about that so it sounds better to let it do its work with its own\n>> logging, which would be
more helpful for the user, if of course we\n>> have fsync=on in postgresql.conf.\n>>\n>>> Attached is a revised version of the patch. Perhaps you can check \n>>> what I've\n>>> done and tell me if I've broken it.\n>>\n>> Thanks for the review. I was wondering why I did not do that as well\n>> for file_utils.c, just to find out that fsync_fname() is the only\n>> entry point in file_utils.c. Anyway, the patch had a problem\n>> regarding fcntl() which is not available on Windows (see for example\n>> pg_set_noblock in noblock.c). Performing the sanity check will allow\n>> to catch any problems for all platforms we support, so let's just skip\n>> it for Windows. For this reason it is better as well to update errno\n>> to 0 after the fstat() call. Who knows... Attached is an updated\n>> version, with your changes included. How does that look?\n> \n> That looks great, thank you, but I have not tested it yet. I'll go do\n> that now....\n\nOk, it passes all regression tests, and I played around with\nintentionally breaking the code to open file descriptors in\nthe wrong mode. The assertion appears to work as intended.\n\nI'd say this is ready for commit.\n\n-- \nMark Dilger\n\n\n", "msg_date": "Sun, 24 Nov 2019 20:18:38 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Safeguards against incorrect fd flags for fsync()" }, { "msg_contents": "On Sun, Nov 24, 2019 at 08:18:38PM -0800, Mark Dilger wrote:\n> Ok, it passes all regression tests, and I played around with\n> intentionally breaking the code to open file descriptors in\n> the wrong mode. The assertion appears to work as intended.\n> \n> I'd say this is ready for commit.\n\nThanks for the review.
I'll look at that pretty soon.\n--\nMichael", "msg_date": "Mon, 25 Nov 2019 16:18:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Safeguards against incorrect fd flags for fsync()" }, { "msg_contents": "On Mon, Nov 25, 2019 at 04:18:33PM +0900, Michael Paquier wrote:\n> Thanks for the review. I'll look at that pretty soon.\n\nTweaked a bit the comment block added, and committed. Thanks Mark for\nthe input!\n--\nMichael", "msg_date": "Tue, 26 Nov 2019 13:34:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Safeguards against incorrect fd flags for fsync()" } ]
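The committed sanity check is easy to prototype outside the tree. The sketch below re-creates the idea on POSIX in a few lines: fetch the open flags with fcntl(F_GETFL), then require that directories carry no write flag (O_RDONLY being historically 0) while plain files carry one. Function and variable names here are invented for illustration — this is not the code that was committed:

```python
import fcntl
import os
import stat
import tempfile

def assert_fsync_flags(fd: int) -> None:
    """Mimic the discussed check: directories read-only, files opened for writing."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    if stat.S_ISDIR(os.fstat(fd).st_mode):
        # O_RDONLY is 0, so "read-only" just means no write bit is set.
        assert flags & (os.O_WRONLY | os.O_RDWR) == 0
    else:
        # fsyncing a plain file only makes sense if we could have written to it.
        assert flags & (os.O_WRONLY | os.O_RDWR) != 0

fd, path = tempfile.mkstemp()      # mkstemp opens the file O_RDWR: passes
assert_fsync_flags(fd)
os.close(fd)
os.unlink(path)

dirfd = os.open(tempfile.gettempdir(), os.O_RDONLY)  # read-only directory: passes
assert_fsync_flags(dirfd)
os.close(dirfd)
```

Opening a plain file O_RDONLY and passing its descriptor would trip the first assertion, which is precisely the incorrect-flag case the thread set out to guard against.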
[ { "msg_contents": "When dealing with a case where a 2TB table had 3 billion dead tuples I\ndiscovered that vacuum currently can't make use of more than 1GB of\nmaintenance_work_mem - 179M tuples.
This caused excessive amounts of index\n>scanning even though there was plenty of memory available.\n>\n>I didn't see any good reason for having this limit, so here is a patch that\n>makes use of MemoryContextAllocHuge, and converts the array indexing to use\n>size_t to lift a second limit at 12GB.\n>\n>One potential problem with allowing larger arrays is that bsearch might no\n>longer be the best way of determining if a ctid was marked dead. It might\n>pay off to convert the dead tuples array to a hash table to avoid O(n log\n>n) runtime when scanning indexes. I haven't done any profiling yet to see\n>how big of a problem this is.\n>\n>Second issue I noticed is that the dead_tuples array is always allocated\n>max allowed size, unless the table can't possibly have that many tuples. It\n>may make sense to allocate it based on estimated number of dead tuples and\n>resize if needed.\n>\n\nThere already was a attempt to make this improvement, see [1]. There was\na fairly long discussion about how to best do that (using other data\nstructure, not just a simple array). It kinda died about a year ago, but\nI suppose there's a lot of relevant info in that thread.\n\n[1] https://www.postgresql.org/message-id/CAGTBQpbDCaR6vv9%3DscXzuT8fSbckf%3Da3NgZdWFWZbdVugVht6Q%40mail.gmail.com\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 10 Oct 2019 16:05:08 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Remove size limitations of vacuums dead_tuples array" }, { "msg_contents": "On Thu, 10 Oct 2019 at 17:05, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> There already was a attempt to make this improvement, see [1]. There was\n> a fairly long discussion about how to best do that (using other data\n> structure, not just a simple array). 
It kinda died about a year ago, but\n> I suppose there's a lot of relevant info in that thread.\n>\n> [1]\n> https://www.postgresql.org/message-id/CAGTBQpbDCaR6vv9%3DscXzuT8fSbckf%3Da3NgZdWFWZbdVugVht6Q%40mail.gmail.com\n\n\nThanks for the pointer, wow that's a long thread. For some reason it did\nnot consider lifting the INT_MAX tuples/12GB limitation. I'll see if I can\npick up where that thread left off and push it along.\n\nRegards,\nAnts Aasma\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Fri, 11 Oct 2019 16:49:11 +0300", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Remove size limitations of vacuums dead_tuples array" }, { "msg_contents": "On Fri, Oct 11, 2019 at 04:49:11PM +0300, Ants Aasma wrote:\n> Thanks for the pointer, wow that's a long thread. For some reason it did\n> not consider lifting the INT_MAX tuples/12GB limitation. I'll see if I can\n> pick up where that thread left off and push it along.\n\nHmm. Okay..
Then I have marked this entry as returned with feedback\nin this CF.\n--\nMichael", "msg_date": "Sat, 30 Nov 2019 11:20:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove size limitations of vacuums dead_tuples array" } ]
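The 179M-tuple and 12GB ceilings quoted in the thread fall out of two constants: vacuum stores one 6-byte ItemPointerData per dead tuple, a single palloc is capped at MaxAllocSize (just under 1 GB), and the array index was a plain int. A quick arithmetic check (the constant values are assumptions from my reading of the PostgreSQL source, not quoted in the thread):

```python
MAX_ALLOC_SIZE = 0x3FFFFFFF        # PostgreSQL's MaxAllocSize, just under 1 GB
ITEM_POINTER_BYTES = 6             # sizeof(ItemPointerData), one entry per dead tuple
INT_MAX = 2**31 - 1

# Cap imposed by the 1 GB palloc limit -- the "179M tuples" in the thread:
max_tuples_1gb = MAX_ALLOC_SIZE // ITEM_POINTER_BYTES
print(max_tuples_1gb)   # 178956970, i.e. ~179M

# Cap that remains once huge allocations are allowed but indexing stays int:
print(INT_MAX * ITEM_POINTER_BYTES / 2**30)   # ~12.0 GB -- the "second limit"
```

So lifting MaxAllocSize alone buys roughly a 12x headroom, and switching the index to size_t is what removes the second ceiling.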
[ { "msg_contents": "Dear all postgresql developers,\n\nI have tested postgres v11 against TCP Wrappers but it does not respond \nto TCP wrappers port blocking.\n\nMay I suggest the community to have postgres to work with TCP wrappers.?? \nIts security will be better.\n\n\nRegards,\nTimmy\n\n\n\n\n", "msg_date": "Thu, 10 Oct 2019 03:54:55 +0800", "msg_from": "Timmy Siu <timmy.siu@aol.com>", "msg_from_op": true, "msg_subject": "TCP Wrappers" }, { "msg_contents": "On Wed, Oct 9, 2019 at 12:56 PM Timmy Siu <timmy.siu@aol.com> wrote:\n\n> Dear all postgresql developers,\n>\n> I have tested postgres v11 against TCP Wrappers but it does not respond\n> to TCP wrappers port blocking.\n>\n> May I suggest the community to have postgres to work with TCP wrappers.??\n> Its security will be better.\n>\n>\nThe last stable release of TCP Wrappers was a couple decades ago. It's\ndeprecated in RHEL7 and removed in RHEL8. I'm not a PG core member or\nanything but rather doubt that's an area where the developers will want to\nexpend effort.\n\nCheers,\nSteve\n\n", "msg_date": "Wed, 9 Oct 2019 15:39:11 -0700", "msg_from": "Steve Crawford <scrawford@pinpointresearch.com>", "msg_from_op": false, "msg_subject": "Re: TCP Wrappers" }, { "msg_contents": "Yeah, why bother.
Even ’native’ encryption/SSL in PG (were one to use it ‘natively’, as we do) is so good; adding yet another layer seems overkill…\n\nLou Picciano\n\n> On Oct 9, 2019, at 6:39 PM, Steve Crawford <scrawford@pinpointresearch.com> wrote:\n> \n> \n> On Wed, Oct 9, 2019 at 12:56 PM Timmy Siu <timmy.siu@aol.com <mailto:timmy.siu@aol.com>> wrote:\n> Dear all postgresql developers,\n> \n> I have tested postgres v11 against TCP Wrappers but it does not respond \n> to TCP wrappers port blocking.\n> \n> May I suggest the community to have postgres to work with TCP wrappers.?? \n> Its security will be better.\n> \n> \n> The last stable release of TCP Wrappers was a couple decades ago. It's deprecated in RHEL7 and removed in RHEL8. I'm not a PG core member or anything but rather doubt that's an area where the developers will want to expend effort.\n> \n> Cheers,\n> Steve\n> \n\n\n", "msg_date": "Wed, 9 Oct 2019 18:50:20 -0400", "msg_from": "Lou Picciano <LouPicciano@comcast.net>", "msg_from_op": false, "msg_subject": "Re: TCP Wrappers" }, { "msg_contents": "Steve Crawford <scrawford@pinpointresearch.com> writes:\n> On Wed, Oct 9, 2019 at 12:56 PM Timmy Siu <timmy.siu@aol.com> wrote:\n>> May I suggest the community to have postgres to work with TCP wrappers.??\n>> Its security will be better.\n\n> The last stable release of TCP Wrappers was a couple decades ago. It's\n> deprecated in RHEL7 and removed in RHEL8. I'm not a PG core member or\n> anything but rather doubt that's an area where the developers will want to\n> expend effort.\n\nYeah. In a quick dig through the project archives, I can find exactly\none prior suggestion that we should do this, and that email is old\nenough to drink:\n\nhttps://www.postgresql.org/message-id/v0313030fb141b1665de9%40%5B137.78.218.94%5D\n\nThat doesn't bode well for the number of people who would use or care\nabout such a feature.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Oct 2019 19:15:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TCP Wrappers" }, { "msg_contents": "On Thu, 10 Oct 2019 at 07:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> That doesn't bode well for the number of people who would use or care\n> about such a feature.\n>\n\nAgreed. tcp_wrappers predates the widespread availability of easy,\neffective software firewalls. Back when services listened on 0.0.0.0 and if\nyou were lucky you had ipfwadm, tcp_wrappers made a lot of sense.
Now it's\nIMO a pointless layer of additional complexity that no longer serves a\npurpose.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n", "msg_date": "Thu, 10 Oct 2019 07:29:08 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: TCP Wrappers" } ]
[ { "msg_contents": "Hello, While I'm moving to CentOS8 environment, I got stuck at\n./configure with the following error.\n\nconfigure: error: libperl library is requred for Perl\n\nIt complains that it needs -fPIC.\n\nConfigure uses only $Config{ccflags}, but it seems that\n$Config{cccdlflags} is also required. The attached patch make\n./configure success.
(configure itself is excluded in the patch.)\n>\n\n\n./configure --with-perl\n\n\nis working for me on Centos8 (double checked after a `dnf update`)\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 10 Oct 2019 09:22:48 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 10/10/19 1:46 AM, Kyotaro Horiguchi wrote:\n>> Hello, While I'm moving to CentOS8 environment, I got stuck at\n>> ./configure with the following error.\n>> configure: error: libperl library is requred for Perl\n>> It complains that it needs -fPIC.\n>> Configure uses only $Config{ccflags}, but it seems that\n>> $Config{cccdlflags} is also required. The attached patch make\n>> ./configure success. (configure itself is excluded in the patch.)\n\n> ./configure --with-perl\n> is working for me on Centos8 (double checked after a `dnf update`)\n\nYeah, I'm quite suspicious of this too. Although we don't seem to have\nany buildfarm members covering exactly RHEL8/CentOS8, we have enough\ncoverage of different Fedora releases to make it hard to believe that\nwe missed any changes in Red Hat's packaging of Perl.\n\nIs this error perhaps occurring with a non-vendor Perl installation?\nWhat's the exact error message? config.log might contain some useful\nclues, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Oct 2019 11:51:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "Hi. 
Sorry for the delay.\n\nAt Thu, 10 Oct 2019 11:51:21 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On 10/10/19 1:46 AM, Kyotaro Horiguchi wrote:\n> >> Hello, While I'm moving to CentOS8 environment, I got stuck at\n> >> ./configure with the following error.\n> >> configure: error: libperl library is requred for Perl\n> >> It complains that it needs -fPIC.\n> >> Configure uses only $Config{ccflags}, but it seems that\n> >> $Config{cccdlflags} is also required. The attached patch make\n> >> ./configure success. (configure itself is excluded in the patch.)\n> \n> > ./configure --with-perl\n> > is working for me on Centos8 (double checked after a `dnf update`)\n> \n> Yeah, I'm quite suspicious of this too. Although we don't seem to have\n> any buildfarm members covering exactly RHEL8/CentOS8, we have enough\n> coverage of different Fedora releases to make it hard to believe that\n> we missed any changes in Red Hat's packaging of Perl.\n> \n> Is this error perhaps occurring with a non-vendor Perl installation?\n> What's the exact error message? config.log might contain some useful\n> clues, too.\n\nThe perl package is official one. 
I found the cause, that's my\nmistake.\n\nThe problematic command line boils down to:\n\n$ ./configure --with-perl CFLAGS=-O0\n\nIt is bash-aliased and survived without being found for a long time on\nmy Cent7 environment, but CentOS8 doesn't allow that.\n\nBy the way, is there any official way to specify options like -O0\nwhile configure time?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n\n", "msg_date": "Tue, 15 Oct 2019 19:45:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "On Tue, Oct 15, 2019 at 6:45 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n\n> >\n> > Is this error perhaps occurring with a non-vendor Perl installation?\n> > What's the exact error message? config.log might contain some useful\n> > clues, too.\n>\n> The perl package is official one. I found the cause, that's my\n> mistake.\n>\n> The problematic command line boils down to:\n>\n> $ ./configure --with-perl CFLAGS=-O0\n>\n> It is bash-aliased and survived without being found for a long time on\n> my Cent7 environment, but CentOS8 doesn't allow that.\n>\n> By the way, is there any official way to specify options like -O0\n> while configure time?\n>\n\n\nCFLAGS=-O0 configure --with-perl ...\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 16 Oct 2019 07:38:40 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On Tue, Oct 15, 2019 at 6:45 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> The problematic command line boils down to:\n>> $ ./configure --with-perl CFLAGS=-O0\n>> It is bash-aliased and survived 
without being found for a long time on\n>> my Cent7 environment, but CentOS8 doesn't allow that.\n\nI don't quite understand why that wouldn't work.\n\n>> By the way, is there any official way to specify options like -O0\n>> while configure time?\n\n> CFLAGS=-O0 configure --with-perl ...\n\nThe way Horiguchi-san did it has been supported by autoconf for\na good long time now, so I don't think command line syntax is\nthe issue. Could that CFLAGS setting be interfering with some\nfeature test in our configure script?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Oct 2019 14:32:48 +0200", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "On Wed, Oct 16, 2019 at 8:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On Tue, Oct 15, 2019 at 6:45 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> The problematic command line boils down to:\n> >> $ ./configure --with-perl CFLAGS=-O0\n> >> It is bash-aliased and survived without being found for a long time on\n> >> my Cent7 environment, but CentOS8 doesn't allow that.\n>\n> I don't quite understand why that wouldn't work.\n>\n> >> By the way, is there any official way to specify options like -O0\n> >> while configure time?\n>\n> > CFLAGS=-O0 configure --with-perl ...\n>\n> The way Horiguchi-san did it has been supported by autoconf for\n> a good long time now, so I don't think command line syntax is\n> the issue.\n\nAh.\n\n> Could that CFLAGS setting be interfering with some\n> feature test in our configure script?\n>\n>\n\n\nIt looks like you need CFLAGS='-O0 -fPIC' on CentOS 8 when building with perl.\n\ncheers\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 16 Oct 2019 10:41:19 -0400", "msg_from": "Andrew Dunstan 
<andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "Hello.\n\n# I'm almost comming back..\n\nAt Wed, 16 Oct 2019 10:41:19 -0400, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote in \n> On Wed, Oct 16, 2019 at 8:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> $ ./configure --with-perl CFLAGS=-O0\n> > >> It is bash-aliased and survived without being found for a long time on\n> > >> my Cent7 environment, but CentOS8 doesn't allow that.\n> >\n> > I don't quite understand why that wouldn't work.\n\nI'm confused with make CFLAGS=..\n\n> It looks like you need CFLAGS='-O0 -fPIC' on CentOS 8 when building with perl.\n\nYes, that is what my first mail said. I dug into it further.\n\nThe immediately problematic command generated by autoconf is:\n\n\ngcc -o /tmp/conftest -Wall -Wmissing-prototypes -Wpointer-arith \\\n -Wdeclaration-after-statement -Werror=vla -Wendif-labels \\\n -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing \\\n -fwrapv -fexcess-precision=standard -Wno-format-truncation \\\n -Wno-stringop-truncation \\\n -O0 \\\n -D_GNU_SOURCE -I/usr/lib64/perl5/CORE \\\n /tmp/conftest.c \\\n -Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld \\\n -fstack-protector-strong -L/usr/local/lib -L/usr/lib64/perl5/CORE \\\n -lperl -lpthread -lresolv -ldl -lm -lcrypt -lutil -lc\n\n/usr/bin/ld: /tmp/ccGxodNv.o: relocation R_X86_64_32 against symbol `PL_memory_wrap' can not be used when making a PIE object; recompile with -fPIC\n/usr/bin/ld: final link failed: Nonrepresentable section on output\ncollect2: error: ld returned 1 exit status\n\nVery interestingly I don't get the error when the \"-O0\" is \"-O2\". 
It\nis because gcc eliminates the PL_memory_wrap maybe by inlining.\n\nThe *final* problematic command boils down to:\n\ngcc -o /tmp/conftest /tmp/conftest_O0.o \\\n -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -lperl\n\nThat is, the problem doesn't happen without\n\"-specs=/usr/lib/rpm/redhat/redhat-hardened-ld\".\n\nI found the following bug report but it hasn't ever been fixed since\n2016 on Fedora 24. I'm not sure about the newer versions.\n\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1343892\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 17 Oct 2019 12:39:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> The immediately problematic command generated by autoconf is:\n> ...\n> /usr/bin/ld: /tmp/ccGxodNv.o: relocation R_X86_64_32 against symbol `PL_memory_wrap' can not be used when making a PIE object; recompile with -fPIC\n> /usr/bin/ld: final link failed: Nonrepresentable section on output\n> collect2: error: ld returned 1 exit status\n\n> Very interestingly I don't get the error when the \"-O0\" is \"-O2\". It\n> is because gcc eliminates the PL_memory_wrap maybe by inlining.\n\nYeah, probably so. But I don't like the idea of fixing a problem\ntriggered by user-supplied CFLAGS by injecting more cflags from\nelsewhere. That seems likely to be counterproductive, or at\nleast it risks overriding what the user wanted.\n\nCan we fix this by using something other than perl_alloc() as\nthe tested-for function? That is surely a pretty arbitrary\nchoice. 
Are there any standard Perl entry points that are just\nplain functions with no weird macro expansions?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Oct 2019 15:50:31 +0200", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "\nOn 10/18/19 9:50 AM, Tom Lane wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>> The immediately problematic command generated by autoconf is:\n>> ...\n>> /usr/bin/ld: /tmp/ccGxodNv.o: relocation R_X86_64_32 against symbol `PL_memory_wrap' can not be used when making a PIE object; recompile with -fPIC\n>> /usr/bin/ld: final link failed: Nonrepresentable section on output\n>> collect2: error: ld returned 1 exit status\n>> Very interestingly I don't get the error when the \"-O0\" is \"-O2\". It\n>> is because gcc eliminates the PL_memory_wrap maybe by inlining.\n> Yeah, probably so. But I don't like the idea of fixing a problem\n> triggered by user-supplied CFLAGS by injecting more cflags from\n> elsewhere. That seems likely to be counterproductive, or at\n> least it risks overriding what the user wanted.\n>\n> Can we fix this by using something other than perl_alloc() as\n> the tested-for function? That is surely a pretty arbitrary\n> choice. Are there any standard Perl entry points that are just\n> plain functions with no weird macro expansions?\n>\n\n\nI had a look in perl's proto.h but didn't see any obvious candidate. I\ntried a couple of others (e.g. 
Perl_get_context) and got the same result\nreported above.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 19 Oct 2019 11:55:39 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 10/18/19 9:50 AM, Tom Lane wrote:\n>> Can we fix this by using something other than perl_alloc() as\n>> the tested-for function? That is surely a pretty arbitrary\n>> choice. Are there any standard Perl entry points that are just\n>> plain functions with no weird macro expansions?\n\n> I had a look in perl's proto.h but didn't see any obvious candidate. I\n> tried a couple of others (e.g. Perl_get_context) and got the same result\n> reported above.\n\nI poked into this on a Fedora 30 installation and determined that the\nstray reference is coming from this bit in Perl's inline.h:\n\n/* saves machine code for a common noreturn idiom typically used in Newx*() */\nGCC_DIAG_IGNORE_DECL(-Wunused-function);\nstatic void\nS_croak_memory_wrap(void)\n{\n Perl_croak_nocontext(\"%s\",PL_memory_wrap);\n}\nGCC_DIAG_RESTORE_DECL;\n\nApparently, gcc is smart enough to optimize this away as unused ...\nat any optimization level higher than -O0. I confirmed that it works\nat -O0 too, if you change this function declaration to \"static inline\"\n--- but evidently, not doing so was intentional, so we won't get much\ncooperation if we propose changing it (back?) to a plain static inline.\n\nSo the failure occurs just from reading this header, independently of\nwhich particular Perl function we try to call; I'd supposed that there\nwas some connection between perl_alloc and PL_memory_wrap, but there\nisn't.\n\nI still don't much like the originally proposed patch. 
IMO it makes too\nmany assumptions about what is in Perl's ccflags, and perhaps more to the\npoint, it is testing libperl's linkability using switches we will not use\nwhen we actually build plperl. So I think there's a pretty high risk of\nbreaking other cases if we fix it that way.\n\nThe right way to fix it, likely, is to add CFLAGS_SL while performing this\nparticular autoconf test, as that would replicate the switches used for\nplperl (and it turns out that adding -fPIC does solve this problem).\nBut the configure script doesn't currently know about CFLAGS_SL, so we'd\nhave to do some refactoring to approach it that way. Moving that logic\nfrom the platform-specific Makefiles into configure doesn't seem\nunreasonable, but it would make the patch bigger.\n\nA less invasive idea is to forcibly add -O1 to CFLAGS for this autoconf\ntest. We'd have to be careful about doing so for non-gcc compilers, as\nthey might not understand that switch syntax ... but probably we could\nget away with changing CFLAGS only when using a gcc-alike. Still, that's\na hack, and it doesn't have much to recommend it other than being more\nlocalized.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Oct 2019 13:23:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "\nOn 10/20/19 1:23 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 10/18/19 9:50 AM, Tom Lane wrote:\n>>> Can we fix this by using something other than perl_alloc() as\n>>> the tested-for function? That is surely a pretty arbitrary\n>>> choice. Are there any standard Perl entry points that are just\n>>> plain functions with no weird macro expansions?\n>> I had a look in perl's proto.h but didn't see any obvious candidate. I\n>> tried a couple of others (e.g. 
Perl_get_context) and got the same result\n>> reported above.\n> I poked into this on a Fedora 30 installation and determined that the\n> stray reference is coming from this bit in Perl's inline.h:\n>\n> /* saves machine code for a common noreturn idiom typically used in Newx*() */\n> GCC_DIAG_IGNORE_DECL(-Wunused-function);\n> static void\n> S_croak_memory_wrap(void)\n> {\n> Perl_croak_nocontext(\"%s\",PL_memory_wrap);\n> }\n> GCC_DIAG_RESTORE_DECL;\n>\n> Apparently, gcc is smart enough to optimize this away as unused ...\n> at any optimization level higher than -O0. I confirmed that it works\n> at -O0 too, if you change this function declaration to \"static inline\"\n> --- but evidently, not doing so was intentional, so we won't get much\n> cooperation if we propose changing it (back?) to a plain static inline.\n>\n> So the failure occurs just from reading this header, independently of\n> which particular Perl function we try to call; I'd supposed that there\n> was some connection between perl_alloc and PL_memory_wrap, but there\n> isn't.\n\n\n\nYeah, I came to the same conclusion.\n\n\n>\n> I still don't much like the originally proposed patch. IMO it makes too\n> many assumptions about what is in Perl's ccflags, and perhaps more to the\n> point, it is testing libperl's linkability using switches we will not use\n> when we actually build plperl. So I think there's a pretty high risk of\n> breaking other cases if we fix it that way.\n\n\nAgreed.\n\n\n>\n> The right way to fix it, likely, is to add CFLAGS_SL while performing this\n> particular autoconf test, as that would replicate the switches used for\n> plperl (and it turns out that adding -fPIC does solve this problem).\n> But the configure script doesn't currently know about CFLAGS_SL, so we'd\n> have to do some refactoring to approach it that way. Moving that logic\n> from the platform-specific Makefiles into configure doesn't seem\n> unreasonable, but it would make the patch bigger.\n\n\nSounds like a plan. 
I agree it's annoying to have to do something large\nfor something so trivial.\n\n\n>\n> A less invasive idea is to forcibly add -O1 to CFLAGS for this autoconf\n> test. We'd have to be careful about doing so for non-gcc compilers, as\n> they might not understand that switch syntax ... but probably we could\n> get away with changing CFLAGS only when using a gcc-alike. Still, that's\n> a hack, and it doesn't have much to recommend it other than being more\n> localized.\n>\n> \t\t\t\n\n\nright. I think your other plan is better.\n\n\ncheers\n\n\nandrew\n\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 20 Oct 2019 15:31:06 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 10/20/19 1:23 PM, Tom Lane wrote:\n>> The right way to fix it, likely, is to add CFLAGS_SL while performing this\n>> particular autoconf test, as that would replicate the switches used for\n>> plperl (and it turns out that adding -fPIC does solve this problem).\n>> But the configure script doesn't currently know about CFLAGS_SL, so we'd\n>> have to do some refactoring to approach it that way. Moving that logic\n>> from the platform-specific Makefiles into configure doesn't seem\n>> unreasonable, but it would make the patch bigger.\n\n> Sounds like a plan. I agree it's annoying to have to do something large\n> for something so trivial.\n\nTurns out it's not really that bad. We just have to transfer the\nresponsibility for setting CFLAGS_SL from the platform Makefiles\nto the platform template files. (As a bonus, it'd be possible to\nallow users to override CFLAGS_SL during configure, as they can\ndo for CFLAGS. 
But I didn't mess with that here.)\n\nI checked that this fixes the Fedora build problem, but I've not\nreally tested it on any other platform. Still, there's not that\nmuch to go wrong, one would think.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 20 Oct 2019 19:36:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "\nOn 10/20/19 7:36 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 10/20/19 1:23 PM, Tom Lane wrote:\n>>> The right way to fix it, likely, is to add CFLAGS_SL while performing this\n>>> particular autoconf test, as that would replicate the switches used for\n>>> plperl (and it turns out that adding -fPIC does solve this problem).\n>>> But the configure script doesn't currently know about CFLAGS_SL, so we'd\n>>> have to do some refactoring to approach it that way. Moving that logic\n>>> from the platform-specific Makefiles into configure doesn't seem\n>>> unreasonable, but it would make the patch bigger.\n>> Sounds like a plan. I agree it's annoying to have to do something large\n>> for something so trivial.\n> Turns out it's not really that bad. We just have to transfer the\n> responsibility for setting CFLAGS_SL from the platform Makefiles\n> to the platform template files. (As a bonus, it'd be possible to\n> allow users to override CFLAGS_SL during configure, as they can\n> do for CFLAGS. But I didn't mess with that here.)\n>\n> I checked that this fixes the Fedora build problem, but I've not\n> really tested it on any other platform. 
Still, there's not that\n> much to go wrong, one would think.\n>\n> \t\t\t\n\n\n\nLGTM\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 21 Oct 2019 08:29:39 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: configure fails for perl check on CentOS8" }, { "msg_contents": "Hello.\n\nAt Mon, 21 Oct 2019 08:29:39 -0400, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote in \n> \n> On 10/20/19 7:36 PM, Tom Lane wrote:\n> > Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> >> On 10/20/19 1:23 PM, Tom Lane wrote:\n> >>> The right way to fix it, likely, is to add CFLAGS_SL while performing this\n> >>> particular autoconf test, as that would replicate the switches used for\n> >>> plperl (and it turns out that adding -fPIC does solve this problem).\n> >>> But the configure script doesn't currently know about CFLAGS_SL, so we'd\n> >>> have to do some refactoring to approach it that way. Moving that logic\n> >>> from the platform-specific Makefiles into configure doesn't seem\n> >>> unreasonable, but it would make the patch bigger.\n> >> Sounds like a plan. I agree it's annoying to have to do something large\n> >> for something so trivial.\n> > Turns out it's not really that bad. We just have to transfer the\n> > responsibility for setting CFLAGS_SL from the platform Makefiles\n> > to the platform template files. (As a bonus, it'd be possible to\n> > allow users to override CFLAGS_SL during configure, as they can\n> > do for CFLAGS. But I didn't mess with that here.)\n> >\n> > I checked that this fixes the Fedora build problem, but I've not\n> > really tested it on any other platform. Still, there's not that\n> > much to go wrong, one would think.\n> >\n> \n> LGTM\n\nHowever it's done, but it looks good to me and actually the problem is\ngone. 
Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 23 Oct 2019 15:04:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: configure fails for perl check on CentOS8" } ]
[ { "msg_contents": "Hi,\n\nWhile digging through the archives, I found a thread from a couple\nyears back about syscache performance. There was an idea [1] to\ngenerate the cache control data at compile time. That would to remove\nthe need to perform database access to complete cache initialization,\nas well as the need to check in various places whether initialization\nhas happened.\n\nIf this were done, catcache.c:InitCatCachePhase2() and\ncatcache.c:CatalogCacheInitializeCache() would disappear, and\nsyscache.c:InitCatalogCachePhase2() could be replaced by code that\nsimply opens the relations when writing new init files. Another\npossibility this opens up is making the SysCacheRelationOid and\nSysCacheSupportingRelOid arrays constant data as well.\n\n\nHere's a basic design sketch:\n\n1. Generate the current syscache cacheinfo[] array and cacheid enum by\nadding a couple arguments to the declarations for system indexes, as\nin:\n\n#define DECLARE_UNIQUE_INDEX(name,oid,oid_macro,cacheid,num_buckets,decl)\nextern int no_such_variable\n\nDECLARE_UNIQUE_INDEX(pg_amop_opr_fam_index, 2654,\nAccessMethodOperatorIndexId, AMOPOPID, 64, on pg_amop using\nbtree(amopopr oid_ops, amoppurpose char_ops, amopfamily oid_ops));\n\nDECLARE_UNIQUE_INDEX(pg_amop_oid_index, 2756,\nAccessMethodOperatorOidIndexId, -, 0, on pg_amop using btree(oid\noid_ops));\n\n...and add in data we already know how to parse from the catalog\nheaders. Note that the last example has '-' and '0' to mean \"no\ncache\". (The index oid macro is superfluous there, but kept for\nconsistency.)\n\n2. Expand the cacheinfo[] element structs with the rest of the constant data:\n\nRelname, and relisshared are straightforward. For eq/hash functions,\nwe could add metadata attributes to pg_type.dat for the relevant\ntypes. Tuple descriptors would get their attrs from schemapg.h.\n\n3. 
Simplify cat/syscache.c\n\n\nIs this something worth doing?\n\n[1] https://www.postgresql.org/message-id/1295.1507918074%40sss.pgh.pa.us\n\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 10 Oct 2019 16:30:52 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "generating catcache control data" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> While digging through the archives, I found a thread from a couple\n> years back about syscache performance. There was an idea [1] to\n> generate the cache control data at compile time. That would to remove\n> the need to perform database access to complete cache initialization,\n> as well as the need to check in various places whether initialization\n> has happened.\n\nRight.\n\n> 1. Generate the current syscache cacheinfo[] array and cacheid enum by\n> adding a couple arguments to the declarations for system indexes, as\n> in:\n> #define DECLARE_UNIQUE_INDEX(name,oid,oid_macro,cacheid,num_buckets,decl)\n> extern int no_such_variable\n\nI do not like attaching this data to the DECLARE_UNIQUE_INDEX macros.\nIt's really no business of the indexes' whether they are associated\nwith a syscache. It's *certainly* no business of theirs how many\nbuckets such a cache should start off with.\n\nI'd be inclined to make a separate file that's specifically concerned\nwith declaring syscaches, and put all the required data there.\n\n> Relname, and relisshared are straightforward. For eq/hash functions,\n> we could add metadata attributes to pg_type.dat for the relevant\n> types. 
Tuple descriptors would get their attrs from schemapg.h.\n\nI don't see a need to hard-wire more information than we do today, and\nI'd prefer not to because it adds to the burden of creating new syscaches.\nAssuming that the plan is for genbki.pl or some similar script to generate\nthe constants, it could look up all the appropriate data from the initial\ncontents for pg_opclass and friends. That is, basically what we want here\nis for a constant-creation script to perform the same lookups that're now\ndone during backend startup.\n\n> Is this something worth doing?\n\nHard to tell. It'd take a few cycles out of backend startup, which\nseems like a worthy goal; but I don't know if it'd save enough to be\nworth the trouble. Probably can't tell for sure without doing most\nof the work :-(.\n\nPerhaps you could break it up by building a hand-made copy of the\nconstants and then removing the runtime initialization code. This'd\nbe enough to get data on the performance change. Only if that looked\npromising would you need to write the Perl script to compute the\nconstants.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Oct 2019 15:14:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generating catcache control data" }, { "msg_contents": "... BTW, one other issue with changing this, at least if we want to\nprecompute tupdescs for all system catalogs used in catcaches, is that\nthat would put a very big crimp in doing runtime changes to catalogs.\nWhile we'll probably never support changes in the physical layouts\nof catalog rows, there is interest in being able to change some\nauxiliary pg_attribute fields, e.g. attstattarget [1]. So we'd need\nto be sure that the compiled-in tupdescs are only used to disassemble\ncatalog tuples, and not for other purposes.\n\nOf course this issue arises already for the bootstrap catalogs, so\nmaybe it's been dealt with sufficiently. 
But it's something to keep\nan eye on.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/8b00ea5e-28a7-88ba-e848-21528b632354%402ndquadrant.com\n\n\n", "msg_date": "Thu, 10 Oct 2019 15:55:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: generating catcache control data" }, { "msg_contents": "On Fri, Oct 11, 2019 at 3:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I do not like attaching this data to the DECLARE_UNIQUE_INDEX macros.\n> It's really no business of the indexes' whether they are associated\n> with a syscache. It's *certainly* no business of theirs how many\n> buckets such a cache should start off with.\n>\n> I'd be inclined to make a separate file that's specifically concerned\n> with declaring syscaches, and put all the required data there.\n\nThat gave me another idea that would further reduce the bookkeeping\ninvolved in creating new syscaches -- put declarations in the cache id\nenum (syscache.h), like this:\n\n#define DECLARE_SYSCACHE(cacheid,indexname,indexoid,numbuckets) cacheid\n\nenum SysCacheIdentifier\n{\nDECLARE_SYSCACHE(AGGFNOID, pg_aggregate_fnoid_index,\nAggregateFnoidIndexId, 16) = 0,\n...\n};\n\n> > Relname, and relisshared are straightforward. For eq/hash functions,\n> > we could add metadata attributes to pg_type.dat for the relevant\n> > types. Tuple descriptors would get their attrs from schemapg.h.\n>\n> I don't see a need to hard-wire more information than we do today, and\n> I'd prefer not to because it adds to the burden of creating new syscaches.\n> Assuming that the plan is for genbki.pl or some similar script to generate\n> the constants, it could look up all the appropriate data from the initial\n> contents for pg_opclass and friends. 
That is, basically what we want here\n> is for a constant-creation script to perform the same lookups that're now\n> done during backend startup.\n\nLooking at it again, the eq/hash functions are local and looked up via\nGetCCHashEqFuncs(), but the runtime of that is surely miniscule, so I\nleft it alone.\n\n> > Is this something worth doing?\n>\n> Hard to tell. It'd take a few cycles out of backend startup, which\n> seems like a worthy goal; but I don't know if it'd save enough to be\n> worth the trouble. Probably can't tell for sure without doing most\n> of the work :-(.\n\nI went ahead and did just enough to remove the relation-opening code.\nLooking in the archives, I found this as a quick test:\n\necho '\\set x 1' > x.txt\n./inst/bin/pgbench -n -C -c 1 -f x.txt -T 10\n\nTypical numbers:\n\nmaster:\nnumber of transactions actually processed: 4276\nlatency average = 2.339 ms\ntps = 427.549137 (including connections establishing)\ntps = 24082.726350 (excluding connections establishing)\n\npatch:\nnumber of transactions actually processed: 4436\nlatency average = 2.255 ms\ntps = 443.492369 (including connections establishing)\ntps = 21817.308410 (excluding connections establishing)\n\n...which amounts to nearly 4% improvement in the first tps number,\nwhich isn't earth-shattering, but it's something. Opinions? It\nwouldn't be a lot of additional work to put together a WIP patch.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 18 Oct 2019 15:51:55 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: generating catcache control data" } ]
[ { "msg_contents": "Hello,\n\nWhile giving assistance to some customer with their broker procedure, I found a\nscenario where the subscription is failing but the table are sync'ed anyway.\nHere is bash script to reproduce it with versions 10, 11 and 12 (make sure\nto set PATH correctly):\n\n # env\n PUB=/tmp/pub\n SUB=/tmp/sub\n unset PGPORT PGHOST PGDATABASE PGDATA\n export PGUSER=postgres\n\n # cleanup\n kill %1\n pg_ctl -w -s -D \"$PUB\" -m immediate stop; echo $?\n pg_ctl -w -s -D \"$SUB\" -m immediate stop; echo $?\n rm -r \"$PUB\" \"$SUB\"\n\n # cluster\n initdb -U postgres -N \"$PUB\" &>/dev/null; echo $?\n initdb -U postgres -N \"$SUB\" &>/dev/null; echo $?\n echo \"wal_level=logical\" >> \"$PUB\"/postgresql.conf\n echo \"port=5433\" >> \"$SUB\"/postgresql.conf\n pg_ctl -w -s -D $PUB -l \"$PUB\"-\"$(date +%FT%T)\".log start; echo $?\n pg_ctl -w -s -D $SUB -l \"$SUB\"-\"$(date +%FT%T)\".log start; echo $?\n pgbench -p 5432 -qi \n pg_dump -p 5432 -s | psql -qXp 5433\n\n # fake activity\n pgbench -p 5432 -T 60 -c 2 &\n\n # replication setup\n cat<<SQL | psql -p 5432 -X\n SELECT * FROM pg_create_logical_replication_slot('sub','pgoutput');\n CREATE PUBLICATION prov FOR ALL TABLES;\n SQL\n\n cat<<SQL | psql -p 5433 -X\n CREATE SUBSCRIPTION sub\n CONNECTION 'port=5432'\n PUBLICATION prov\n WITH (create_slot=false, slot_name=sub)\n SQL\n\nHere are part of the logs from the subscriber:\n\n LOG: logical replication apply worker for subscription \"sub\" has started\n LOG: logical replication table synchronization worker for subscription\n \"sub\", table \"pgbench_accounts\" has started\n LOG: logical replication table synchronization worker for subscription\n \"sub\", table \"pgbench_branches\" has started\n ERROR: could not receive data from WAL stream: ERROR: publication \"prov\"\n does not exist\nCONTEXT: slot \"sub\", output plugin \"pgoutput\", in the change callback,\n associated LSN 0/22C0138\n LOG: logical replication table synchronization worker for 
subscription\n \"sub\", table \"pgbench_branches\" has finished\n LOG: logical replication table synchronization worker for subscription\n \"sub\", table \"pgbench_accounts\" has finished\n LOG: logical replication apply worker for subscription \"sub\" has started\n ERROR: could not receive data from WAL stream: ERROR: publication \"prov\"\n does not exist \nCONTEXT: slot \"sub\", output plugin \"pgoutput\", in the change callback,\n associated LSN 0/22C0138\n\nAll tables are synch'ed while the main worker for subscription is spawned again\nand again with the same failure.\n\nAs far as I could find out, the problem here is that the slot is created\nmanually before the publication is created. When the subscriber subscribe,\nit builds a catalog cache from the slot by the time it has been created.\nThen, it couldn't find the publication, because it didn't exists in this\nold version of the catalog. Is my understanding correct?\n\nSadly, I couldn't find any documentation (neither official or in sources) about\nthe fact the slot must be created after the related publication.\n\nMoreover, it's quite illogical to find some error about a non\nexisting publication when the data are being synched AND the publication\nactually exists on the other side.\n\nI suppose this should be documented in user documentation.\n\nPlus, what about forbidding the data sync if the main worker for\nsubscription fails?\n\nRegards,\n\nPS: the customer hit the following issue as well while messing around but I\nhadn't time to find out how they did yet:\nhttps://www.postgresql.org/message-id/flat/a9139c29-7ddd-973b-aa7f-71fed9c38d75%40minerva.info\n\n\n", "msg_date": "Thu, 10 Oct 2019 11:57:52 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Logical replication dead but synching" } ]
[ { "msg_contents": "Hi hackers,\n\nI've been experimenting with pluggable storage API recently and just\nfeel like I can share my first experience. First of all it's great to\nhave this API and that now community has the opportunity to implement\nalternative storage engines. There are a few applications that come to\nmind and a compressed storage is one of them.\n\nRecently I've been working on a simple append-only compressed storage\n[1]. My first idea was to just store data into compressed 1mb blocks\nin a continuous file and keep separate file for block offsets (similar\nto Knizhnik's CFS proposal). But then i realized that then i won't be\nable to use most of postgres' infrastructure like WAL-logging and also\nwon't be able to implement some of the functions of TableAmRoutine\n(like bitmap scan or analyze). So I had to adjust extension the way to\nutilize standard postgres 8kb blocks: compressed 1mb blocks are split\ninto chunks and distributed among 8kb blocks. Current page layout\nlooks like this:\n\n┌───────────┐\n│ metapage │\n└───────────┘\n┌───────────┐ ┐\n│ block 1 │ │\n├────...────┤ │ compressed 1mb block\n│ block k │ │\n└───────────┘ ┘\n┌───────────┐ ┐\n│ block k+1 │ │\n├────...────┤ │ another compressed 1mb block\n│ block m │ │\n└───────────┘ ┘\n\nInside compressed blocks there are regular postgres heap tuples.\n\nThe following is the list of things i stumbled upon while implementing\nstorage. Since API is just came out there are not many examples of\npluggable storages and even less as external extensions (I managed to\nfind only blackhole_am by Michael Paquier which doesn't do much). So\nmany things i had to figure out by myself. Hopefully some of those\nissues have a solution that i just can't see.\n\n1. Unlike FDW API, in pluggable storage API there are no routines like\n\"begin modify table\" and \"end modify table\" and there is no shared\nstate between insert/update/delete calls. 
In context of compressed\nstorage that means that there is no exact moment when we can finalize\nwrites (compress, split into chunks etc). We can set a callback at the\nend of transaction, but in this case we'll have to keep latest\nmodifications for every table in memory until the end of transaction.\nAs for shared state we also can maintain some kind of key-value data\nstructure with per-relation shared state. But that again requires memory.\nBecause of this currently I only implemented COPY semantics.\n\n2. It looks like I cannot implement custom storage options. E.g. for\ncompressed storage it makes sense to implement different compression\nmethods (lz4, zstd etc.) and corresponding options (like compression\nlevel). But as i can see storage options (like fillfactor etc) are\nhardcoded and are not extensible. Possible solution is to use GUCs\nwhich would work but is not extremely convinient.\n\n3. A bit surprising limitation that in order to use bitmap scan the\nmaximum number of tuples per page must not exceed 291 due to\nMAX_TUPLES_PER_PAGE macro in tidbitmap.c which is calculated based on\n8kb page size. In case of 1mb page this restriction feels really\nlimiting.\n\n4. In order to use WAL-logging each page must start with a standard 24\nbyte PageHeaderData even if it is needless for storage itself. Not a\nbig deal though. Another (acutally documented) WAL-related limitation\nis that only generic WAL can be used within extension. So unless\ninserts are made in bulks it's going to require a lot of disk space to\naccomodate logs and wide bandwith for replication.\n\npg_cryogen extension is still in developement so if other issues arise\ni'll post them here. At this point the extension already supports\ninserts via COPY, index and bitmap scans, vacuum (only freezing),\nanalyze. It uses lz4 compression and currently i'm working on adding\ndifferent compression methods. 
I'm also willing to work on\nforementioned issues in API if community verifies them as valid.\n\n\n[1] https://github.com/adjust/pg_cryogen\n\nThanks,\nIldar", "msg_date": "Thu, 10 Oct 2019 15:25:31 +0200", "msg_from": "Ildar Musin <ildar@adjust.com>", "msg_from_op": true, "msg_subject": "Compressed pluggable storage experiments" }, { "msg_contents": "On 2019-Oct-10, Ildar Musin wrote:\n\n> 1. Unlike FDW API, in pluggable storage API there are no routines like\n> \"begin modify table\" and \"end modify table\" and there is no shared\n> state between insert/update/delete calls.\n\nHmm. I think adding a begin/end to modifytable is a reasonable thing to\ndo (it'd be a no-op for heap and zheap I guess).\n\n> 2. It looks like I cannot implement custom storage options. E.g. for\n> compressed storage it makes sense to implement different compression\n> methods (lz4, zstd etc.) and corresponding options (like compression\n> level). But as i can see storage options (like fillfactor etc) are\n> hardcoded and are not extensible. Possible solution is to use GUCs\n> which would work but is not extremely convinient.\n\nYeah, the reloptions module is undergoing some changes. I expect that\nthere will be a way to extend reloptions from an extension, at the end\nof that set of patches.\n\n> 3. A bit surprising limitation that in order to use bitmap scan the\n> maximum number of tuples per page must not exceed 291 due to\n> MAX_TUPLES_PER_PAGE macro in tidbitmap.c which is calculated based on\n> 8kb page size. In case of 1mb page this restriction feels really\n> limiting.\n\nI suppose this is a hardcoded limit that needs to be fixed by patching\ncore as we make table AM more pervasive.\n\n> 4. In order to use WAL-logging each page must start with a standard 24\n> byte PageHeaderData even if it is needless for storage itself. Not a\n> big deal though. 
Another (acutally documented) WAL-related limitation\n> is that only generic WAL can be used within extension. So unless\n> inserts are made in bulks it's going to require a lot of disk space to\n> accomodate logs and wide bandwith for replication.\n\nNot sure what to suggest. Either you should ignore this problem, or\nyou should fix it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 17 Oct 2019 12:47:47 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Compressed pluggable storage experiments" }, { "msg_contents": "Hi,\n\nOn 2019-10-17 12:47:47 -0300, Alvaro Herrera wrote:\n> On 2019-Oct-10, Ildar Musin wrote:\n> \n> > 1. Unlike FDW API, in pluggable storage API there are no routines like\n> > \"begin modify table\" and \"end modify table\" and there is no shared\n> > state between insert/update/delete calls.\n> \n> Hmm. I think adding a begin/end to modifytable is a reasonable thing to\n> do (it'd be a no-op for heap and zheap I guess).\n\nI'm fairly strongly against that. Adding two additional \"virtual\"\nfunction calls for something that's rarely going to be used, seems like\nadding too much overhead to me.\n\n\n> > 2. It looks like I cannot implement custom storage options. E.g. for\n> > compressed storage it makes sense to implement different compression\n> > methods (lz4, zstd etc.) and corresponding options (like compression\n> > level). But as i can see storage options (like fillfactor etc) are\n> > hardcoded and are not extensible. Possible solution is to use GUCs\n> > which would work but is not extremely convinient.\n> \n> Yeah, the reloptions module is undergoing some changes. I expect that\n> there will be a way to extend reloptions from an extension, at the end\n> of that set of patches.\n\nCool.\n\n\n> > 3. 
A bit surprising limitation that in order to use bitmap scan the\n> > maximum number of tuples per page must not exceed 291 due to\n> > MAX_TUPLES_PER_PAGE macro in tidbitmap.c which is calculated based on\n> > 8kb page size. In case of 1mb page this restriction feels really\n> > limiting.\n> \n> I suppose this is a hardcoded limit that needs to be fixed by patching\n> core as we make table AM more pervasive.\n\nThat's not unproblematic - a dynamic limit would make a number of\ncomputations more expensive, and we already spend plenty CPU cycles\nbuilding the tid bitmap. And we'd waste plenty of memory just having all\nthat space for the worst case. ISTM that we \"just\" need to replace the\nTID bitmap with some tree like structure.\n\n\n> > 4. In order to use WAL-logging each page must start with a standard 24\n> > byte PageHeaderData even if it is needless for storage itself. Not a\n> > big deal though. Another (acutally documented) WAL-related limitation\n> > is that only generic WAL can be used within extension. So unless\n> > inserts are made in bulks it's going to require a lot of disk space to\n> > accomodate logs and wide bandwith for replication.\n> \n> Not sure what to suggest. Either you should ignore this problem, or\n> you should fix it.\n\nI think if it becomes a problem you should ask for an rmgr ID to use for\nyour extension, which we encode and then then allow to set the relevant\nrmgr callbacks for that rmgr id at startup. 
But you should obviously\nfirst develop the WAL logging etc, and make sure it's beneficial over\ngeneric wal logging for your case.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 18 Oct 2019 03:25:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Compressed pluggable storage experiments" }, { "msg_contents": "On Fri, Oct 18, 2019 at 03:25:05AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-10-17 12:47:47 -0300, Alvaro Herrera wrote:\n>> On 2019-Oct-10, Ildar Musin wrote:\n>>\n>> > 1. Unlike FDW API, in pluggable storage API there are no routines like\n>> > \"begin modify table\" and \"end modify table\" and there is no shared\n>> > state between insert/update/delete calls.\n>>\n>> Hmm. I think adding a begin/end to modifytable is a reasonable thing to\n>> do (it'd be a no-op for heap and zheap I guess).\n>\n>I'm fairly strongly against that. Adding two additional \"virtual\"\n>function calls for something that's rarely going to be used, seems like\n>adding too much overhead to me.\n>\n\nThat seems a bit strange to me. Sure - if there's an alternative way to\nachieve the desired behavior (clear way to finalize writes etc.), then\ncool, let's do that. But forcing people to use invonvenient workarounds\nseems like a bad thing to me - having a convenient and clear API is\nquite valueable, IMHO.\n\nLet's see if this actually has a measuerable overhead first.\n\n>\n>> > 2. It looks like I cannot implement custom storage options. E.g. for\n>> > compressed storage it makes sense to implement different compression\n>> > methods (lz4, zstd etc.) and corresponding options (like compression\n>> > level). But as i can see storage options (like fillfactor etc) are\n>> > hardcoded and are not extensible. Possible solution is to use GUCs\n>> > which would work but is not extremely convinient.\n>>\n>> Yeah, the reloptions module is undergoing some changes. 
I expect that\n>> there will be a way to extend reloptions from an extension, at the end\n>> of that set of patches.\n>\n>Cool.\n>\n\nYep.\n\n>\n>> > 3. A bit surprising limitation that in order to use bitmap scan the\n>> > maximum number of tuples per page must not exceed 291 due to\n>> > MAX_TUPLES_PER_PAGE macro in tidbitmap.c which is calculated based on\n>> > 8kb page size. In case of 1mb page this restriction feels really\n>> > limiting.\n>>\n>> I suppose this is a hardcoded limit that needs to be fixed by patching\n>> core as we make table AM more pervasive.\n>\n>That's not unproblematic - a dynamic limit would make a number of\n>computations more expensive, and we already spend plenty CPU cycles\n>building the tid bitmap. And we'd waste plenty of memory just having all\n>that space for the worst case. ISTM that we \"just\" need to replace the\n>TID bitmap with some tree like structure.\n>\n\nI think the zedstore has roughly the same problem, and Heikki mentioned\nsome possible solutions to dealing with it in his pgconfeu talk (and it\nwas discussed in the zedstore thread, I think).\n\n>\n>> > 4. In order to use WAL-logging each page must start with a standard 24\n>> > byte PageHeaderData even if it is needless for storage itself. Not a\n>> > big deal though. Another (acutally documented) WAL-related limitation\n>> > is that only generic WAL can be used within extension. So unless\n>> > inserts are made in bulks it's going to require a lot of disk space to\n>> > accomodate logs and wide bandwith for replication.\n>>\n>> Not sure what to suggest. Either you should ignore this problem, or\n>> you should fix it.\n>\n>I think if it becomes a problem you should ask for an rmgr ID to use for\n>your extension, which we encode and then then allow to set the relevant\n>rmgr callbacks for that rmgr id at startup. 
But you should obviously\n>first develop the WAL logging etc, and make sure it's beneficial over\n>generic wal logging for your case.\n>\n\nAFAIK compressed/columnar engines generally implement two types of\nstorage - write-optimized store (WOS) and read-optimized store (ROS),\nwhere the WOS is mostly just an uncompressed append-only buffer, and ROS\nis compressed etc. ISTM the WOS would benefit from a more elaborate WAL\nlogging, but ROS should be mostly fine with the generic WAL logging.\n\nBut yeah, we should test and measure how beneficial that actually is.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 19 Oct 2019 14:23:23 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Compressed pluggable storage experiments" }, { "msg_contents": "Hi all, This is a continuation of the above thread...\n\n>> > 4. In order to use WAL-logging each page must start with a standard 24\n>> > byte PageHeaderData even if it is needless for storage itself. Not a\n>> > big deal though. Another (acutally documented) WAL-related limitation\n>> > is that only generic WAL can be used within extension. So unless\n>> > inserts are made in bulks it's going to require a lot of disk space to\n>> > accomodate logs and wide bandwith for replication.\n>>\n>> Not sure what to suggest. 
Either you should ignore this problem, or\n>> you should fix it.\n\nI am working on an environment similar to the above extension(pg_cryogen\nwhich experiments pluggable storage api's) but don't have much knowledge on\npg's logical replication..\nPlease suggest some approaches to support pg's logical replication for a\ntable with a custom access method, which writes generic wal record.\n\nOn Wed, 17 Aug 2022 at 19:04, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Fri, Oct 18, 2019 at 03:25:05AM -0700, Andres Freund wrote:\n> >Hi,\n> >\n> >On 2019-10-17 12:47:47 -0300, Alvaro Herrera wrote:\n> >> On 2019-Oct-10, Ildar Musin wrote:\n> >>\n> >> > 1. Unlike FDW API, in pluggable storage API there are no routines like\n> >> > \"begin modify table\" and \"end modify table\" and there is no shared\n> >> > state between insert/update/delete calls.\n> >>\n> >> Hmm. I think adding a begin/end to modifytable is a reasonable thing to\n> >> do (it'd be a no-op for heap and zheap I guess).\n> >\n> >I'm fairly strongly against that. Adding two additional \"virtual\"\n> >function calls for something that's rarely going to be used, seems like\n> >adding too much overhead to me.\n> >\n>\n> That seems a bit strange to me. Sure - if there's an alternative way to\n> achieve the desired behavior (clear way to finalize writes etc.), then\n> cool, let's do that. But forcing people to use invonvenient workarounds\n> seems like a bad thing to me - having a convenient and clear API is\n> quite valueable, IMHO.\n>\n> Let's see if this actually has a measuerable overhead first.\n>\n> >\n> >> > 2. It looks like I cannot implement custom storage options. E.g. for\n> >> > compressed storage it makes sense to implement different compression\n> >> > methods (lz4, zstd etc.) and corresponding options (like compression\n> >> > level). But as i can see storage options (like fillfactor etc) are\n> >> > hardcoded and are not extensible. 
Possible solution is to use GUCs\n> >> > which would work but is not extremely convinient.\n> >>\n> >> Yeah, the reloptions module is undergoing some changes. I expect that\n> >> there will be a way to extend reloptions from an extension, at the end\n> >> of that set of patches.\n> >\n> >Cool.\n> >\n>\n> Yep.\n>\n> >\n> >> > 3. A bit surprising limitation that in order to use bitmap scan the\n> >> > maximum number of tuples per page must not exceed 291 due to\n> >> > MAX_TUPLES_PER_PAGE macro in tidbitmap.c which is calculated based on\n> >> > 8kb page size. In case of 1mb page this restriction feels really\n> >> > limiting.\n> >>\n> >> I suppose this is a hardcoded limit that needs to be fixed by patching\n> >> core as we make table AM more pervasive.\n> >\n> >That's not unproblematic - a dynamic limit would make a number of\n> >computations more expensive, and we already spend plenty CPU cycles\n> >building the tid bitmap. And we'd waste plenty of memory just having all\n> >that space for the worst case. ISTM that we \"just\" need to replace the\n> >TID bitmap with some tree like structure.\n> >\n>\n> I think the zedstore has roughly the same problem, and Heikki mentioned\n> some possible solutions to dealing with it in his pgconfeu talk (and it\n> was discussed in the zedstore thread, I think).\n>\n> >\n> >> > 4. In order to use WAL-logging each page must start with a standard 24\n> >> > byte PageHeaderData even if it is needless for storage itself. Not a\n> >> > big deal though. Another (acutally documented) WAL-related limitation\n> >> > is that only generic WAL can be used within extension. So unless\n> >> > inserts are made in bulks it's going to require a lot of disk space to\n> >> > accomodate logs and wide bandwith for replication.\n> >>\n> >> Not sure what to suggest. 
Either you should ignore this problem, or\n> >> you should fix it.\n> >\n> >I think if it becomes a problem you should ask for an rmgr ID to use for\n> >your extension, which we encode and then then allow to set the relevant\n> >rmgr callbacks for that rmgr id at startup. But you should obviously\n> >first develop the WAL logging etc, and make sure it's beneficial over\n> >generic wal logging for your case.\n> >\n>\n> AFAIK compressed/columnar engines generally implement two types of\n> storage - write-optimized store (WOS) and read-optimized store (ROS),\n> where the WOS is mostly just an uncompressed append-only buffer, and ROS\n> is compressed etc. ISTM the WOS would benefit from a more elaborate WAL\n> logging, but ROS should be mostly fine with the generic WAL logging.\n>\n> But yeah, we should test and measure how beneficial that actually is.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 18 Aug 2022 12:02:32 +0530", "msg_from": "Natarajan R <nataraj3098@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compressed pluggable storage experiments" } ]
[ { "msg_contents": "Update query stuck in a loop. Looping in _bt_moveright().\n\nExecInsertIndexTuples->btinsert->_bt_doinsert->_bt_search->_bt_moveright\n\nMid Tree Node downlink path taken by _bt_search points to a BTP_DELETED Leaf.\n\nbtpo_next is also DELETED but not in the tree.\n\nbtpo_next->btpo_next is NOT deleted but in the mid tree as a lesser key value.\n\nThus creating an endless loop in moveright.\n\n\nThe deleted page is in the tree. The left leaf page still points to it. The right leaf page points back to the deleted page.\n\nThe deleted page itself has arbitrary prev and next pointer. But the next pointer does lead to a loop.\n\n\nIs there any way, crash recovery or otherwise, that could result in a BTP_DELETED leaf which is still in the tree both in terms of the mid tree pointing down to it but also linked to by the 2 leaf siblings. It is as if the mid tree and two siblings were updated but never made it to disk but the DELETED page itself got written.\n\n\nEven after a restart the hang reoccurred. Rebuild fixed the problem. Unfortunately I'm not sure if I have enough log history left to examine. But I do have the index file before the rebuild and it clearly has the loop on disk.", "msg_date": "Thu, 10 Oct 2019 12:48:18 -0700 (PDT)", "msg_from": "Daniel Wood <hexexpert@comcast.net>", "msg_from_op": true, "msg_subject": "BTP_DELETED leaf still in tree" }, { "msg_contents": "On Thu, Oct 10, 2019 at 12:48 PM Daniel Wood <hexexpert@comcast.net> wrote:\n> Update query stuck in a loop. Looping in _bt_moveright().\n\nYou didn't say which PostgreSQL versions were involved, and if the\ndatabase was ever upgraded using pg_upgrade. Those details could\nmatter.\n\n> ExecInsertIndexTuples->btinsert->_bt_doinsert->_bt_search->_bt_moveright\n>\n> Mid Tree Node downlink path taken by _bt_search points to a BTP_DELETED Leaf.\n\nThis should hardly ever happen -- it is barely possible for an index\nscan to land on a BTP_DELETED leaf page (or a half-dead page) when\nfollowing a downlink in its parent. Recall that nbtree uses Lehman &\nYao's design, so _bt_search() does not \"couple\" buffer locks on the\nway down. It would probably be impossible to observe this happening\nwithout carefully setting breakpoints in multiple sessions.\n\nIf this happens reliably for you, which it sounds like, then you can\nalready assume that the index is corrupt.\n\n> btpo_next is also DELETED but not in the tree.\n>\n> btpo_next->btpo_next is NOT deleted but in the mid tree as a lesser key value.\n>\n> Thus creating an endless loop in moveright.\n\nOffhand, these other details sound normal. 
The side links are still\nneeded in fully deleted (BTP_DELETED) pages. And, moving right and\nfinding lesser key values (not greater key values) is normal with\ndeleted pages, since page deletion makes the keyspace move right, not\nleft (moving the keyspace left is how the source Lanin & Shasha paper\ndoes it, though).\n\nActually, I take it back -- the looping part is not normal. The\nbtpo_next->btpo_next page has no business linking back to the\noriginal/first deleted page you mentioned. That's just odd.\n\nCan you provide me with a dump of the page images? The easiest way of\ngetting a page dump is described here:\n\nhttps://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#contrib.2Fpageinspect_page_dump\n\nIf I had to guess, I'd guess that this was due to a generic storage problem.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Oct 2019 13:18:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: BTP_DELETED leaf still in tree" }, { "msg_contents": "On Thu, Oct 10, 2019 at 1:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> You didn't say which PostgreSQL versions were involved, and if the\n> database was ever upgraded using pg_upgrade. Those details could\n> matter.\n\nIn case you weren't aware, contrib/amcheck should make detected and\ndiagnosing these kinds of problems a lot easier. You should prefer to\nuse the bt_index_parent_check() variant (which will block writes and\nVACUUM, but not reads). It will verify that sibling pointers are in\nagreement with each other, and that leaf pages contain keys that are\ncovered by the relevant separator keys from the parent level.\n\nIf you happen to be using v11, then you might also want to use the\nheapallindexed option -- that will verify that the heap and index are\nin agreement. If the issue is on v12, the new \"rootdescend\" option can\ndetect very subtle cross-level transitive consistency options. 
(This\nis only available in v12 because that was the version that made all\nentries in the index unique by using heap TID as a unique-ifier.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 10 Oct 2019 13:26:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: BTP_DELETED leaf still in tree" }, { "msg_contents": "> On October 10, 2019 at 1:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> \n> On Thu, Oct 10, 2019 at 12:48 PM Daniel Wood <hexexpert@comcast.net> wrote:\n> > Update query stuck in a loop. Looping in _bt_moveright().\n> \n> You didn't say which PostgreSQL versions were involved, and if the\n> database was ever upgraded using pg_upgrade. Those details could\n> matter.\n\nPG_VERSION says 10. I suspect we are running 10.9. I have no idea if pg_upgrade was ever done.\n\n> > ExecInsertIndexTuples->btinsert->_bt_doinsert->_bt_search->_bt_moveright\n> >\n> > Mid Tree Node downlink path taken by _bt_search points to a BTP_DELETED Leaf.\n> \n> This should hardly ever happen -- it is barely possible for an index\n> scan to land on a BTP_DELETED leaf page (or a half-dead page) when\n> following a downlink in its parent. Recall that nbtree uses Lehman &\n> Yao's design, so _bt_search() does not \"couple\" buffer locks on the\n> way down. It would probably be impossible to observe this happening\n> without carefully setting breakpoints in multiple sessions.\n> \n> If this happens reliably for you, which it sounds like, then you can\n> already assume that the index is corrupt.\n> \n> > btpo_next is also DELETED but not in the tree.\n> >\n> > btpo_next->btpo_next is NOT deleted but in the mid tree as a lesser key value.\n> >\n> > Thus creating an endless loop in moveright.\n> \n> Offhand, these other details sound normal. The side links are still\n> needed in fully deleted (BTP_DELETED) pages. 
And, moving right and\n> finding lesser key values (not greater key values) is normal with\n> deleted pages, since page deletion makes the keyspace move right, not\n> left (moving the keyspace left is how the source Lanin & Shasha paper\n> does it, though).\n> \n> Actually, I take it back -- the looping part is not normal. The\n> btpo_next->btpo_next page has no business linking back to the\n> original/first deleted page you mentioned. That's just odd.\n\nbtpo_next->btpo_next does NOT link directly back to the 1st deleted page. It simply links to some in-use page which is 50 or so leaf pages back in the tree. Eventually we do reach the two deleted pages again. Only the first one is in the 'tree'.\n\n> Can you provide me with a dump of the page images? The easiest way of\n> getting a page dump is described here:\n\nCustomer data. Looks like meaningless customer data (5 digit key values). But too much paperwork. :-)\n\nThe hard part for me to understand isn't just why the DELETED leaf node is still referenced in the mid tree node.\nIt is that the step which sets BTP_DELETED should have also linked its leaf and right siblings together. But this hasn't been done.\n\nCould the page have already have been dirty, but because of \"target != leafblkno\", we didn't stamp a new LSN on it. Could this allow us to write the DELETED dirty page without the XLOG_BTREE_MARK_PAGE_HALFDEAD and XLOG_BTREE_UNLINK_PAGE being flushed? 
Of course, I don't understand the \"target != leafblkno\".\n\nIn any case, thanks.\n\n> https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#contrib.2Fpageinspect_page_dump\n> \n> If I had to guess, I'd guess that this was due to a generic storage problem.\n> \n> -- \n> Peter Geoghegan\n\n\n", "msg_date": "Thu, 10 Oct 2019 16:44:46 -0700 (PDT)", "msg_from": "Daniel Wood <hexexpert@comcast.net>", "msg_from_op": true, "msg_subject": "Re: BTP_DELETED leaf still in tree" }, { "msg_contents": "On Fri, Oct 11, 2019 at 12:44 AM Daniel Wood <hexexpert@comcast.net> wrote:\n> > Actually, I take it back -- the looping part is not normal. The\n> > btpo_next->btpo_next page has no business linking back to the\n> > original/first deleted page you mentioned. That's just odd.\n>\n> btpo_next->btpo_next does NOT link directly back to the 1st deleted page. It simply links to some in-use page which is 50 or so leaf pages back in the tree.\n\nThat sounds more normal.\n\n> > Can you provide me with a dump of the page images? The easiest way of\n> > getting a page dump is described here:\n>\n> Customer data. Looks like meaningless customer data (5 digit key values). But too much paperwork. :-)\n\nWell, it was worth a try. ;-)\n\n> The hard part for me to understand isn't just why the DELETED leaf node is still referenced in the mid tree node.\n> It is that the step which sets BTP_DELETED should have also linked its leaf and right siblings together. 
But this hasn't been done.\n\nBefore the page becomes BTP_DELETED, it must first be BTP_HALF_DEAD.\nAnd that is also the point where it should be impossible for scans to\nreach the page, more or less (there is still that narrow window where\nthe downlink can followed just before its deleted, making the scan\nland on the BTP_HALF_DEAD page -- I mentioned this in my first mail).\n\n> Could the page have already have been dirty, but because of \"target != leafblkno\", we didn't stamp a new LSN on it. Could this allow us to write the DELETED dirty page without the XLOG_BTREE_MARK_PAGE_HALFDEAD and XLOG_BTREE_UNLINK_PAGE being flushed? Of course, I don't understand the \"target != leafblkno\".\n\nThe \"target != leafblkno\" thing concerns whether or not this is a\nmulti-level deletion (actually, that's not quite right, since even a\nmulti-level deletion has \"target == leafblkno\" at the point where it\nfinally gets to mark a half dead page fully deleted).\n\nYes, it's odd that this deleted page exists, even though its siblings\nstill link to it -- the distinction between a fully deleted page and a\nhalf dead page is really just the fact that a fully deleted page is\nsupposed to not be linked to from anywhere, including still-live\nsiblings. But you don't have to get that far to see evidence of\ncorruption -- having a downlink pointing to a half-dead page is\nevidence enough of corruption.\n\n(Actually, it's more complicated than that -- see the comments in\namcheck's bt_downlink_check() function from Postgres 11 or 12.\nMulti-level deletion is a case where a half-dead page has a downlink,\nbut the subtree undergoing deletion is still isolated in about the\nsame way as it is in the simple single level case, since the\n\"topparent\" downlink is zapped at the same point that the leaf page is\nmarked half-dead. 
The important thing is that even half-dead pages are\nnot reachable by descending the tree, except for the tiny window where\nthe topparent downlink is observed the instant before it is zapped.)\n\nIf page deletion didn't exist, it would be so much easier to\nunderstand the B-Tree code.\n\nMy guess is that there wasn't sufficient WAL to replay the page\ndeletion, but some of the buffers were written out. You might have\n\"gotten away with it\" if the internal page also happened to be written\nout along with everything else, but it just didn't work out that way.\nRemember, there are two weird things about this, that overlap with two\ndistinct types of atomic operations: the fact that the downlink still\nexists at all, and the fact that the sidelinks still exist at all.\nThis smells like a problem with slightly inconsistent page images, as\nopposed to a problem with how one particular atomic operation did\nsomething. It's not actually surprising that this would be the first\nplace that you'd notice a generic issue, since many other things are\n\"more forgiving\" in various ways.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 12 Oct 2019 06:22:34 +0100", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: BTP_DELETED leaf still in tree" } ]
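The stuck right-link walk described in this thread can be illustrated with a small sketch. This is a toy model in Python, not PostgreSQL code: the page dictionaries, flag names, and the traversal function are invented for illustration only, but they show why a scan that keeps following `btpo_next` through deleted pages never terminates once the side links form a cycle.

```python
# Toy model of a right-link walk over B-tree pages.  NOT PostgreSQL
# internals -- pages, flags, and the walk are simplified inventions.

DELETED = "BTP_DELETED"
LIVE = "live"

def move_right(pages, start, max_steps=1000):
    """Follow btpo_next links until a live page is found.

    Returns (block, steps) on success, or raises RuntimeError when the
    walk exceeds max_steps -- modeling the stuck-in-a-loop symptom
    reported in this thread.
    """
    blk, steps = start, 0
    while pages[blk]["flag"] == DELETED:
        blk = pages[blk]["next"]
        steps += 1
        if steps > max_steps:
            raise RuntimeError("right-link walk did not terminate")
    return blk, steps

# Healthy case: fully deleted pages still carry side links, but the
# walk quickly lands on a live page, so reaching a deleted page is
# recoverable.
healthy = {
    1: {"flag": DELETED, "next": 2},
    2: {"flag": DELETED, "next": 3},
    3: {"flag": LIVE, "next": None},
}
assert move_right(healthy, 1) == (3, 2)

# Corrupted case: the side links eventually cycle back through the
# deleted pages, so the walk loops forever (here cut off by max_steps).
corrupted = {
    1: {"flag": DELETED, "next": 2},
    2: {"flag": DELETED, "next": 1},
}
try:
    move_right(corrupted, 1, max_steps=10)
    raise AssertionError("expected the walk to be detected as stuck")
except RuntimeError:
    pass
```

In the real report the cycle is longer (the links pass through live pages before coming back around), but the failure mode is the same: the traversal depends on the invariant that deleted pages are unreachable and their links eventually lead somewhere live, and corruption broke that invariant.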
[ { "msg_contents": "Good Afternoon,\n\nI posted about this on another thread here\n<https://www.postgresql.org/message-id/CAMa1XUiH3hL3KGwdKGjnJdJeo2A5H1o1uhtXWBkmMqixrDCWMA@mail.gmail.com>,\nbut the topic was not precisely planner issues, so I wanted to post it here.\n\nI am running Postgres 11.5. I have a table that is insert-only and has 312\nmillion rows. It is also pruned continuously to only past year. The size\nof the table is 223 GB with indexes, 140GB without. One of the fields is\nrec_insert_time timestamptz. Here are all potentially relevant table stats:\n\nschemaname | foo\nrelname | log_table\nn_tup_ins | 86850506\nn_tup_upd | 0\nn_tup_del | 68916115\nn_tup_hot_upd | 0\nn_live_tup | 312810691\nn_dead_tup | 9405132\nn_mod_since_analyze | 11452\nlast_vacuum | 2019-09-20 09:41:43.78383-05\nlast_autovacuum | 2019-10-04 13:56:16.810355-05\nlast_analyze | 2019-10-10 09:34:26.807695-05\nlast_autoanalyze |\nvacuum_count | 2\nautovacuum_count | 1\nanalyze_count | 13\nautoanalyze_count | 0\ntotal_relation_size | 223 GB\nrelation_size | 139 GB\ntable_size | 140 GB\n\nI have a simple query looking at past 10 days based on rec_insert_time, and\nit will not choose the BRIN index even with several configurations. 
Here\nare all my relevant indexes (I intentionally do not have a btree on\nrec_insert_time because I believe BRIN *should* fit better here):\n\n\"log_table_brand_id_product_rec_insert_time_idx\" btree (brand_id, product,\nrec_insert_time)\n\"log_table_rec_insert_time_idx\" brin (rec_insert_time)\n\"log_table_rec_insert_time_idx1\" brin (rec_insert_time) WITH\n(pages_per_range='64')\n\"rec_insert_time_brin_1000\" brin (rec_insert_time) WITH\n(pages_per_range='1000')\n\nAnd here is the SQL:\nSELECT\n category, source, MIN(rec_insert_time) OVER (partition by source order by\nrec_insert_time) AS first_source_time, MAX(rec_insert_time) OVER (partition\nby source order by rec_insert_time) AS last_source_time\nFROM (SELECT DISTINCT ON (brand_id, last_change, log_id)\ncategory, source(field1) AS source, rec_insert_time\nFROM log_table l\nINNER JOIN public.small_join_table filter ON filter.category = l.category\nWHERE field1 IS NOT NULL AND l.category = 'music'\nAND l.rec_insert_time >= now() - interval '10 days'\nORDER BY brand_id, last_change, log_id, rec_insert_time DESC) unique_cases\n;\n\nThis query will choose a seq scan on log_table every time in spite of these\nBRIN indexes on rec_insert_time.\n@Michael Lewis <mlewis@entrata.com> had suggested I check\ndefault_statistics_target for this column. 
I raised it to 5000 for this\ncolumn and still it's choosing a seq scan.\n\nHere is default chosen plan (takes 2 minutes 12 seconds):\n WindowAgg (cost=24437881.80..24437897.70 rows=707 width=120) (actual\ntime=132173.173..132173.222 rows=53 loops=1)\n Output: unique_cases.category, unique_cases.source,\nmin(unique_cases.rec_insert_time) OVER (?),\nmax(unique_cases.rec_insert_time) OVER (?), unique_cases.rec_insert_time\n Buffers: shared hit=391676 read=17772642 dirtied=4679 written=7\n -> Sort (cost=24437881.80..24437883.56 rows=707 width=104) (actual\ntime=132173.146..132173.149 rows=53 loops=1)\n Output: unique_cases.source, unique_cases.rec_insert_time,\nunique_cases.category\n Sort Key: unique_cases.source, unique_cases.rec_insert_time\n Sort Method: quicksort Memory: 32kB\n Buffers: shared hit=391676 read=17772642 dirtied=4679 written=7\n -> Subquery Scan on unique_cases (cost=24437834.20..24437848.34\nrows=707 width=104) (actual time=132172.950..132173.062 rows=53 loops=1)\n Output: unique_cases.source, unique_cases.rec_insert_time,\nunique_cases.category\n Buffers: shared hit=391676 read=17772642 dirtied=4679\nwritten=7\n -> Unique (cost=24437834.20..24437841.27 rows=707\nwidth=124) (actual time=132172.946..132173.048 rows=53 loops=1)\n Output: l.category, (source(l.field1)),\nl.rec_insert_time, l.brand_id, l.last_change, l.log_id\n Buffers: shared hit=391676 read=17772642 dirtied=4679\nwritten=7\n -> Sort (cost=24437834.20..24437835.96 rows=707\nwidth=124) (actual time=132172.939..132172.962 rows=466 loops=1)\n Output: l.category, (source(l.field1)),\nl.rec_insert_time, l.brand_id, l.last_change, l.log_id\n Sort Key: l.brand_id, l.last_change, l.log_id,\nl.rec_insert_time DESC\n Sort Method: quicksort Memory: 90kB\n Buffers: shared hit=391676 read=17772642\ndirtied=4679 written=7\n -> Nested Loop (cost=0.00..24437800.73\nrows=707 width=124) (actual time=4096.253..132171.425 rows=466 loops=1)\n Output: l.category, source(l.field1),\nl.rec_insert_time, 
l.brand_id, l.last_change, l.log_id\n Inner Unique: true\n Join Filter: ((l.category)::text =\nfilter.category)\n Rows Removed by Join Filter: 346704\n Buffers: shared hit=391676 read=17772642\ndirtied=4679 written=7\n -> Seq Scan on foo.log_table l\n (cost=0.00..24420483.80 rows=707 width=99) (actual\ntime=4095.763..132112.686 rows=466 loops=1)\n Output: <hidden>\n Filter: ((l.field1 IS NOT NULL) AND\n(l.category = 'music'::name) AND (l.rec_insert_time >= (now() - '10\ndays'::interval)))\n Rows Removed by Filter: 312830265\n Buffers: shared hit=391675\nread=17772636 dirtied=4679 written=7\n -> Materialize (cost=0.00..33.98\nrows=1399 width=8) (actual time=0.001..0.036 rows=745 loops=466)\n Output: filter.category\n Buffers: shared hit=1 read=6\n -> Seq Scan on\npublic.small_join_table filter (cost=0.00..26.99 rows=1399 width=8)\n(actual time=0.054..0.189 rows=745 loops=1)\n Output: filter.category\n Buffers: shared hit=1 read=6\n Planning Time: 0.552 ms\n Execution Time: 132173.657 ms\n(38 rows)\n\n\nHere is the plan I get when I turn off seqscan and indexscan!!! 
(21\nseconds):\nSET enable_seqscan TO false;\nSET enable_indexscan TO false;\n\n WindowAgg (cost=24363224.32..24363240.85 rows=735 width=120) (actual\ntime=21337.992..21338.040 rows=53 loops=1)\n Output: unique_cases.category, unique_cases.source,\nmin(unique_cases.rec_insert_time) OVER (?),\nmax(unique_cases.rec_insert_time) OVER (?), unique_cases.rec_insert_time\n Buffers: shared hit=1471 read=1509030 dirtied=121 written=1631\n -> Sort (cost=24363224.32..24363226.15 rows=735 width=104) (actual\ntime=21337.965..21337.968 rows=53 loops=1)\n Output: unique_cases.source, unique_cases.rec_insert_time,\nunique_cases.category\n Sort Key: unique_cases.source, unique_cases.rec_insert_time\n Sort Method: quicksort Memory: 32kB\n Buffers: shared hit=1471 read=1509030 dirtied=121 written=1631\n -> Subquery Scan on unique_cases (cost=24363174.62..24363189.32\nrows=735 width=104) (actual time=21337.777..21337.889 rows=53 loops=1)\n Output: unique_cases.source, unique_cases.rec_insert_time,\nunique_cases.category\n Buffers: shared hit=1471 read=1509030 dirtied=121\nwritten=1631\n -> Unique (cost=24363174.62..24363181.97 rows=735\nwidth=124) (actual time=21337.772..21337.874 rows=53 loops=1)\n Output: l.category, (source(l.field1)),\nl.rec_insert_time, l.brand_id, l.last_change, l.log_id\n Buffers: shared hit=1471 read=1509030 dirtied=121\nwritten=1631\n -> Sort (cost=24363174.62..24363176.46 rows=735\nwidth=124) (actual time=21337.767..21337.791 rows=466 loops=1)\n Output: l.category, (source(l.field1)),\nl.rec_insert_time, l.brand_id, l.last_change, l.log_id\n Sort Key: l.brand_id, l.last_change, l.log_id,\nl.rec_insert_time DESC\n Sort Method: quicksort Memory: 90kB\n Buffers: shared hit=1471 read=1509030\ndirtied=121 written=1631\n -> Nested Loop (cost=2393.31..24363139.63\nrows=735 width=124) (actual time=824.212..21336.263 rows=466 loops=1)\n Output: l.category, source(l.field1),\nl.rec_insert_time, l.brand_id, l.last_change, l.log_id\n Inner Unique: true\n Buffers: shared 
hit=1471 read=1509030\ndirtied=121 written=1631\n -> Bitmap Heap Scan on foo.log_table l\n (cost=2391.71..24360848.29 rows=735 width=99) (actual\ntime=824.133..21329.054 rows=466 loops=1)\n Output: <hidden>\n Recheck Cond: (l.rec_insert_time >=\n(now() - '10 days'::interval))\n Rows Removed by Index Recheck:\n8187584\n Filter: ((l.field1 IS NOT NULL) AND\n(l.category = 'music'::name))\n Rows Removed by Filter: 19857107\n Heap Blocks: lossy=1509000\n Buffers: shared hit=73 read=1509030\ndirtied=121 written=1631\n -> Bitmap Index Scan on\nrec_insert_time_brin_1000 (cost=0.00..2391.52 rows=157328135 width=0)\n(actual time=72.391..72.391 rows=15090000 loops=1)\n Index Cond: (l.rec_insert_time\n>= (now() - '10 days'::interval))\n Buffers: shared hit=29 read=74\n -> Bitmap Heap Scan on\npublic.small_join_table filter (cost=1.60..3.12 rows=1 width=8) (actual\ntime=0.010..0.010 rows=1 loops=466)\n Output: filter.category,\nfilter.type, filter.location\n Recheck Cond: (filter.category =\n(l.category)::text)\n Heap Blocks: exact=466\n Buffers: shared hit=1398\n -> Bitmap Index Scan on\nsmall_join_table_pkey (cost=0.00..1.60 rows=1 width=0) (actual\ntime=0.007..0.007 rows=1 loops=466)\n Index Cond: (filter.category =\n(l.category)::text)\n Buffers: shared hit=932\n Planning Time: 1.869 ms\n Execution Time: 21338.244 ms\n(44 rows)\n\nNotice it chooses the smallest BRIN index with 1000 pages per range, and\nthis is far faster than the seq scan.\n\nI do believe the estimate is actually way off. 
Just a plain EXPLAIN of the\nlatter estimates 10x more rows than actual:\n WindowAgg (cost=24354689.19..24354705.07 rows=706 width=120)\n -> Sort (cost=24354689.19..24354690.95 rows=706 width=104)\n Sort Key: unique_cases.source, unique_cases.rec_insert_time\n -> Subquery Scan on unique_cases (cost=24354641.66..24354655.78\nrows=706 width=104)\n -> Unique (cost=24354641.66..24354648.72 rows=706\nwidth=124)\n -> Sort (cost=24354641.66..24354643.42 rows=706\nwidth=124)\n Sort Key: l.brand_id, l.last_change, l.log_id,\nl.rec_insert_time DESC\n -> Nested Loop (cost=2385.42..24354608.25\nrows=706 width=124)\n -> Bitmap Heap Scan on log_table l\n (cost=2383.82..24352408.26 rows=706 width=99)\n Recheck Cond: (rec_insert_time >=\n(now() - '10 days'::interval))\n Filter: ((field1 IS NOT NULL) AND\n(category = 'music'::name))\n -> Bitmap Index Scan on\nrec_insert_time_brin_1000 (cost=0.00..2383.64 rows=156577455 width=0)\n Index Cond: (rec_insert_time\n>= (now() - '10 days'::interval))\n -> Bitmap Heap Scan on small_join_table\nfilter (cost=1.60..3.12 rows=1 width=8)\n Recheck Cond: (category =\n(l.category)::text)\n -> Bitmap Index Scan on\nsmall_join_table_pkey (cost=0.00..1.60 rows=1 width=0)\n Index Cond: (category =\n(l.category)::text)\n(17 rows)\n\n\nHere is EXPLAIN only of the default chosen plan:\n WindowAgg (cost=24437857.18..24437873.07 rows=706 width=120)\n -> Sort (cost=24437857.18..24437858.95 rows=706 width=104)\n Sort Key: unique_cases.source, unique_cases.rec_insert_time\n -> Subquery Scan on unique_cases (cost=24437809.66..24437823.78\nrows=706 width=104)\n -> Unique (cost=24437809.66..24437816.72 rows=706\nwidth=124)\n -> Sort (cost=24437809.66..24437811.42 rows=706\nwidth=124)\n Sort Key: l.brand_id, l.last_change, l.log_id,\nl.rec_insert_time DESC\n -> Nested Loop (cost=0.00..24437776.25\nrows=706 width=124)\n Join Filter: ((l.category)::text =\nfilter.category)\n -> Seq Scan on log_table l\n (cost=0.00..24420483.80 rows=706 width=99)\n Filter: ((field1 
IS NOT NULL) AND\n(category = 'music'::name) AND (rec_insert_time >= (now() - '10\ndays'::interval)))\n                                 ->  Materialize  (cost=0.00..33.98\nrows=1399 width=8)\n                                       ->  Seq Scan on small_join_table\nfilter  (cost=0.00..26.99 rows=1399 width=8)\n(13 rows)\n\n\n\nAny insight into this is much appreciated.  This is just one example of\nmany similar issues I have been finding with BRIN indexes scaling\npredictably with insert-only workloads.\n\nThanks!\nJeremy
", "msg_date": "Thu, 10 Oct 2019 16:58:11 -0500", "msg_from": "Jeremy Finzel <finzelj@gmail.com>", "msg_from_op": true, "msg_subject": "BRIN index which is much faster never chosen by planner" }, { "msg_contents": "Since the optimizer is choosing a seq scan over index scan when it seems\nlike it has good row estimates in both cases, to me that may mean costs of\nscanning index are expected to be high. Is this workload on SSD? Has the\nrandom_page_cost config been decreased from default 4 (compared with cost\nof 1 unit for sequential scan)?\n\nYour buffer hits aren't great. What is shared_buffers set to? How much ram\non this cluster?\n\nWith this table being insert only, one assumes correlation is very high on\nthe data in this column as shown in pg_stats, but have your confirmed?\n\nTo me, distinct ON is often a bad code smell and probably can be re-written\nto be much more efficient with GROUP BY, lateral & order by, or some other\ntool. Same with the window function. It is a powerful tool, but sometimes\nnot the right one.\n\nIs \"source\" a function that is called on field1? What is it doing/how is it\ndefined?\n
", "msg_date": "Thu, 10 Oct 2019 16:19:36 -0600", "msg_from": "Michael Lewis <mlewis@entrata.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "On Thu, Oct 10, 2019 at 04:58:11PM -0500, Jeremy Finzel wrote:\n>\n> ...\n>\n>Notice it chooses the smallest BRIN index with 1000 pages per range, and\n>this is far faster than the seq scan.\n>\n>I do believe the estimate is actually way off. Just a plain EXPLAIN of the\n>latter estimates 10x more rows than actual:\n> WindowAgg (cost=24354689.19..24354705.07 rows=706 width=120)\n> -> Sort (cost=24354689.19..24354690.95 rows=706 width=104)\n> Sort Key: unique_cases.source, unique_cases.rec_insert_time\n> -> Subquery Scan on unique_cases (cost=24354641.66..24354655.78\n>rows=706 width=104)\n> -> Unique (cost=24354641.66..24354648.72 rows=706\n>width=124)\n> -> Sort (cost=24354641.66..24354643.42 rows=706\n>width=124)\n> Sort Key: l.brand_id, l.last_change, l.log_id,\n>l.rec_insert_time DESC\n> -> Nested Loop (cost=2385.42..24354608.25\n>rows=706 width=124)\n> -> Bitmap Heap Scan on log_table l\n> (cost=2383.82..24352408.26 rows=706 width=99)\n> Recheck Cond: (rec_insert_time >=\n>(now() - '10 days'::interval))\n> Filter: ((field1 IS NOT NULL) AND\n>(category = 'music'::name))\n> -> Bitmap Index Scan on\n>rec_insert_time_brin_1000 (cost=0.00..2383.64 rows=156577455 width=0)\n> Index Cond: (rec_insert_time\n>>= (now() - '10 days'::interval))\n> -> Bitmap Heap 
Scan on small_join_table\n>filter (cost=1.60..3.12 rows=1 width=8)\n> Recheck Cond: (category =\n>(l.category)::text)\n> -> Bitmap Index Scan on\n>small_join_table_pkey (cost=0.00..1.60 rows=1 width=0)\n> Index Cond: (category =\n>(l.category)::text)\n>(17 rows)\n>\n>\n>Here is EXPLAIN only of the default chosen plan:\n> WindowAgg (cost=24437857.18..24437873.07 rows=706 width=120)\n> -> Sort (cost=24437857.18..24437858.95 rows=706 width=104)\n> Sort Key: unique_cases.source, unique_cases.rec_insert_time\n> -> Subquery Scan on unique_cases (cost=24437809.66..24437823.78\n>rows=706 width=104)\n> -> Unique (cost=24437809.66..24437816.72 rows=706\n>width=124)\n> -> Sort (cost=24437809.66..24437811.42 rows=706\n>width=124)\n> Sort Key: l.brand_id, l.last_change, l.log_id,\n>l.rec_insert_time DESC\n> -> Nested Loop (cost=0.00..24437776.25\n>rows=706 width=124)\n> Join Filter: ((l.category)::text =\n>filter.category)\n> -> Seq Scan on log_table l\n> (cost=0.00..24420483.80 rows=706 width=99)\n> Filter: ((field1 IS NOT NULL) AND\n>(category = 'music'::name) AND (rec_insert_time >= (now() - '10\n>days'::interval)))\n> -> Materialize (cost=0.00..33.98\n>rows=1399 width=8)\n> -> Seq Scan on small_join_table\n>filter (cost=0.00..26.99 rows=1399 width=8)\n>(13 rows)\n>\n>\n>\n>Any insight into this is much appreciated. This is just one example of\n>many similar issues I have been finding with BRIN indexes scaling\n>predictably with insert-only workloads.\n>\n\nIt's quite interesting planning issue. The cost estimates are:\n\n -> Seq Scan on foo.log_table l\n (cost=0.00..24420483.80 rows=707 width=99) (actual\n\nwhile for the bitmap heap scan it looks like this:\n\n -> Bitmap Heap Scan on foo.log_table l\n (cost=2391.71..24360848.29 rows=735 width=99) (actual\n\nSo the planner actualy thinks the bitmap heap scan is a tad *cheaper*\nbut picks the seq scan anyway. 
This is likely because we don't really\ncompare the exact costs, but we do fuzzy comparison - the plan has to be\nat least 1% cheaper to dominate the existing plan. This allows us to\nsave some work when replacing the paths.\n\nIn this case the difference is only about 0.2%, so we keep the seqscan\npath. The real question is why the planner came to this cost, when it\ngot pretty good row estimates etc.\n\nLooking at the cost_bitmap_heap_scan() I think the costing issue comes\nmostly from this bit:\n\n /*\n * For small numbers of pages we should charge spc_random_page_cost\n * apiece, while if nearly all the table's pages are being read, it's more\n * appropriate to charge spc_seq_page_cost apiece. The effect is\n * nonlinear, too. For lack of a better idea, interpolate like this to\n * determine the cost per page.\n */\n if (pages_fetched >= 2.0)\n cost_per_page = spc_random_page_cost -\n (spc_random_page_cost - spc_seq_page_cost)\n * sqrt(pages_fetched / T);\n else\n cost_per_page = spc_random_page_cost;\n\nThe index scan is estimated to return 157328135 rows, i.e. about 50% of\nthe table (apparently it's ~10x more than the actual number). This is\nproduced by compute_bitmap_pages() which also computes pages_fetched,\nand I guess that's going to be pretty close to all pages, because with\nT = 18350080 (which is 140GB) and using\n\n pages_fetched = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);\n\nwe get 29731418 (which is more than 18350080, so it gets clamped).\n\nSo this kinda seems like the optimizer kinda believes it'll have to scan\nthe whole table anyway. In reality, of course, the number of tuples\nreturned by the index is 10x lower, so the formula above would give us\nonly about 10975262 pages (so ~1/2 the table). The actual number of\npages is however even lower - only about 1509030, i.e. ~8% of the table.\n\nSo this seems like a combination of multiple issues. 
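The arithmetic above can be double-checked with a quick numeric sketch. The helper names below are made up for illustration; they mirror the compute_bitmap_pages() approximation and the cost_bitmap_heap_scan() interpolation quoted above, using the default 4.0/1.0 page costs for simplicity (the cluster in this thread reportedly runs with random_page_cost = 1.5):

```python
import math

def pages_fetched(T, tuples_fetched):
    # Mackert-Lohman style approximation used by compute_bitmap_pages(),
    # clamped to the relation size T (in pages).
    est = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched)
    return min(est, T)

def cost_per_page(pages, T, spc_random_page_cost=4.0, spc_seq_page_cost=1.0):
    # Interpolation quoted above: the per-page charge slides from the
    # random-page cost toward the sequential cost as the scan approaches
    # reading every page of the table.
    if pages >= 2.0:
        return spc_random_page_cost - \
            (spc_random_page_cost - spc_seq_page_cost) * math.sqrt(pages / T)
    return spc_random_page_cost

T = 18350080            # ~140GB table, figure from the thread
est_tuples = 157328135  # bitmap index scan row estimate (~10x too high)

print(pages_fetched(T, est_tuples))        # clamps to T: "whole table"
print(pages_fetched(T, est_tuples // 10))  # ~11M pages at a 10x lower estimate
print(cost_per_page(T, T))                 # all pages fetched -> seq_page_cost
```

Running this reproduces the numbers discussed: with the 10x-inflated row estimate the fetched-pages formula exceeds the table size and gets clamped, so the bitmap heap scan is costed almost like reading the whole table sequentially, which is why it ends up within a hair of the seqscan cost.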
Firstly, the bitmap\nindex scan on rec_insert_time_brin_1000 estimate seems somewhat poor. It\nmight be worth increasing stats target on that column, or something like\nthat. Not sure, but it's about the only \"fixable\" thing here, I think.\n\nThe other issue is that the estimation of pages fetched using bitmap\nheap scan is rather crude - but that's simply hard, and I don't think we\ncan fundamentally improve this.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 11 Oct 2019 01:13:09 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "On Fri, 11 Oct 2019 at 12:13, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> The index scan is estimated to return 157328135 rows, i.e. about 50% of\n> the table (apparently it's ~10x more than the actual number).\n\nDon't pay too much attention to the actual row counts from bitmap\nindex scans of brin indexes. The value is entirely made up, per:\n\n/*\n* XXX We have an approximation of the number of *pages* that our scan\n* returns, but we don't have a precise idea of the number of heap tuples\n* involved.\n*/\nreturn totalpages * 10;\n\nin bringetbitmap().\n\n(Ideally EXPLAIN would be written in such a way that it didn't even\nshow the actual rows for node types that don't return rows. However,\nI'm sure that would break many explain parsing tools)\n\nThe planner might be able to get a better estimate on the number of\nmatching rows if the now() - interval '10 days' expression was\nreplaced with 'now'::timestamptz - interval '10 days'. However, care\nwould need to be taken to ensure the plan is never prepared since\n'now' is evaluated during parse. 
The same care must be taken when\ncreating views, functions, stored procedures and the like.\n\nThe planner will just estimate the selectivity of now() - interval '10\ndays' by using DEFAULT_INEQ_SEL, which is 0.3333333333333333, so it\nthinks it'll get 1/3rd of the table. Using 'now' will allow the\nplanner to lookup actual statistics on that column which will likely\ngive a much better estimate, which by the looks of it, likely will\nresult in one of those BRIN index being used.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Fri, 11 Oct 2019 13:22:07 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "On Thu, Oct 10, 2019 at 6:22 PM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> The planner might be able to get a better estimate on the number of\n> matching rows if the now() - interval '10 days' expression was\n> replaced with 'now'::timestamptz - interval '10 days'. However, care\n> would need to be taken to ensure the plan is never prepared since\n> 'now' is evaluated during parse. The same care must be taken when\n> creating views, functions, stored procedures and the like.\n>\n> The planner will just estimate the selectivity of now() - interval '10\n> days' by using DEFAULT_INEQ_SEL, which is 0.3333333333333333, so it\n> thinks it'll get 1/3rd of the table. Using 'now' will allow the\n> planner to lookup actual statistics on that column which will likely\n> give a much better estimate, which by the looks of it, likely will\n> result in one of those BRIN index being used.\n>\n\nThis surprised me a bit, and would have significant implications. I tested\na few different tables in our system and get the same row count estimate\nwith either WHERE condition. 
Perhaps I am missing a critical piece of what\nyou said.\n\nexplain\nselect * from charges where posted_on > now() - interval '10 days';\n\nexplain\nselect * from charges where posted_on > 'now'::timestamptz - interval '10\ndays';", "msg_date": "Thu, 10 Oct 2019 22:48:08 -0600", "msg_from": "Michael Lewis <mlewis@entrata.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "On Fri, 11 Oct 2019 at 17:48, Michael Lewis <mlewis@entrata.com> wrote:\n>\n> On Thu, Oct 10, 2019 at 6:22 PM David Rowley <david.rowley@2ndquadrant.com> wrote:\n>> The planner will just estimate the selectivity of now() - interval '10\n>> days' by using DEFAULT_INEQ_SEL, which is 0.3333333333333333, so it\n>> thinks it'll get 1/3rd of the table. Using 'now' will allow the\n>> planner to lookup actual statistics on that column which will likely\n>> give a much better estimate, which by the looks of it, likely will\n>> result in one of those BRIN index being used.\n>\n>\n> This surprised me a bit, and would have significant implications. I tested\na few different tables in our system and get the same row count estimate\nwith either WHERE condition. Perhaps I am missing a critical piece of what you said.\n>\n> explain\n> select * from charges where posted_on > now() - interval '10 days';\n>\n> explain\n> select * from charges where posted_on > 'now'::timestamptz - interval '10 days';\n\nYou're right. On looking more closely at the code, it uses\nestimate_expression_value(), which performs additional constant\nfolding of expressions for selectivity purposes only. It does end up\ncalling the now() function and evaluating the now() - interval '10\ndays'; expression into a Const.\n\nThe header comment for that function reads:\n\n* estimate_expression_value\n *\n * This function attempts to estimate the value of an expression for\n * planning purposes. 
It is in essence a more aggressive version of\n * eval_const_expressions(): we will perform constant reductions that are\n * not necessarily 100% safe, but are reasonable for estimation purposes.\n\nSo I take back what I said about using 'now'::timestamptz instead of now().\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Fri, 11 Oct 2019 21:47:51 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "On Thu, Oct 10, 2019 at 7:22 PM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> The planner might be able to get a better estimate on the number of\n> matching rows if the now() - interval '10 days' expression was\n> replaced with 'now'::timestamptz - interval '10 days'. However, care\n> would need to be taken to ensure the plan is never prepared since\n> 'now' is evaluated during parse. The same care must be taken when\n> creating views, functions, stored procedures and the like.\n>\n\nYou are on to something here I think with the now() function, even if above\nsuggestion is not exactly right as you said further down. I am finding a\nhard-coded timestamp gives the right query plan. 
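The distinction being discussed can be illustrated with a toy sketch. This is not PostgreSQL's actual selectivity code; the function and the histogram here are invented for illustration. When the comparison value cannot be reduced to a constant, the planner has nothing to look up against the column's histogram and falls back to DEFAULT_INEQ_SEL; when it can (as estimate_expression_value() manages for now() - interval '10 days'), it can interpolate a far more accurate fraction:

```python
from bisect import bisect_left

DEFAULT_INEQ_SEL = 0.3333333333333333

def gt_selectivity(histogram_bounds, const=None):
    # Toy "column > const" selectivity: with no constant, fall back to the
    # hard-wired default; with one, take the fraction of equi-depth
    # histogram buckets lying above it.
    if const is None:                       # value not reducible to a Const
        return DEFAULT_INEQ_SEL
    i = bisect_left(histogram_bounds, const)
    return 1.0 - i / len(histogram_bounds)

# 100 equi-depth bounds over ~1000 days of data (as day numbers)
bounds = list(range(0, 1000, 10))
print(gt_selectivity(bounds))        # fallback: assume 1/3 of the table
print(gt_selectivity(bounds, 990))   # with a constant: only the last bucket
```

The gap between the fallback (a third of a 300M-row table) and the histogram-based estimate is exactly the kind of difference that flips the plan between a seqscan and the BRIN bitmap scan.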
I also tested same with\neven bigger window (last 16 days) and it yet still chooses the brin index.\n\nfoo_prod=# EXPLAIN\nfoo_prod-# SELECT\nfoo_prod-# category, source, MIN(rec_insert_time) OVER (partition by\nsource order by rec_insert_time) AS first_source_time, MAX(rec_insert_time)\nOVER (partition by source order by rec_insert_time) AS last_source_time\nfoo_prod-# FROM (SELECT DISTINCT ON (brand_id, last_change, log_id)\nfoo_prod(# category, source(field1) AS source, rec_insert_time\nfoo_prod(# FROM log_table l\nfoo_prod(# INNER JOIN public.small_join_table filter ON filter.category =\nl.category\nfoo_prod(# WHERE field1 IS NOT NULL AND l.category = 'music'\nfoo_prod(# AND l.rec_insert_time >= now() - interval '10 days'\nfoo_prod(# ORDER BY brand_id, last_change, log_id, rec_insert_time DESC)\nunique_cases;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=24436329.10..24436343.56 rows=643 width=120)\n -> Sort (cost=24436329.10..24436330.70 rows=643 width=104)\n Sort Key: unique_cases.source, unique_cases.rec_insert_time\n -> Subquery Scan on unique_cases (cost=24436286.24..24436299.10\nrows=643 width=104)\n -> Unique (cost=24436286.24..24436292.67 rows=643\nwidth=124)\n -> Sort (cost=24436286.24..24436287.85 rows=643\nwidth=124)\n Sort Key: l.brand_id, l.last_change, l.log_id,\nl.rec_insert_time DESC\n -> Nested Loop (cost=0.00..24436256.25\nrows=643 width=124)\n Join Filter: ((l.category)::text =\nfilter.category)\n -> Seq Scan on small_join_table filter\n (cost=0.00..26.99 rows=1399 width=8)\n -> Materialize (cost=0.00..24420487.02\nrows=643 width=99)\n -> Seq Scan on log_table l\n (cost=0.00..24420483.80 rows=643 width=99)\n Filter: ((field1 IS NOT NULL)\nAND (category = 'music'::name) AND (rec_insert_time >= (now() - '10\ndays'::interval)))\n(13 rows)\n\nfoo_prod=# SELECT now() - interval '10 days';\n 
?column?\n-------------------------------\n 2019-10-01 08:20:38.115471-05\n(1 row)\n\nfoo_prod=# EXPLAIN\nSELECT\n category, source, MIN(rec_insert_time) OVER (partition by source order by\nrec_insert_time) AS first_source_time, MAX(rec_insert_time) OVER (partition\nby source order by rec_insert_time) AS last_source_time\nFROM (SELECT DISTINCT ON (brand_id, last_change, log_id)\ncategory, source(field1) AS source, rec_insert_time\nFROM log_table l\nINNER JOIN public.small_join_table filter ON filter.category = l.category\nWHERE field1 IS NOT NULL AND l.category = 'music'\nAND l.rec_insert_time >= '2019-10-01 08:20:38.115471-05'\nORDER BY brand_id, last_change, log_id, rec_insert_time DESC) unique_cases;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=19664576.17..19664590.63 rows=643 width=120)\n -> Sort (cost=19664576.17..19664577.77 rows=643 width=104)\n Sort Key: unique_cases.source, unique_cases.rec_insert_time\n -> Subquery Scan on unique_cases (cost=19664533.31..19664546.17\nrows=643 width=104)\n -> Unique (cost=19664533.31..19664539.74 rows=643\nwidth=124)\n -> Sort (cost=19664533.31..19664534.92 rows=643\nwidth=124)\n Sort Key: l.brand_id, l.last_change, l.log_id,\nl.rec_insert_time DESC\n -> Nested Loop (cost=3181.19..19664503.32\nrows=643 width=124)\n -> Gather (cost=3180.91..19662574.92\nrows=643 width=99)\n Workers Planned: 3\n -> Parallel Bitmap Heap Scan on\nlog_table l (cost=2180.91..19661510.62 rows=207 width=99)\n Recheck Cond: (rec_insert_time\n>= '2019-10-01 08:20:38.115471-05'::timestamp with time zone)\n Filter: ((field1 IS NOT NULL)\nAND (category = 'music'::name))\n -> Bitmap Index Scan on\nrec_insert_time_brin_1000 (cost=0.00..2180.75 rows=142602171 width=0)\n Index Cond:\n(rec_insert_time >= '2019-10-01 08:20:38.115471-05'::timestamp with time\nzone)\n\n\nLet me know if this rings any 
bells! I will respond to other comments with\nother replies.\n\nThanks,\nJeremy", "msg_date": "Fri, 11 Oct 2019 09:08:05 -0500", "msg_from": "Jeremy Finzel <finzelj@gmail.com>", "msg_from_op": true, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "Dear Michael,\n\nOn Thu, Oct 10, 2019 at 5:20 PM Michael Lewis <mlewis@entrata.com> wrote:\n\n> Since the optimizer is choosing a seq scan over index scan when it seems\n> like it has good row estimates in both cases, to me that may mean costs of\n> scanning index are expected to be high. Is this workload on SSD? Has the\n> random_page_cost config been decreased from default 4 (compared with cost\n> of 1 unit for sequential scan)?\n>\n\nIt's 1.5\n\n\n> Your buffer hits aren't great. What is shared_buffers set to? How much ram\n> on this cluster?\n>\n\nshared_buffers is 4GB. It has 500G of RAM, but server has several clusters\non it.\n\n\n>\n> With this table being insert only, one assumes correlation is very high on\n> the data in this column as shown in pg_stats, but have your confirmed?\n>\n\nYes, but the issue isn't with the BRIN index performing badly or being\nfragmented. It's that it performs great (7x faster than the seq scan) but\npostgres doesn't pick using it. I have seen this same issue also in other\nattempts I have made to use BRIN.\n\n\n> To me, distinct ON is often a bad code smell and probably can be\n> re-written to be much more efficient with GROUP BY, lateral & order by, or\n> some other tool. Same with the window function. It is a powerful tool, but\n> sometimes not the right one.\n>\n\nI don't really agree, but it's beside the point because the issue is not in\naggregation. It's pre-aggregation. Indeed if I run my query as a simple\nselect (as I tried) it's the exact same planning issue. (In my experience,\ndistinct on for given example is the fastest. 
Same with window functions\nwhich prevent inefficient self-joins)\n\n\n> Is \"source\" a function that is called on field1? What is it doing/how is\n> it defined?\n>\n\nI can't see how that matters either, but the \"source\" function is a mask\nfor a built-in pg function that is trivial. This whole query is masked so\nas not to expose our actual prod query, but I hope it's still\nunderstandable enough :).\n\nMy question is not how to make this query faster in general. It's that I\nwant to use BRIN indexes very much, but I'm not sure I can trust they will\nscale with the right query plan like I know BTREE will.\n\nThanks!\nJeremy", "msg_date": "Fri, 11 Oct 2019 09:19:33 -0500", "msg_from": "Jeremy Finzel <finzelj@gmail.com>", "msg_from_op": true, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "On Thu, Oct 10, 2019 at 6:13 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n> So this seems like a combination of multiple issues. Firstly, the bitmap\n> index scan on rec_insert_time_brin_1000 estimate seems somewhat poor. It\n> might be worth increasing stats target on that column, or something like\n> that. Not sure, but it's about the only \"fixable\" thing here, I think.\n>\n\nIn the OP I had mentioned that I already increased it to 5000, and it made\nno difference. Ah fine.... let's go ahead and try 10000... 
still no change:\n\nfoo_prod=# ALTER TABLE log_table ALTER COLUMN rec_insert_time SET\nSTATISTICS 10000;\nALTER TABLE\nfoo_prod=# ANALYZE log_table;\nANALYZE\nfoo_prod=# EXPLAIN\nSELECT\n category, source, MIN(rec_insert_time) OVER (partition by source order by\nrec_insert_time) AS first_source_time, MAX(rec_insert_time) OVER (partition\nby source order by rec_insert_time) AS last_source_time\nFROM (SELECT DISTINCT ON (brand_id, last_change, log_id)\ncategory, source(field1) AS source, rec_insert_time\nFROM log_table l\nINNER JOIN public.small_join_table filter ON filter.category = l.category\nWHERE field1 IS NOT NULL AND l.category = 'music'\nAND l.rec_insert_time >= now() - interval '10 days'\nORDER BY brand_id, last_change, log_id, rec_insert_time DESC) unique_cases;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=24451299.20..24451313.21 rows=623 width=120)\n -> Sort (cost=24451299.20..24451300.75 rows=623 width=104)\n Sort Key: unique_cases.source, unique_cases.rec_insert_time\n -> Subquery Scan on unique_cases (cost=24451257.82..24451270.28\nrows=623 width=104)\n -> Unique (cost=24451257.82..24451264.05 rows=623\nwidth=124)\n -> Sort (cost=24451257.82..24451259.38 rows=623\nwidth=124)\n Sort Key: l.brand_id, l.last_change, l.log_id,\nl.rec_insert_time DESC\n -> Nested Loop (cost=0.00..24451228.90\nrows=623 width=124)\n Join Filter: ((l.category)::text =\nfilter.category)\n -> Seq Scan on small_join_table filter\n (cost=0.00..26.99 rows=1399 width=8)\n -> Materialize (cost=0.00..24435949.31\nrows=623 width=99)\n -> Seq Scan on log_table l\n (cost=0.00..24435946.20 rows=623 width=99)\n Filter: ((field1 IS NOT NULL)\nAND (category = 'music'::name) AND (rec_insert_time >= (now() - '10\ndays'::interval)))\n(13 rows)\n\nThanks,\nJeremy", "msg_date": "Fri, 11 Oct 2019 09:31:08 -0500", "msg_from": "Jeremy Finzel <finzelj@gmail.com>", "msg_from_op": true, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "On Fri, Oct 11, 2019 at 09:08:05AM -0500, Jeremy Finzel wrote:\n>On Thu, Oct 10, 2019 at 7:22 PM David Rowley <david.rowley@2ndquadrant.com>\n>wrote:\n>\n>> The planner might be able to get a better estimate on the number of\n>> matching rows if the now() - interval '10 days' expression was\n>> replaced with 'now'::timestamptz - interval '10 days'. However, care\n>> would need to be taken to ensure the plan is never prepared since\n>> 'now' is evaluated during parse. The same care must be taken when\n>> creating views, functions, stored procedures and the like.\n>>\n>\n>You are on to something here I think with the now() function, even if above\n>suggestion is not exactly right as you said further down. I am finding a\n>hard-coded timestamp gives the right query plan. 
I also tested same with\n>even bigger window (last 16 days) and it yet still chooses the brin index.\n>\n>foo_prod=# EXPLAIN\n>foo_prod-# SELECT\n>foo_prod-# category, source, MIN(rec_insert_time) OVER (partition by\n>source order by rec_insert_time) AS first_source_time, MAX(rec_insert_time)\n>OVER (partition by source order by rec_insert_time) AS last_source_time\n>foo_prod-# FROM (SELECT DISTINCT ON (brand_id, last_change, log_id)\n>foo_prod(# category, source(field1) AS source, rec_insert_time\n>foo_prod(# FROM log_table l\n>foo_prod(# INNER JOIN public.small_join_table filter ON filter.category =\n>l.category\n>foo_prod(# WHERE field1 IS NOT NULL AND l.category = 'music'\n>foo_prod(# AND l.rec_insert_time >= now() - interval '10 days'\n>foo_prod(# ORDER BY brand_id, last_change, log_id, rec_insert_time DESC)\n>unique_cases;\n>\n> QUERY PLAN\n>-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> WindowAgg (cost=24436329.10..24436343.56 rows=643 width=120)\n> -> Sort (cost=24436329.10..24436330.70 rows=643 width=104)\n> Sort Key: unique_cases.source, unique_cases.rec_insert_time\n> -> Subquery Scan on unique_cases (cost=24436286.24..24436299.10\n>rows=643 width=104)\n> -> Unique (cost=24436286.24..24436292.67 rows=643\n>width=124)\n> -> Sort (cost=24436286.24..24436287.85 rows=643\n>width=124)\n> Sort Key: l.brand_id, l.last_change, l.log_id,\n>l.rec_insert_time DESC\n> -> Nested Loop (cost=0.00..24436256.25\n>rows=643 width=124)\n> Join Filter: ((l.category)::text =\n>filter.category)\n> -> Seq Scan on small_join_table filter\n> (cost=0.00..26.99 rows=1399 width=8)\n> -> Materialize (cost=0.00..24420487.02\n>rows=643 width=99)\n> -> Seq Scan on log_table l\n> (cost=0.00..24420483.80 rows=643 width=99)\n> Filter: ((field1 IS NOT NULL)\n>AND (category = 'music'::name) AND (rec_insert_time >= (now() - '10\n>days'::interval)))\n>(13 
rows)\n>\n>foo_prod=# SELECT now() - interval '10 days';\n> ?column?\n>-------------------------------\n> 2019-10-01 08:20:38.115471-05\n>(1 row)\n>\n>foo_prod=# EXPLAIN\n>SELECT\n> category, source, MIN(rec_insert_time) OVER (partition by source order by\n>rec_insert_time) AS first_source_time, MAX(rec_insert_time) OVER (partition\n>by source order by rec_insert_time) AS last_source_time\n>FROM (SELECT DISTINCT ON (brand_id, last_change, log_id)\n>category, source(field1) AS source, rec_insert_time\n>FROM log_table l\n>INNER JOIN public.small_join_table filter ON filter.category = l.category\n>WHERE field1 IS NOT NULL AND l.category = 'music'\n>AND l.rec_insert_time >= '2019-10-01 08:20:38.115471-05'\n>ORDER BY brand_id, last_change, log_id, rec_insert_time DESC) unique_cases;\n>\n> QUERY PLAN\n>-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> WindowAgg (cost=19664576.17..19664590.63 rows=643 width=120)\n> -> Sort (cost=19664576.17..19664577.77 rows=643 width=104)\n> Sort Key: unique_cases.source, unique_cases.rec_insert_time\n> -> Subquery Scan on unique_cases (cost=19664533.31..19664546.17\n>rows=643 width=104)\n> -> Unique (cost=19664533.31..19664539.74 rows=643\n>width=124)\n> -> Sort (cost=19664533.31..19664534.92 rows=643\n>width=124)\n> Sort Key: l.brand_id, l.last_change, l.log_id,\n>l.rec_insert_time DESC\n> -> Nested Loop (cost=3181.19..19664503.32\n>rows=643 width=124)\n> -> Gather (cost=3180.91..19662574.92\n>rows=643 width=99)\n> Workers Planned: 3\n> -> Parallel Bitmap Heap Scan on\n>log_table l (cost=2180.91..19661510.62 rows=207 width=99)\n> Recheck Cond: (rec_insert_time\n>>= '2019-10-01 08:20:38.115471-05'::timestamp with time zone)\n> Filter: ((field1 IS NOT NULL)\n>AND (category = 'music'::name))\n> -> Bitmap Index Scan on\n>rec_insert_time_brin_1000 (cost=0.00..2180.75 rows=142602171 width=0)\n> Index 
Cond:\n>(rec_insert_time >= '2019-10-01 08:20:38.115471-05'::timestamp with time\n>zone)\n>\n>\n>Let me know if this rings any bells! I will respond to other comments with\n>other replies.\n>\n\nMy guess - it's (at least partially) due to cpu_operator_cost,\nassociated with the now() call. When replaced with a literal, this cost\ndisappears and so the total query cost decreases.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 11 Oct 2019 20:46:20 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": ">\n> The other issue is that the estimation of pages fetched using bitmap\n> heap scan is rather crude - but that's simply hard, and I don't think we\n> can fundamentally improve this.\n>\n\nI wanted to follow up on this specific issue. Isn't this the heart of the\nmatter and a fundamental problem? If I want to rely on BRIN indexes as in\na straightforward case as explained in OP, but I don't know if the planner\nwill be nearly reliable enough, how can I depend on them in production? Is\nthis not considered a planner bug or should this kind of case be documented\nas problematic for BRIN? As another way to look at it: is there a\nconfiguration parameter that could be added specific to BRIN or bitmapscan\nto provide help to cases like this?\n\nOn freshly analyzed tables, I tried my original query again and found that\neven with now() - 3 days it does not choose the BRIN index. In fact it\nchose another btree on the table like (id1, id2, rec_insert_time). With\nwarm cache, the pg-chosen plan takes 40 seconds to execute, whereas when I\nforce a BRIN scan it takes only 4 seconds.\n\nI could understand more if the execution times were close, but the actual\nBRIN index is orders of magnitude faster than the plan Postgres is\nchoosing. 
I appreciate the feedback on this very much, as I am quite eager\nto use BRIN indexes!!!\n\nThanks,\nJeremy", "msg_date": "Mon, 14 Oct 2019 14:42:51 -0500", "msg_from": "Jeremy Finzel <finzelj@gmail.com>", "msg_from_op": true, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "On Tue, 15 Oct 2019 at 08:43, Jeremy Finzel <finzelj@gmail.com> wrote:\n> I wanted to follow up on this specific issue. Isn't this the heart of the matter and a fundamental problem? 
If I want to rely on BRIN indexes as in a straightforward case as explained in OP, but I don't know if the planner will be nearly reliable enough, how can I depend on them in production? Is this not considered a planner bug or should this kind of case be documented as problematic for BRIN? As another way to look at it: is there a configuration parameter that could be added specific to BRIN or bitmapscan to provide help to cases like this?\n>\n> On freshly analyzed tables, I tried my original query again and found that even with now() - 3 days it does not choose the BRIN index. In fact, it chose another btree on the table like (id1, id2, rec_insert_time). With warm cache, the pg-chosen plan takes 40 seconds to execute, whereas when I force a BRIN scan it takes only 4 seconds.\n\nAnother thing which you might want to look at is the correlation\ncolumn in the pg_stats view for the rec_insert_time column. Previous\nto 7e534adcd, BRIN index were costed based on the selectivity\nestimate. There was no accountability towards the fact that the pages\nfor those records might have been spread out over the entire table.\nPost 7e534adcd, we use the correlation estimate to attempt to estimate\nhow many pages (more specifically \"ranges\") we're likely to hit based\non that and the selectivity estimate. This commit intended to fix the\nissue we had with BRIN indexes being selected far too often. Of\ncourse, the correlation is based on the entire table, if there are\nsubsets of the table that are perhaps perfectly correlated, then the\nplanner is not going to know about that. It's possible that some of\nyour older rec_insert_times are spread out far more than the newer\nones. 
As a test, you could try creating a new table and copying the\nrecords over to it in rec_insert_time order and seeing if the BRIN\nindex is selected for that table (after having performed an ANALYZE).\n\nIt would be interesting if you could show the pg_stats row for the\ncolumn so that we can see if the correlation is low.\n\nYou can see from the code below that the final selectivity strongly\ninfluenced by the correlation value (REF: brincostestimate)\n\nqualSelectivity = clauselist_selectivity(root, indexQuals,\nbaserel->relid,\nJOIN_INNER, NULL);\n\n/* work out the actual number of ranges in the index */\nindexRanges = Max(ceil((double) baserel->pages / statsData.pagesPerRange),\n 1.0);\n\n/*\n* Now calculate the minimum possible ranges we could match with if all of\n* the rows were in the perfect order in the table's heap.\n*/\nminimalRanges = ceil(indexRanges * qualSelectivity);\n\n/*\n* Now estimate the number of ranges that we'll touch by using the\n* indexCorrelation from the stats. 
Careful not to divide by zero (note\n* we're using the absolute value of the correlation).\n*/\nif (*indexCorrelation < 1.0e-10)\nestimatedRanges = indexRanges;\nelse\nestimatedRanges = Min(minimalRanges / *indexCorrelation, indexRanges);\n\n/* we expect to visit this portion of the table */\nselec = estimatedRanges / indexRanges;\n\nCLAMP_PROBABILITY(selec);\n\n\nMy overall view on this is that the BRIN index is not that great since\nit's not eliminating that many rows by using it.\n\n From above we see:\n\n> Bitmap Heap Scan on foo.log_table l (cost=2391.71..24360848.29 rows=735 width=99) (actual time=824.133..21329.054 rows=466 loops=1)\n Output: <hidden>\n Recheck Cond:\n(l.rec_insert_time >= (now() - '10 days'::interval))\n Rows Removed by Index Recheck: 8187584\n Filter: ((l.field1 IS NOT NULL)\nAND (l.category = 'music'::name))\n Rows Removed by Filter: 19857107\n Heap Blocks: lossy=1509000\n\nSo you have just 466 rows matching these quals, but the executor had\nto scan 1.5 million pages to get those and filter out 8.1 million rows\non the recheck then 19.8 million on the filter. You've mentioned that\nthe table's heap is 139 GB, which is about 18 million pages. It seems\nyour query would perform much better if you had a btree index such as\n(category, rec_insert_time) where field1 is not null;,\n\nOf course, you've mentioned that you are finding when the plan uses\nthe BRIN index that it executes more quickly, but I think you're going\nto find BRIN unreliable for tables anything other than INSERT-only\ntables which the records are always inserted with an ever-increasing\nor decreasing value in the BRIN indexed column. If you start\nperforming UPDATEs then that's going to create holes that new record\nwill fill and cause the correlation to start dropping resulting in the\nBRIN indexes scan cost going up.\n\nOn the other hand, if you think you can do better than what was done\nin 7e534adcd, then it would be good to see someone working on it. 
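For intuition, the range-count arithmetic quoted above from brincostestimate() can be transcribed outside the server. This is a sketch, not PostgreSQL's actual code path; the function name and the sample figures (plugged in loosely from numbers appearing elsewhere in this thread) are my own:

```python
import math

def brin_fraction_scanned(heap_pages, pages_per_range, qual_selectivity,
                          correlation):
    # Transcription of the arithmetic quoted above from brincostestimate():
    # what fraction of the table's block ranges the planner expects a
    # BRIN bitmap scan to visit.
    index_ranges = max(math.ceil(heap_pages / pages_per_range), 1)
    # Ranges we'd match if the heap were perfectly ordered on the column.
    minimal_ranges = math.ceil(index_ranges * qual_selectivity)
    corr = abs(correlation)
    if corr < 1.0e-10:
        estimated_ranges = index_ranges
    else:
        estimated_ranges = min(minimal_ranges / corr, index_ranges)
    # CLAMP_PROBABILITY equivalent.
    return min(max(estimated_ranges / index_ranges, 0.0), 1.0)

# The estimate scales with 1/correlation, so a weakly correlated column
# (e.g. 0.19) looks several times more expensive to scan than a strongly
# correlated one (e.g. 0.95) at the same selectivity.
sel = 466 / 315_000_000
print(brin_fraction_scanned(18_000_000, 1000, sel, 0.19))
print(brin_fraction_scanned(18_000_000, 1000, sel, 0.95))
```

Note how a near-zero correlation falls through to the `estimatedRanges = indexRanges` branch, i.e. the planner then assumes every range must be visited.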
I'm\nsure something better can be done. It's just not that easy to do with\nthe scant correlation data we have on the column.\n\nAs for is this a bug or something that's missing from the documents.\nThe documents do mention:\n\n\"BRIN stands for Block Range Index. BRIN is designed for handling very\nlarge tables in which certain columns have some natural correlation\nwith their physical location within the table.\"\n\nhttps://www.postgresql.org/docs/current/brin-intro.html\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 15 Oct 2019 09:48:14 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "Thank you for the thorough and thoughtful reply! Please see below.\n\nOn Mon, Oct 14, 2019 at 3:48 PM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> Another thing which you might want to look at is the correlation\n> column in the pg_stats view for the rec_insert_time column. Previous\n> to 7e534adcd, BRIN index were costed based on the selectivity\n> estimate. There was no accountability towards the fact that the pages\n> for those records might have been spread out over the entire table.\n> Post 7e534adcd, we use the correlation estimate to attempt to estimate\n> how many pages (more specifically \"ranges\") we're likely to hit based\n> on that and the selectivity estimate. This commit intended to fix the\n> issue we had with BRIN indexes being selected far too often. Of\n> course, the correlation is based on the entire table, if there are\n> subsets of the table that are perhaps perfectly correlated, then the\n> planner is not going to know about that. It's possible that some of\n> your older rec_insert_times are spread out far more than the newer\n> ones. 
As a test, you could try creating a new table and copying the\n> records over to it in rec_insert_time order and seeing if the BRIN\n> index is selected for that table (after having performed an ANALYZE).\n>\n> It would be interesting if you could show the pg_stats row for the\n> column so that we can see if the correlation is low.\n>\n\nSo what I said originally (and light bulbs now going off) is that the table\nis insert-only, but it is **pruned (deletes) to the past year of data**. I\nthink this is the key of what I've missed. Once vacuum runs, we have a\nbunch of old physical space being re-used by new inserts, thus botching\nthat good correlation between physical and logical order. So it appears\nthe physical order of the data is indeed well-correlated in recent history,\nbut not so when you go back a bit further. Here are pg_stats:\n\n-[ RECORD 1 ]----------+---------------------------\nschemaname | foo\ntablename | log_table\nattname | rec_insert_time\ninherited | f\nnull_frac | 0\navg_width | 8\nn_distinct | 1.89204e+06\ncorrelation | 0.193951\nmost_common_elems |\nmost_common_elem_freqs |\nelem_count_histogram |\n\nI have not tried creating a fresh table, but if I modify my OP query to\ninstead take a window of 10 days 100 days ago, the BRIN index actually\nperforms really bad... 
identically to the seq scan:\n\nHere is with a seq scan:\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=26822167.70..26822170.60 rows=129 width=120) (actual\ntime=200730.856..200730.910 rows=62 loops=1)\n -> Sort (cost=26822167.70..26822168.02 rows=129 width=104) (actual\ntime=200730.834..200730.837 rows=62 loops=1)\n Sort Key: unique_cases.source, unique_cases.rec_insert_time\n Sort Method: quicksort Memory: 33kB\n -> Subquery Scan on unique_cases (cost=26822160.60..26822163.18\nrows=129 width=104) (actual time=200730.672..200730.763 rows=62 loops=1)\n -> Unique (cost=26822160.60..26822161.89 rows=129\nwidth=124) (actual time=200730.670..200730.753 rows=62 loops=1)\n -> Sort (cost=26822160.60..26822160.92 rows=129\nwidth=124) (actual time=200730.668..200730.686 rows=395 loops=1)\n Sort Key: l.brand_id, l.last_change, l.log_id,\nl.rec_insert_time DESC\n Sort Method: quicksort Memory: 80kB\n -> Nested Loop (cost=0.00..26822156.08\nrows=129 width=124) (actual time=200692.321..200730.121 rows=395 loops=1)\n Join Filter: ((l.category)::text =\nfilter.category)\n Rows Removed by Join Filter: 552210\n -> Seq Scan on small_join_table filter\n (cost=0.00..26.99 rows=1399 width=8) (actual time=0.013..0.179 rows=1399\nloops=1)\n -> Materialize (cost=0.00..26818970.84\nrows=129 width=99) (actual time=136.942..143.440 rows=395 loops=1399)\n -> Seq Scan on log_table l\n (cost=0.00..26818970.20 rows=129 width=99) (actual\ntime=191581.193..200649.013 rows=395 loops=1)\n Filter: ((field1 IS NOT NULL)\nAND (category = 'music'::name) AND (rec_insert_time >= (now() - '100\ndays'::interval)) AND (rec_insert_time <= (now() - '90 days'::interval)))\n Rows Removed by Filter:\n315097963\n Planning Time: 1.541 ms\n Execution Time: 200731.273 ms\n(19 rows)\n\nHere is with the 
forced brin index scan:\n\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=26674201.49..26674204.40 rows=129 width=120) (actual\ntime=200303.118..200303.177 rows=62 loops=1)\n -> Sort (cost=26674201.49..26674201.82 rows=129 width=104) (actual\ntime=200303.093..200303.096 rows=62 loops=1)\n Sort Key: unique_cases.source, unique_cases.rec_insert_time\n Sort Method: quicksort Memory: 33kB\n -> Subquery Scan on unique_cases (cost=26674194.39..26674196.97\nrows=129 width=104) (actual time=200302.918..200303.012 rows=62 loops=1)\n -> Unique (cost=26674194.39..26674195.68 rows=129\nwidth=124) (actual time=200302.914..200302.998 rows=62 loops=1)\n -> Sort (cost=26674194.39..26674194.71 rows=129\nwidth=124) (actual time=200302.911..200302.929 rows=395 loops=1)\n Sort Key: l.brand_id, l.last_change, l.log_id,\nl.rec_insert_time DESC\n Sort Method: quicksort Memory: 80kB\n -> Nested Loop (cost=1245.13..26674189.87\nrows=129 width=124) (actual time=138087.290..200301.925 rows=395 loops=1)\n -> Bitmap Heap Scan on log_table l\n (cost=1244.85..26673801.66 rows=129 width=99) (actual\ntime=138087.083..200298.259 rows=395 loops=1)\n Recheck Cond: ((rec_insert_time >=\n(now() - '100 days'::interval)) AND (rec_insert_time <= (now() - '90\ndays'::interval)))\n Rows Removed by Index Recheck:\n214939302\n Filter: ((field1 IS NOT NULL) AND\n(category = 'music'::name))\n Rows Removed by Filter: 15326889\n Heap Blocks: lossy=13566000\n -> Bitmap Index Scan on\nrec_insert_time_brin_1000 (cost=0.00..1244.81 rows=78608872 width=0)\n(actual time=678.203..678.203 rows=135660000 loops=1)\n Index Cond: ((rec_insert_time\n>= (now() - '100 days'::interval)) AND (rec_insert_time <= (now() - '90\ndays'::interval)))\n -> Index Only Scan using\nsmall_join_table_pkey on small_join_table filter (cost=0.28..3.01 
rows=1\nwidth=8) (actual time=0.005..0.005 rows=1 loops=395)\n Index Cond: (category =\n(l.category)::text)\n Heap Fetches: 395\n Planning Time: 2.031 ms\n Execution Time: 200304.411 ms\n(23 rows)\n\n\n> So you have just 466 rows matching these quals, but the executor had\n> to scan 1.5 million pages to get those and filter out 8.1 million rows\n> on the recheck then 19.8 million on the filter. You've mentioned that\n> the table's heap is 139 GB, which is about 18 million pages. It seems\n> your query would perform much better if you had a btree index such as\n> (category, rec_insert_time) where field1 is not null;,\n>\n\nI agree btree would do the trick for performance, but I was trying to avoid\nthe near-100G overhead of such an index. For the given example, the\nsomewhat improved performance of BRIN may be acceptable to me. However, as\nseen above, it appears that a btree may be my only option given this\nworkload to get reliable performance.\n\n\n> Of course, you've mentioned that you are finding when the plan uses\n> the BRIN index that it executes more quickly, but I think you're going\n> to find BRIN unreliable for tables anything other than INSERT-only\n> tables which the records are always inserted with an ever-increasing\n> or decreasing value in the BRIN indexed column. If you start\n> performing UPDATEs then that's going to create holes that new record\n> will fill and cause the correlation to start dropping resulting in the\n> BRIN indexes scan cost going up.\n>\n\nOr deletes, as in my case.\n\n\n> On the other hand, if you think you can do better than what was done\n> in 7e534adcd, then it would be good to see someone working on it. I'm\n> sure something better can be done. It's just not that easy to do with\n> the scant correlation data we have on the column.\n>\n> As for is this a bug or something that's missing from the documents.\n> The documents do mention:\n>\n> \"BRIN stands for Block Range Index. 
BRIN is designed for handling very\n> large tables in which certain columns have some natural correlation\n> with their physical location within the table.\"\n>\n\nYes, I was aware of this, and perhaps nothing indeed needs to change with\ndocs here given my case. But perhaps it would be worth exploring if there\ncould be more detailed stats on physical vs logical correlation, such as\nwhen ANALYZE takes its samples, noting physical locations as well as\nlogical values, and allowing the correlation to take account of that more\ndetailed analysis. Of course, sounds like a huge amount of work with\nuncertain benefits. In my case, it could be said that if I am always\nquerying the last few days of data, a BRIN index here is perfect, and a\nBTREE is way overkill. That is a real use case to consider. But more\ngenerally, I would drop the BRIN index if I had any other query patterns\nbeyond the few days of data.\n\nHowever, this may be a fallacy. It might be that a few days from now, the\nlast 10 days of data will actually be really fragmented, depending only on\nwhen VACUUM runs.\n\nAs the docs state, I do believe that the only use case that will work\nreally well for BRIN is either a truly insert-only table which is never\npruned (an idea I dislike as a DBA who wants us to keep OLTP data trim and\nimplement data retention policies :), or a table which is routinely\nCLUSTERed!\n\nThanks again for the detailed feedback.\n\nThanks,\nJeremy\n\nThank you for the thorough and thoughtful reply!  Please see below.On Mon, Oct 14, 2019 at 3:48 PM David Rowley <david.rowley@2ndquadrant.com> wrote:Another thing which you might want to look at is the correlation\ncolumn in the pg_stats view for the rec_insert_time column. Previous\nto 7e534adcd, BRIN index were costed based on the selectivity\nestimate. 
There was no accountability towards the fact that the pages\nfor those records might have been spread out over the entire table.\nPost 7e534adcd, we use the correlation estimate to attempt to estimate\nhow many pages (more specifically \"ranges\") we're likely to hit based\non that and the selectivity estimate. This commit intended to fix the\nissue we had with BRIN indexes being selected far too often.  Of\ncourse, the correlation is based on the entire table, if there are\nsubsets of the table that are perhaps perfectly correlated, then the\nplanner is not going to know about that. It's possible that some of\nyour older rec_insert_times are spread out far more than the newer\nones.  As a test, you could try creating a new table and copying the\nrecords over to it in rec_insert_time order and seeing if the BRIN\nindex is selected for that table (after having performed an ANALYZE).\n\nIt would be interesting if you could show the pg_stats row for the\ncolumn so that we can see if the correlation is low.So what I said originally (and light bulbs now going off) is that the table is insert-only, but it is **pruned (deletes) to the past year of data**.  I think this is the key of what I've missed.  Once vacuum runs, we have a bunch of old physical space being re-used by new inserts, thus botching that good correlation between physical and logical order.  So it appears the physical order of the data is indeed well-correlated in recent history, but not so when you go back a bit further.  
Here are pg_stats:-[ RECORD 1 ]----------+---------------------------schemaname             | footablename              | log_tableattname                | rec_insert_timeinherited              | fnull_frac              | 0avg_width              | 8n_distinct             | 1.89204e+06correlation            | 0.193951most_common_elems      |most_common_elem_freqs |elem_count_histogram   |I have not tried creating a fresh table, but if I modify my OP query to instead take a window of 10 days 100 days ago, the BRIN index actually performs really bad... identically to the seq scan:Here is with a seq scan:                                                                                                           QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- WindowAgg  (cost=26822167.70..26822170.60 rows=129 width=120) (actual time=200730.856..200730.910 rows=62 loops=1)   ->  Sort  (cost=26822167.70..26822168.02 rows=129 width=104) (actual time=200730.834..200730.837 rows=62 loops=1)         Sort Key: unique_cases.source, unique_cases.rec_insert_time         Sort Method: quicksort  Memory: 33kB         ->  Subquery Scan on unique_cases  (cost=26822160.60..26822163.18 rows=129 width=104) (actual time=200730.672..200730.763 rows=62 loops=1)               ->  Unique  (cost=26822160.60..26822161.89 rows=129 width=124) (actual time=200730.670..200730.753 rows=62 loops=1)                     ->  Sort  (cost=26822160.60..26822160.92 rows=129 width=124) (actual time=200730.668..200730.686 rows=395 loops=1)                           Sort Key: l.brand_id, l.last_change, l.log_id, l.rec_insert_time DESC                           Sort Method: quicksort  Memory: 80kB                           ->  Nested Loop  (cost=0.00..26822156.08 rows=129 width=124) (actual time=200692.321..200730.121 rows=395 
loops=1)                                 Join Filter: ((l.category)::text = filter.category)                                 Rows Removed by Join Filter: 552210                                 ->  Seq Scan on small_join_table filter  (cost=0.00..26.99 rows=1399 width=8) (actual time=0.013..0.179 rows=1399 loops=1)                                 ->  Materialize  (cost=0.00..26818970.84 rows=129 width=99) (actual time=136.942..143.440 rows=395 loops=1399)                                       ->  Seq Scan on log_table l  (cost=0.00..26818970.20 rows=129 width=99) (actual time=191581.193..200649.013 rows=395 loops=1)                                             Filter: ((field1 IS NOT NULL) AND (category = 'music'::name) AND (rec_insert_time >= (now() - '100 days'::interval)) AND (rec_insert_time <= (now() - '90 days'::interval)))                                             Rows Removed by Filter: 315097963 Planning Time: 1.541 ms Execution Time: 200731.273 ms(19 rows)Here is with the forced brin index scan:                                                                                                    QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- WindowAgg  (cost=26674201.49..26674204.40 rows=129 width=120) (actual time=200303.118..200303.177 rows=62 loops=1)   ->  Sort  (cost=26674201.49..26674201.82 rows=129 width=104) (actual time=200303.093..200303.096 rows=62 loops=1)         Sort Key: unique_cases.source, unique_cases.rec_insert_time         Sort Method: quicksort  Memory: 33kB         ->  Subquery Scan on unique_cases  (cost=26674194.39..26674196.97 rows=129 width=104) (actual time=200302.918..200303.012 rows=62 loops=1)               ->  Unique  (cost=26674194.39..26674195.68 rows=129 width=124) (actual time=200302.914..200302.998 rows=62 loops=1)                     ->  Sort  
(cost=26674194.39..26674194.71 rows=129 width=124) (actual time=200302.911..200302.929 rows=395 loops=1)                           Sort Key: l.brand_id, l.last_change, l.log_id, l.rec_insert_time DESC                           Sort Method: quicksort  Memory: 80kB                           ->  Nested Loop  (cost=1245.13..26674189.87 rows=129 width=124) (actual time=138087.290..200301.925 rows=395 loops=1)                                 ->  Bitmap Heap Scan on log_table l  (cost=1244.85..26673801.66 rows=129 width=99) (actual time=138087.083..200298.259 rows=395 loops=1)                                       Recheck Cond: ((rec_insert_time >= (now() - '100 days'::interval)) AND (rec_insert_time <= (now() - '90 days'::interval)))                                       Rows Removed by Index Recheck: 214939302                                       Filter: ((field1 IS NOT NULL) AND (category = 'music'::name))                                       Rows Removed by Filter: 15326889                                       Heap Blocks: lossy=13566000                                       ->  Bitmap Index Scan on rec_insert_time_brin_1000  (cost=0.00..1244.81 rows=78608872 width=0) (actual time=678.203..678.203 rows=135660000 loops=1)                                             Index Cond: ((rec_insert_time >= (now() - '100 days'::interval)) AND (rec_insert_time <= (now() - '90 days'::interval)))                                 ->  Index Only Scan using small_join_table_pkey on small_join_table filter  (cost=0.28..3.01 rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=395)                                       Index Cond: (category = (l.category)::text)                                       Heap Fetches: 395 Planning Time: 2.031 ms Execution Time: 200304.411 ms(23 rows) \nSo you have just 466 rows matching these quals, but the executor had\nto scan 1.5 million pages to get those and filter out 8.1 million rows\non the recheck then 19.8 million on the filter. 
You've mentioned that\nthe table's heap is 139 GB, which is about 18 million pages.  It seems\nyour query would perform much better if you had a btree index such as\n(category, rec_insert_time) where field1 is not null;,I agree btree would do the trick for performance, but I was trying to avoid the near-100G overhead of such an index.  For the given example, the somewhat improved performance of BRIN may be acceptable to me.  However, as seen above, it appears that a btree may be my only option given this workload to get reliable performance. Of course, you've mentioned that you are finding when the plan uses\nthe BRIN index that it executes more quickly, but I think you're going\nto find BRIN unreliable for tables anything other than INSERT-only\ntables which the records are always inserted with an ever-increasing\nor decreasing value in the BRIN indexed column.  If you start\nperforming UPDATEs then that's going to create holes that new record\nwill fill and cause the correlation to start dropping resulting in the\nBRIN indexes scan cost going up.Or deletes, as in my case. \nOn the other hand, if you think you can do better than what was done\nin 7e534adcd, then it would be good to see someone working on it. I'm\nsure something better can be done. It's just not that easy to do with\nthe scant correlation data we have on the column.\n\nAs for is this a bug or something that's missing from the documents.\nThe documents do mention:\n\n\"BRIN stands for Block Range Index. BRIN is designed for handling very\nlarge tables in which certain columns have some natural correlation\nwith their physical location within the table.\"Yes, I was aware of this, and perhaps nothing indeed needs to change with docs here given my case.  
But perhaps it would be worth exploring if there could be more detailed stats on physical vs logical correlation, such as when ANALYZE takes its samples, noting physical locations as well as logical values, and allowing the correlation to take account of that more detailed analysis. Of course, that sounds like a huge amount of work with uncertain benefits. In my case, it could be said that if I am always querying the last few days of data, a BRIN index here is perfect, and a BTREE is way overkill. That is a real use case to consider. But more generally, I would drop the BRIN index if I had any other query patterns beyond the few days of data.\n\nHowever, this may be a fallacy. It might be that a few days from now, the last 10 days of data will actually be really fragmented, depending only on when VACUUM runs.\n\nAs the docs state, I do believe that the only use case that will work really well for BRIN is either a truly insert-only table which is never pruned (an idea I dislike as a DBA who wants us to keep OLTP data trim and implement data retention policies :), or a table which is routinely CLUSTERed!\n\nThanks again for the detailed feedback.\n\nThanks,\nJeremy", "msg_date": "Tue, 15 Oct 2019 11:05:13 -0500", "msg_from": "Jeremy Finzel <finzelj@gmail.com>", "msg_from_op": true, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "Thanks for closing the loop on the data correlation question. I've been\nplaying with BRIN indexes on a log table of sorts and this thread helped\nclear up some of the behavior I have been seeing.\n\nI am curious, would a partial btree index fit your needs?
Perhaps the\nmaintenance overhead is too significant or this is too off-the-wall, but a\ndaily job to create a new index and drop the old concurrently could give the\nperformance you need while still saving the extra disk space of the full\nbtree on the timestamp.\n\nCREATE INDEX CONCURRENTLY log_table_rec_insert_time_partial_10_04 ON\nlog_table USING btree ( rec_insert_time ) WHERE rec_insert_time >\n'2019-10-04'::DATE;\nDROP INDEX CONCURRENTLY IF EXISTS log_table_rec_insert_time_partial_10_03;\n\nI would consider including the category column as well, but I suspect that\nwould increase the size of the index significantly. Of course, this depends\non the query planner evaluating that \"l.rec_insert_time >= now() - interval\n'10 days'\" and determining that the index fulfills the need.", "msg_date": "Tue, 15 Oct 2019 10:43:50 -0600", "msg_from": "Michael Lewis <mlewis@entrata.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "This reminds me of an issue I reported several years ago where Btree index\nscans were chosen over seq scan of a large, INSERT-only table due to very high\ncorrelation, but performed poorly. I concluded that use of the high \"large\nscale\" correlation on a large 50+GB table caused the planner to fail to account\nfor a larger number of pages being read nonsequentially (the opposite of your\nissue). I think that's because we were INSERTing data which was at least\napproximately sorted on record END time, and the index was on record START\ntime. For a large table with a week's data, the correlation of \"start time\"\nwas still very high (0.99995). But scanning the index ends up reading pages\nnonsequentially, and also multiple visits per page.\n\nI eked out a patch which made \"correlation\" a per-index statistic rather than\na per-column one. That means the planner could distinguish between a\nfreshly-built btree index and a fragmented one. (At the time, there was a\nhypothesis that our issue was partially due to repeated values of the index\ncolumns.)
It didn't occur to me at the time, but that would also allow\ncreating numerous, partial BRIN indices, each of which would have separate\ncorrelation computed over just their \"restricted range\", which *might* also\nhandle your case, depending on how packed your data is.\n\nhttps://www.postgresql.org/message-id/flat/20170707234119.GN17566%40telsasoft.com#fdcbebc342b8fb9ad0ff293913f54d11\n\nOn Tue, Oct 15, 2019 at 11:05:13AM -0500, Jeremy Finzel wrote:\n> I do believe that the only use case that will work really well for BRIN is\n> either a truly insert-only table which is never pruned ... or a table which\n> is routinely CLUSTERed!\n\nOr partitioned table, which for large data sets I highly recommend instead of\nDELETE.\n\nJustin\n\n\n", "msg_date": "Tue, 15 Oct 2019 17:40:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "On Wed, 16 Oct 2019 at 05:05, Jeremy Finzel <finzelj@gmail.com> wrote:\n> But perhaps it would be worth exploring if there could be more detailed stats on physical vs logical correlation, such as when ANALYZE takes its samples, noting physical locations as well as logical values, and allowing the correlation to take account of that more detailed analysis. Of course, sounds like a huge amount of work with uncertain benefits.\n\nYes. I think improving the statistics could be beneficial. 
It does\nappear like the single value for the entire column is not fine-grained\nenough for your use case.\n\n> As the docs state, I do believe that the only use case that will work really well for BRIN is either a truly insert-only table which is never pruned (an idea I dislike as a DBA who wants us to keep OLTP data trim and implement data retention policies :), or a table which is routinely CLUSTERed!\n\nHave you considered partitioning the table so that the retention\npolicy could be implemented by dropping a partition rather than\nperforming a bulk DELETE?\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 16 Oct 2019 11:43:52 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" }, { "msg_contents": "On Wed, 16 Oct 2019 at 11:40, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> It didn't occur to me at the time, but that would also allow\n> creating numerous, partial BRIN indices, each of which would have separate\n> correlation computed over just their \"restricted range\", which *might* also\n> handle your case, depending on how packed your data is.\n\nPerhaps I've misunderstood you, but the correlation that's used is per\ncolumn, not per index. The only way to have it calculate multiple\ncorrelations would be to partition the table. There'd be a correlation\nfor the column on each partition that way.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 16 Oct 2019 11:46:49 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BRIN index which is much faster never chosen by planner" } ]
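The pruning behaviour discussed in this thread can be sketched with a toy model (Python, with invented names; this is not PostgreSQL code): a BRIN index stores only a (min, max) summary per block range, so a range query can skip a range only when the summary proves it cannot contain matches. With well-correlated data a narrow query touches one range; once deletes/updates scramble the physical order, nearly every range must be scanned.

```python
import random

def build_brin(values, pages_per_range=100):
    """Summarize consecutive 'block ranges' by their (min, max)."""
    return [(min(values[i:i + pages_per_range]), max(values[i:i + pages_per_range]))
            for i in range(0, len(values), pages_per_range)]

def ranges_to_scan(brin, lo, hi):
    """A block range must be scanned unless its summary rules out overlap."""
    return sum(1 for mn, mx in brin if mn <= hi and mx >= lo)

# Correlated data (ever-increasing insert order): BRIN prunes almost everything.
correlated = list(range(10_000))
# Same values after the physical ordering is destroyed: BRIN prunes almost nothing.
shuffled = correlated[:]
random.Random(42).shuffle(shuffled)

print(ranges_to_scan(build_brin(correlated), 9_000, 9_099))  # 1 of 100 ranges
print(ranges_to_scan(build_brin(shuffled), 9_000, 9_099))    # nearly all 100 ranges
```

This is the mechanism behind the advice above: routine DELETEs leave holes that refill out of order and erode the per-range summaries, while an append-only or freshly CLUSTERed table keeps the scan cheap.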
[ { "msg_contents": "I've written some PostgreSQL C-Extensions (for the first time...) and they\nwork as expected.\n\nBut now I want to call other functions from inside the C-Extensions (but not\nvia SPI_execute),\nfor example \"regexp_match()\" or from other extensions like PostGIS\n\"ST_POINT\" etc...\n\nI think \"fmgr\" is the key - but I didn't find any examples.\n\nGreetings from Berlin\n-Stefan Wolf-\n", "msg_date": "Fri, 11 Oct 2019 10:15:31 +0200", "msg_from": "\"Stefan Wolf\" <sw@zpmt.de>", "msg_from_op": true, "msg_subject": "PostgreSQL, C-Extension, calling other Functions" }, { "msg_contents": "Hi\n\nOn Fri 11. 10. 2019 at 10:15, Stefan Wolf <sw@zpmt.de> wrote:\n\n> I've written some PostgreSQL C-Extensions (for the first time...) and they\n> work as expected.\n>\n> But now I want to call other functions from inside the C-Extensions (but\n> not\n> via SPI_execute),\n> for example \"regexp_match()\" or from other extensions like PostGIS\n> \"ST_POINT\" etc...\n>\n> I think \"fmgr\" is the key - but I didn't find any examples.\n>\n\nsearch DirectFunctionCall\n\n    PG_RETURN_NUMERIC(\n        DirectFunctionCall1(float8_numeric, Float8GetDatumFast(result)));\n\nRegards\n\nPavel\n\n> Greetings from Berlin\n> -Stefan Wolf-\n", "msg_date": "Fri, 11 Oct 2019 10:22:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL, C-Extension, calling other Functions" }, { "msg_contents": ">>>>> \"Stefan\" == Stefan Wolf <sw@zpmt.de> writes:\n\n Stefan> I've written some PostgreSQL C-Extensions (for the first\n Stefan> time...) and they work as expected.\n\n Stefan> But now I want to call other functions from inside the\n Stefan> C-Extensions (but not via SPI_execute), for example\n Stefan> \"regexp_match()\" or from other extensions like PostGIS\n Stefan> \"ST_POINT\" etc...\n\n Stefan> I think \"fmgr\" is the key - but I didn't find any examples.\n\nThere are a number of levels of difficulty here depending on which\nspecific functions you need to call and whether you need to handle\nNULLs.\n\nThe simplest case is using DirectFunctionCall[N][Coll] to call a builtin\n(internal-language) function. This _only_ works for functions that don't\nrequire access to an flinfo; many functions either need the flinfo to\nget parameter type info or to use fn_extra as a per-callsite cache.\n(Also there's no guarantee that a function that works without flinfo now\nwill continue to do so in future versions.) One restriction of this\nmethod is that neither parameters nor results may be NULL.\n\nThe next step up from that is getting a function's Oid and using\nOidFunctionCall[N][Coll]. This can call functions in any language,\nincluding dynamic-loaded ones, but it can't handle polymorphic\nfunctions.
(Overloaded functions are fine, since each overload has its\nown Oid.) This is still fairly simple but is inefficient: it constructs\nand populates the flinfo, calls it once, then abandons it (it's not even\nfreed, it's up to the calling memory context to do that). If you're\ngoing to be invoking a function repeatedly, it's worth avoiding this\none. This still has the restriction of no NULLs either in or out.\n\nThe next step from that is calling fmgr_info and FunctionCall[N][Coll]\nseparately (which is just breaking down OidFunctionCall into its parts);\nthis allows you to re-use the flinfo for multiple calls. Still no NULLs\nallowed, but it's possible to use polymorphic functions if you try hard\nenough (it's not documented, but it requires consing up a faked\nexpression tree and using fmgr_info_set_expr).\n\nFinally, if none of the above apply, you're at the level where you\nshould seriously consider using SPI regardless; but if you don't want to\ndo that, you can use fmgr_info, InitFunctionCallInfoData and\nFunctionCallInvoke.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Fri, 11 Oct 2019 16:28:15 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL, C-Extension, calling other Functions" } ]
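The calling levels described above reduce to one trade-off: a one-shot call that builds the call-site info and throws it away every time, versus building it once with fmgr_info and reusing it. Here is a toy model of that flow (Python, illustrative only; DirectFunctionCall/OidFunctionCall/FunctionCall are the real C names, everything else below is invented):

```python
REGISTRY = {101: ("int4pl", lambda a, b: a + b)}  # hypothetical oid -> function
setups = 0  # how many times we pay the "build an flinfo" setup cost

class FmgrInfo:
    """Toy per-call-site info, standing in for the C FmgrInfo/flinfo."""
    def __init__(self, oid):
        global setups
        setups += 1
        self.name, self.fn = REGISTRY[oid]

def oid_function_call(oid, *args):
    """One-shot style (like OidFunctionCallN): build info, call once, abandon it."""
    return FmgrInfo(oid).fn(*args)

def fmgr_info(oid):
    """Reusable style: build the info once..."""
    return FmgrInfo(oid)

def function_call(finfo, *args):
    """...and call through it many times (like FunctionCallN)."""
    return finfo.fn(*args)

assert [oid_function_call(101, i, i) for i in range(3)] == [0, 2, 4]
assert setups == 3            # paid the setup cost three times

finfo = fmgr_info(101)
assert [function_call(finfo, i, i) for i in range(3)] == [0, 2, 4]
assert setups == 4            # paid it once for three calls
```

The same economy applies in real extension code: when a looked-up function will be invoked repeatedly, cache the FmgrInfo (for example across calls at one call site) rather than going through the one-shot lookup in a loop.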
[ { "msg_contents": "Hi,\n\n\n\nI am having issues with PAM auth :\n\nit works, password are correctly checked, unknown users cannot access,\nknown user can, everything looks good\n\n\n\nBut, it always log an error by default even if auth is succesful:\n\n2019-10-10 15:00:46.481 CEST [6109] LOG: pam_authenticate failed:\nAuthentication failure\n2019-10-10 15:00:46.481 CEST [6109] FATAL: PAM authentication failed for\nuser \"ylacancellera\"\n2019-10-10 15:00:46.481 CEST [6109] DETAIL: Connection matched pg_hba.conf\nline 5: \"local all all pam\"\n2019-10-10 15:00:46.481 CEST [6109] LOG: could not send data to client:\nBroken pipe\n\n\nAnd if auth is unsuccessful, it will log that very same message twice\n\n\nMy pg_hba is basically :\n\nlocal all postgres peer\n\nlocal all all pam\n\n\nAny idea about this ? I suspect something is wrong\n\nThank you,\n", "msg_date": "Fri, 11 Oct 2019 10:38:58 +0200", "msg_from": "La Cancellera Yoann <lacancellera.yoann@gmail.com>", "msg_from_op": true, "msg_subject": "Issues with PAM : log that it failed,\n whether it actually failed or not" }, { "msg_contents": "La Cancellera Yoann <lacancellera.yoann@gmail.com> writes:\n> I am having issues with PAM auth :\n> it works, password are correctly checked, unknown users cannot access,\n> known user can, everything looks good\n> But, it always log an error by default even if auth is succesful:\n> And if auth is unsuccessful, it will log that very same message twice\n\nThose aren't errors, they're just log events.\n\nIf you're using psql to connect, the extra messages aren't surprising,\nbecause psql will first try to connect without a password, and only\nif it gets a failure that indicates that a password is needed will\nit prompt the user for a password (so two connection attempts occur,\neven if the second one is successful). You can override that default\nbehavior with the -W switch, and I bet that will make the extra\nlog messages go away.\n\nHaving said that, using LOG level for unsurprising auth failures\nseems excessively chatty.
More-commonly-used auth methods aren't\nthat noisy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Oct 2019 10:08:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues with PAM : log that it failed,\n whether it actually failed or not" }, { "msg_contents": "[ redirecting to pgsql-hackers ]\n\nI wrote:\n> La Cancellera Yoann <lacancellera.yoann@gmail.com> writes:\n>> I am having issues with PAM auth :\n>> it works, password are correctly checked, unknown users cannot access,\n>> known user can, everything looks good\n>> But, it always log an error by default even if auth is succesful:\n>> And if auth is unsuccessful, it will log that very same message twice\n\n> Those aren't errors, they're just log events.\n\n> If you're using psql to connect, the extra messages aren't surprising,\n> because psql will first try to connect without a password, and only\n> if it gets a failure that indicates that a password is needed will\n> it prompt the user for a password (so two connection attempts occur,\n> even if the second one is successful). You can override that default\n> behavior with the -W switch, and I bet that will make the extra\n> log messages go away.\n\n> Having said that, using LOG level for unsurprising auth failures\n> seems excessively chatty. More-commonly-used auth methods aren't\n> that noisy.\n\nI took a closer look at this and realized that the problem is that\nthe PAM code doesn't support our existing convention of not logging\nanything about connections wherein the client side disconnects when\nchallenged for a password. 
0001 attached fixes that, not in a\nterribly nice way perhaps, but the PAM code is already relying on\nstatic variables for communication :-(.\n\nAlso, 0002 adjusts some messages in the same file to match project\ncapitalization conventions.\n\nBarring objections, I propose to back-patch 0001 but apply 0002\nto HEAD only.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 04 Nov 2019 12:01:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issues with PAM : log that it failed,\n whether it actually failed or not" } ]
[ { "msg_contents": "Dear Hackers,\n\nThis propose a way to develop global temporary tables in PostgreSQL.\n\nI noticed that there is an \"Allow temporary tables to exist as empty by default in all sessions\" in the postgresql todolist.\nhttps://wiki.postgresql.org/wiki/Todo <https://wiki.postgresql.org/wiki/Todo>\n\nIn recent years, PG community had many discussions about global temp table (GTT) support. Previous discussion covered the following topics: \n(1)\tThe main benefit or function: GTT offers features like “persistent schema, ephemeral data”, which avoids catalog bloat and reduces catalog vacuum. \n(2)\tWhether follows ANSI concept of temporary tables\n(3)\tHow to deal with statistics, single copy of schema definition, relcache\n(4)\tMore can be seen in https://www.postgresql.org/message-id/73954ab7-44d3-b37b-81a3-69bdcbb446f7%40postgrespro.ru\n(5)\tA recent implementation and design from Konstantin Knizhnik covered many functions of GTT: https://www.postgresql.org/message-id/attachment/103265/global_private_temp-1.patch <https://www.postgresql.org/message-id/attachment/103265/global_private_temp-1.patch>\n\nHowever, as pointed by Konstantin himself, the implementation still needs functions related to CLOG, vacuum, and MVCC visibility.\n\nWe developed GTT based on PG 11 and included most needed features, such as how to deal with concurrent DDL and DML operations, how to handle vacuum and too old relfrozenxids, and how to store and access GTT statistics. \n\nThis design followed many suggestions from previous discussion in community. Here are some examples:\n\t“have a separate 'relpersistence' setting for global temp tables…by having the backend id in all filename…. From Andres Freund\n\tUse session memory context to store information related to GTT. 
From Pavel Stehule\n\t“extend the relfilenode mapper to support a backend-local non-persistent relfilenode map that's used to track temp table and index relfilenodes…” from Craig Ringer\n\nOur implementation creates one record in pg_class for GTT’s schema definition. When rows are first inserted into the GTT in a session, a session specific file is created to store the GTT’s data. Those files are removed when the session ends. We maintain the GTT’s statistics in session local memory. DDL operations, such as DROP table or CREATE INDEX, can be executed on a GTT only by one session, while no other sessions insert any data into the GTT before or it is already truncated. This also avoids the concurrency of DML and DDL operations on GTT. We maintain a session level oldest relfrozenxids for GTT. This way, autovacuum or vacuum can truncate CLOG and increase global relfrozenxids based on all tables’ relfrozenxids, including GTT’s. \nThe follows summarize the main design and implementation: \n\tSyntax: ON COMMIT PRESERVE ROWS and ON COMMIT DELETE ROWS\n\tData storage and buffering follows the same way as local temp table with a relfilenode including session id.\n\tA hash table(A) in shared memory is used to track sessions and their usage of GTTs and to serialize DDL and DML operations. \n\tAnother hash table(B) in session memory is introduced to record storage files for GTTs and their indexes. When a session ends, those files are removed. \n\tThe same hash table(B) in session memory is used to record the relfrozenxids of each GTT. The oldest one is stored in myproc so that autovacuum and vacuum may use it to determine global oldest relfrozenxids and truncate clog. \n\tThe same hash table(B) in session memory stores GTT’s session level statistics, It is generated during the operations of vacuum and analyze, and used by SQL optimizer to create execution plan. \n\tSome utility functions are added for DBA to manage GTTs. 
\n\tTRUNCATE command on a GTT behaves differently from that on a normal table. The command deletes the data immediately but keeps relfilenode using lower level table lock, RowExclusiveLock, instead of AccessExclusiveLock. \n\tMain limits of this version or future improvement: need suggestions from community: \n\t\t1 VACUUM FULL and CLUSTER are not supported; any operations which may change relfilenode are disabled to GTT.\n\t\t2 Sequence column is not supported in GTT for now.\n\t\t3 Users defined statistics is not supported.\n\n\nDetails:\n\nRequirement\nThe features list about global temp table:\n\t1. global temp table (ON COMMIT clause is omitted, SQL specifies that the default behavior is ON COMMIT DELETE ROWS)\n\t2. support with on commit DELETE ROWS\n\t3. support with on commit PRESERVE ROWS\n\t4. not support ON COMMIT DROP\n\nFeature description\nGlobal temp tables are defined just once and automatically exist (starting with empty contents) in every session that needs them.\nGlobal temp table, each session use local buffer, read or write independent data files.\nUse on commit DELETE ROWS for a transaction-specific global temp table. This is the default. database will truncate the table (delete all its rows) after each commit.\nUse on commit PRESERVE ROWS Specify PRESERVE ROWS for a session-specific global temp table. databse will truncate the table (delete all its rows) when you terminate the session.\n\ndesign\nGlobal temp tables are designed based on local temp table(buffer and storage files). \nBecause the catalog of global temp table is shared between sessions but the data is not shared, we need to build some new mechanisms to manage non-shared data and statistics for those data.\n\n1. catalog\n1.1 relpersistence\ndefine RELPERSISTENCEGLOBALTEMP 'g'\nMark global temp table in pg_class relpersistence to 'T'. 
The relpersistence of the index created on the global temp table is also set to 'T'.\n\n1.2 on commit clause\nIn a local temp table, on commit DELETE ROWS and on commit PRESERVE ROWS are not stored in the catalog, but GTT needs this.\nStore a bool value oncommitdelete_rows in reloptions only for GTT and share it with other sessions.\n\n2. gram.y\nGlobal temp table already has a syntax tree. We just need to remove the warning message \"GLOBAL is deprecated in temporary table creation\" and mark relpersistence = RELPERSISTENCEGLOBALTEMP.\n\n3. STORAGE\n3.1. active_gtt_shared_hash\nCreate a hash table in shared memory to trace the GTT files that are initialized in each session.\nEach hash entry contains a bitmap that records the backendid of the initialized GTT file.\nWith this hash table, we know which backends/sessions are using this GTT.\nIt will be used in GTT's DDL.\n\n3.2. gtt_storage_local_hash\nIn each backend, create a local hashtable gtt_storage_local_hash to track GTT storage files and statistics.\n1). GTT storage file tracking\nWhen one session inserts data into a GTT for the first time, record it in the local hash.\n2). normal cleanup of GTT files\nUse beforeshmemexit to ensure that all files for the session's GTTs are deleted when the session exits.\n3). abnormal situation file cleanup\nWhen a backend exits abnormally (such as an oom kill), the startup process runs recovery before accepting connections. The startup process checks and removes all GTT files before WAL redo.\n\n4 DDL\n4.1 DROP GTT\nOne GTT table is allowed to be deleted when only the current session uses it.
After get the AccessExclusiveLock of the GTT table, use active_gtt_shared_hash to check and make sure that.\n\n4.2 ALTER GTT\nSame as drop GTT.\n\n4.3 CREATE INDEX ON GTT, DROP INDEX ON GTT\nSame as drop GTT.\n\n4.4 TRUNCATE GTT\nThe truncate GTT use RowExclusiveLock, not AccessExclusiveLock, Because truncate only cleans up local data file and local buffers in this session.\nAlso, truncate immediately deletes the data file without changing the relfilenode of the GTT table. btw, I'm not sure the implementation will be acceptable to the community.\n\n4.5 create index on GTT\nSame as drop GTT.\n\n4.6 OTHERS\nAny table operations about GTT that need to change relfilenode are disabled, such as vacuum full/cluster.\n\n5. The statistics of GTT\n\t1 relpages reltuples relallvisible frozenxid minmulti from pg_class\n\t2 The statistics for each column from pg_statistic\nAll the above information will be stored to gtt_storage_local_hash.\nWhen vacuum or analyze GTT's statistic will update, and the planner will use them. Of course, statistics only contain data within the current session.\n\n5.1. View global temp table statistics\nProvide pggttattstatistic get column statistics for GTT. Provide pggtt_relstats to rel statistics for GTT.\nThese functions are implemented in a plug-in, without add system view or function.\n\n6. autovacuum\nAutovacuum skips all GTT.\n\n7. vacuum(frozenxid push, clog truncate)\nThe GTT data file contains transaction information. Queries for GTT data rely on transaction information such as clog. That's can not be vacuumed automatically by vacuum.\n7.1 The session level gtt oldest frozenxid\nWhen one GTT been create or remove, record the session level oldest frozenxid and put it into MyProc. \n\n7.1 vacuum\nWhen vacuum push the db's frozenxid(vacupdatedatfrozenxid), need to consider the GTT. It needs to calculate the transactions required for the GTT(search all MyPorc), to avoid the clog required by GTT being cleaned.\n\n8. 
Parallel query\nPlanner does not produce parallel query plans for SQL related to global temp table.\n\n9. Operability\nProvide pggttattachedpid lists all the pids that are using the GTT. Provide pglistgttrelfrozenxids lists the session level oldest frozenxid of using GTT.\nThese functions are implemented in a plug-in, without add system view or function.\nDBA can use the above function and pgterminatebackend to force the cleanup of \"too old\" GTT tables and sessions.\n\n10. Limitations and todo list\n10.1. alter GTT\n10.2. pg_statistic_ext\n10.3. remove GTT's relfilenode can not change limit.\ncluster/vacuum full, optimize truncate gtt.\n10.4. SERIAL column type\nThe GTT from different sessions share a sequence(SERIAL type).\nNeed each session use the sequence independently.\n10.5. Locking optimization for GTT.\n10.6 materialized views is not support on GTT.\n\n\nWhat do you thinking about this proposal?\nLooking forward to your feedback.\n\nThanks!\n\n\nregards\n\n--\nZeng Wenjing\nAlibaba Group-Database Products Business Unit\n\n\n\nDear Hackers,This propose a way to develop global temporary tables in PostgreSQL.I noticed that there is an \"Allow temporary tables to exist as empty by default in all sessions\" in the postgresql todolist.https://wiki.postgresql.org/wiki/TodoIn recent years, PG community had many discussions about global temp table (GTT) support. Previous discussion covered the following topics: (1) The main benefit or function: GTT offers features like “persistent schema, ephemeral data”, which avoids catalog bloat and reduces catalog vacuum. 
(2) Whether follows ANSI concept of temporary tables(3) How to deal with statistics, single copy of schema definition, relcache(4) More can be seen in https://www.postgresql.org/message-id/73954ab7-44d3-b37b-81a3-69bdcbb446f7%40postgrespro.ru(5) A recent implementation and design from Konstantin Knizhnik covered many functions of GTT: https://www.postgresql.org/message-id/attachment/103265/global_private_temp-1.patchHowever, as pointed by Konstantin himself, the implementation still needs functions related to CLOG, vacuum, and MVCC visibility.We developed GTT based on PG 11 and included most needed features, such as how to deal with concurrent DDL and DML operations, how to handle vacuum and too old relfrozenxids, and how to store and access GTT statistics. This design followed many suggestions from previous discussion in community. Here are some examples: “have a separate 'relpersistence' setting for global temp tables…by having the backend id in all filename….   From Andres Freund Use session memory context to store information related to GTT.   From Pavel Stehule “extend the relfilenode mapper to support a backend-local non-persistent relfilenode map that's used to track temp table and index relfilenodes…” from Craig RingerOur implementation creates one record in pg_class for GTT’s schema definition. When rows are first inserted into the GTT in a session, a session specific file is created to store the GTT’s data. Those files are removed when the session ends. We maintain the GTT’s statistics in session local memory. DDL operations, such as DROP table or CREATE INDEX, can be executed on a GTT only by one session, while no other sessions insert any data into the GTT before or it is already truncated. This also avoids the concurrency of DML and DDL operations on GTT. We maintain a session level oldest relfrozenxids for GTT. This way, autovacuum or vacuum can truncate CLOG and increase global relfrozenxids based on all tables’ relfrozenxids, including GTT’s. 
The follows summarize the main design and implementation:  Syntax: ON COMMIT PRESERVE ROWS and ON COMMIT DELETE ROWS Data storage and buffering follows the same way as local temp table with a relfilenode including session id. A hash table(A) in shared memory is used to track sessions and their usage of GTTs and to serialize DDL and DML operations.  Another hash table(B) in session memory is introduced to record storage files for GTTs and their indexes. When a session ends, those files are removed.  The same hash table(B) in session memory is used to record the relfrozenxids of each GTT. The oldest one is stored in myproc so that autovacuum and vacuum may use it to determine global oldest relfrozenxids and truncate clog.  The same hash table(B) in session memory stores GTT’s session level statistics, It is generated during the operations of vacuum and analyze, and used by SQL optimizer to create execution plan.  Some utility functions are added for DBA to manage GTTs.  TRUNCATE command on a GTT behaves differently from that on a normal table. The command deletes the data immediately but keeps relfilenode using lower level table lock, RowExclusiveLock, instead of  AccessExclusiveLock.  Main limits of this version or future improvement: need suggestions from community:  1 VACUUM FULL and CLUSTER are not supported; any operations which may change relfilenode are disabled to GTT. 2 Sequence column is not supported in GTT for now. 3 Users defined statistics is not supported.Details:RequirementThe features list about global temp table: 1. global temp table (ON COMMIT clause is omitted, SQL specifies that the default behavior is ON COMMIT DELETE ROWS) 2. support with on commit DELETE ROWS 3. support with on commit PRESERVE ROWS 4. 
not support ON COMMIT DROP\n\nFeature description\n\nGlobal temp tables are defined just once and automatically exist (starting with empty contents) in every session that needs them. With a global temp table, each session uses local buffers and reads or writes independent data files.\n\nUse ON COMMIT DELETE ROWS for a transaction-specific global temp table. This is the default: the database will truncate the table (delete all its rows) after each commit.\n\nSpecify ON COMMIT PRESERVE ROWS for a session-specific global temp table: the database will truncate the table (delete all its rows) when you terminate the session.\n\nDesign\n\nGlobal temp tables are designed based on local temp tables (buffers and storage files). Because the catalog of a global temp table is shared between sessions but the data is not, we need to build some new mechanisms to manage the non-shared data and its statistics.\n\n1. catalog\n\n1.1 relpersistence\n\ndefine RELPERSISTENCE_GLOBAL_TEMP 'g'\nMark a global temp table in pg_class by setting relpersistence to RELPERSISTENCE_GLOBAL_TEMP. The relpersistence of an index created on a global temp table is set the same way.\n\n1.2 ON COMMIT clause\n\nFor a local temp table, ON COMMIT DELETE ROWS and ON COMMIT PRESERVE ROWS are not stored in the catalog, but GTT needs them to be. Store a bool value on_commit_delete_rows in reloptions, only for GTT, and share it with other sessions.\n\n2. gram.y\n\nGlobal temp tables already have a syntax tree. We just need to remove the warning message \"GLOBAL is deprecated in temporary table creation\" and mark relpersistence = RELPERSISTENCE_GLOBAL_TEMP.\n\n3. STORAGE\n\n3.1 active_gtt_shared_hash\n\nCreate a hash table in shared memory to trace the GTT files that are initialized in each session. Each hash entry contains a bitmap that records the backendid of each initialized GTT file. With this hash table, we know which backends/sessions are using a given GTT. It will be used in GTT's DDL.\n\n3.2 gtt_storage_local_hash\n\nIn each backend, create a local hash table, gtt_storage_local_hash, that tracks GTT storage files and statistics.\n\n1). 
GTT storage file tracking\n\nWhen a session inserts data into a GTT for the first time, record it in the local hash.\n\n2). Normal cleanup of GTT files\n\nUse before_shmem_exit to ensure that all files for the session's GTTs are deleted when the session exits.\n\n3). File cleanup in abnormal situations\n\nWhen a backend exits abnormally (such as an OOM kill), the startup process runs recovery before accepting connections. The startup process checks and removes all GTT files before redoing the WAL.\n\n4. DDL\n\n4.1 DROP GTT\n\nA GTT may be dropped only when the current session is the only one using it. After getting AccessExclusiveLock on the GTT, use active_gtt_shared_hash to check and make sure of that.\n\n4.2 ALTER GTT\n\nSame as DROP GTT.\n\n4.3 CREATE INDEX ON GTT, DROP INDEX ON GTT\n\nSame as DROP GTT.\n\n4.4 TRUNCATE GTT\n\nTRUNCATE on a GTT uses RowExclusiveLock, not AccessExclusiveLock, because truncate only cleans up the local data file and local buffers of this session. Also, truncate immediately deletes the data file without changing the relfilenode of the GTT. By the way, I'm not sure this implementation will be acceptable to the community.\n\n4.5 OTHERS\n\nAny table operation on a GTT that needs to change the relfilenode is disabled, such as VACUUM FULL and CLUSTER.\n\n5. The statistics of GTT\n\n1. relpages, reltuples, relallvisible, frozenxid, and minmulti from pg_class\n2. The statistics for each column from pg_statistic\n\nAll the above information is stored in gtt_storage_local_hash. When a GTT is vacuumed or analyzed, its statistics are updated, and the planner will use them. Of course, the statistics only cover data within the current session.\n\n5.1 Viewing global temp table statistics\n\nProvide pg_gtt_att_statistic to get column statistics for a GTT. Provide pg_gtt_relstats to get relation statistics for a GTT. These functions are implemented in a plug-in, without adding system views or functions.\n\n6. autovacuum\n\nAutovacuum skips all GTTs.\n\n7. vacuum (frozenxid push, CLOG truncate)\n\nThe GTT data file contains transaction information. 
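To make the shared-memory tracking in 3.1 and the "only the current session uses it" DDL check in 4.1 more concrete, here is a simplified sketch; the names and sizes are hypothetical (not the patch's actual code), and the real structure lives in a shared hash protected by locks, both omitted here:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MAX_BACKENDS 512

/* Hypothetical shape of one active_gtt_shared_hash entry: a bitmap with
 * one bit per backend id, set when that backend has initialized a
 * storage file for this GTT. */
typedef struct
{
    uint8_t attached[MAX_BACKENDS / 8];
} GttAttachedBitmap;

void
gtt_bitmap_set(GttAttachedBitmap *bm, int backend_id)
{
    bm->attached[backend_id / 8] |= (uint8_t) (1u << (backend_id % 8));
}

void
gtt_bitmap_clear(GttAttachedBitmap *bm, int backend_id)
{
    bm->attached[backend_id / 8] &= (uint8_t) ~(1u << (backend_id % 8));
}

/* DDL such as DROP may proceed only if no *other* backend has the GTT in
 * use: mask out our own bit and look for any remaining set bit. */
bool
gtt_only_user_is(const GttAttachedBitmap *bm, int backend_id)
{
    for (int i = 0; i < MAX_BACKENDS / 8; i++)
    {
        uint8_t byte = bm->attached[i];

        if (i == backend_id / 8)
            byte &= (uint8_t) ~(1u << (backend_id % 8));
        if (byte != 0)
            return false;
    }
    return true;
}
```

In this sketch, gtt_bitmap_set would run when a session first creates the GTT's storage file, gtt_bitmap_clear on session exit, and gtt_only_user_is after AccessExclusiveLock has been taken for DROP/ALTER.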
Queries on GTT data rely on transaction information such as CLOG, which cannot be cleaned up automatically by vacuum.\n\n7.1 The session-level GTT oldest frozenxid\n\nWhen a GTT is created or removed, record the session-level oldest frozenxid and put it into MyProc.\n\n7.2 vacuum\n\nWhen vacuum pushes the database's frozenxid (vac_update_datfrozenxid), it needs to consider GTTs. It must calculate the transactions still required by GTTs (by searching all MyProc entries) to avoid the CLOG required by a GTT being cleaned.\n\n8. Parallel query\n\nThe planner does not produce parallel query plans for SQL related to global temp tables.\n\n9. Operability\n\nProvide pg_gtt_attached_pid to list all the pids that are using a GTT. Provide pg_list_gtt_relfrozenxids to list the session-level oldest frozenxid of each session using GTTs. These functions are implemented in a plug-in, without adding system views or functions. A DBA can use the above functions and pg_terminate_backend to force the cleanup of \"too old\" GTT tables and sessions.\n\n10. Limitations and todo list\n\n10.1 ALTER GTT.\n10.2 pg_statistic_ext.\n10.3 Remove the limit that a GTT's relfilenode cannot change: support CLUSTER/VACUUM FULL and optimize TRUNCATE on GTT.\n10.4 SERIAL column type: GTTs in different sessions currently share one sequence (SERIAL type); each session needs to use the sequence independently.\n10.5 
Locking optimization for GTT.\n10.6 Materialized views are not supported on GTT.\n\nWhat do you think about this proposal? Looking forward to your feedback. Thanks!\n\nregards\n\n--\nZeng Wenjing\nAlibaba Group-Database Products Business Unit", "msg_date": "Fri, 11 Oct 2019 20:15:27 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "[Proposal] Global temporary tables" }, { "msg_contents": "On 11.10.2019 15:15, 曾文旌(义从) wrote:\n> Dear Hackers,\n>\n> This propose a way to develop global temporary tables in PostgreSQL.\n>\n> I noticed that there is an \"Allow temporary tables to exist as empty \n> by default in all sessions\" in the postgresql todolist.\n> https://wiki.postgresql.org/wiki/Todo\n>\n> In recent years, PG community had many discussions about global temp \n> table (GTT) support. Previous discussion covered the following topics:\n> (1)The main benefit or function: GTT offers features like “persistent \n> schema, ephemeral data”, which avoids catalog bloat and reduces \n> catalog vacuum.\n> (2)Whether follows ANSI concept of temporary tables\n> (3)How to deal with statistics, single copy of schema definition, relcache\n> (4)More can be seen in \n> https://www.postgresql.org/message-id/73954ab7-44d3-b37b-81a3-69bdcbb446f7%40postgrespro.ru\n> (5)A recent implementation and design from Konstantin Knizhnik covered \n> many functions of GTT: \n> https://www.postgresql.org/message-id/attachment/103265/global_private_temp-1.patch\n>\n> However, as pointed by Konstantin himself, the implementation still \n> needs functions related to CLOG, vacuum, and MVCC visibility.\n>\n\nJust to clarify.\nI have now proposed several different solutions for GTT:\n\nShared vs. private buffers for GTT:\n1. Private buffers. This is the least invasive patch, requiring no changes \nin relfilenodes.\n2. Shared buffers. Requires changing relfilenode but supports parallel \nquery execution for GTT.\n\nAccess to GTT at replica:\n1. 
Access is prohibited (as for original temp tables). No changes at all.\n2. Tuples of temp tables are marked with frozen XID.  Minimal changes, rollbacks are not possible.\n3. Providing special XIDs for GTT at replica. No changes in CLOG are required, but special MVCC visibility rules are used for GTT. Current limitation: the number of transactions accessing GTT at replica is limited by 2^32, and a bitmap of correspondent size has to be maintained (tuples of GTT are not processed by vacuum and not frozen, so the XID horizon never moves).\n\nSo except the limitation mentioned above (which I do not consider as critical) there is only one problem which was not addressed: maintaining statistics for GTT.\nIf all of the following conditions are true:\n\n1) GTT are used in joins\n2) There are indexes defined for GTT\n3) Size and histogram of GTT in different backends can significantly vary.\n4) ANALYZE was explicitly called for GTT\n\nthen the query execution plan built in one backend will also be used for other backends where it can be inefficient.\nI also do not consider this problem as a \"show stopper\" for adding GTT to Postgres.\n\nI still do not understand the opinion of the community on which functionality of GTT is considered to be most important.\nBut the patch with local buffers and no replica support is small enough to become a good starting point.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 11 Oct 2019 16:50:07 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "pá 11. 10. 2019 v 15:50 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 11.10.2019 15:15, 曾文旌(义从) wrote:\n>\n> Dear Hackers,\n>\n> This propose a way to develop global temporary tables in PostgreSQL.\n>\n> I noticed that there is an \"Allow temporary tables to exist as empty by\n> default in all sessions\" in the postgresql todolist.\n> https://wiki.postgresql.org/wiki/Todo\n>\n> In recent years, PG community had many discussions about global temp\n> table (GTT) support. 
Previous discussion covered the following topics:\n> (1) The main benefit or function: GTT offers features like “persistent\n> schema, ephemeral data”, which avoids catalog bloat and reduces catalog\n> vacuum.\n> (2) Whether follows ANSI concept of temporary tables\n> (3) How to deal with statistics, single copy of schema definition,\n> relcache\n> (4) More can be seen in\n> https://www.postgresql.org/message-id/73954ab7-44d3-b37b-81a3-69bdcbb446f7%40postgrespro.ru\n> (5) A recent implementation and design from Konstantin Knizhnik covered\n> many functions of GTT:\n> https://www.postgresql.org/message-id/attachment/103265/global_private_temp-1.patch\n>\n> However, as pointed by Konstantin himself, the implementation still needs\n> functions related to CLOG, vacuum, and MVCC visibility.\n>\n>\n> Just to clarify.\n> I have now proposed several different solutions for GTT:\n>\n> Shared vs. private buffers for GTT:\n> 1. Private buffers. This is least invasive patch, requiring no changes in\n> relfilenodes.\n> 2. Shared buffers. Requires changing relfilenode but supports parallel\n> query execution for GTT.\n>\n\nThis is an important argument for using shared buffers. Maybe the best is a\nmix of both - store the files in a temp tablespace, but use shared buffers.\nMoreover, they can then be accessible to autovacuum.\n\n>\n> Access to GTT at replica:\n> 1. Access is prohibited (as for original temp tables). No changes at all.\n> 2. Tuples of temp tables are marked with forzen XID. Minimal changes,\n> rollbacks are not possible.\n> 3. Providing special XIDs for GTT at replica. No changes in CLOG are\n> required, but special MVCC visibility rules are used for GTT. 
Current\n> limitation: number of transactions accessing GTT at replica is limited by\n> 2^32\n> and bitmap of correspondent size has to be maintained (tuples of GTT are\n> not proceeded by vacuum and not frozen, so XID horizon never moved).\n>\n> So except the limitation mentioned above (which I do not consider as\n> critical) there is only one problem which was not addressed: maintaining\n> statistics for GTT.\n> If all of the following conditions are true:\n>\n> 1) GTT are used in joins\n> 2) There are indexes defined for GTT\n> 3) Size and histogram of GTT in different backends can significantly vary.\n> 4) ANALYZE was explicitly called for GTT\n>\n> then query execution plan built in one backend will be also used for other\n> backends where it can be inefficient.\n> I also do not consider this problem as \"show stopper\" for adding GTT to\n> Postgres.\n>\n\nThe last issue is a show stopper in my mind. It really depends on usage.\nThere are situations where shared statistics are OK (and maybe a good\nsolution), and other situations where shared statistics are just unusable.\n\nRegards\n\nPavel\n\n\n\n> I still do not understand the opinion of community which functionality of\n> GTT is considered to be most important.\n> But the patch with local buffers and no replica support is small enough to\n> become good starting point.\n>\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>", "msg_date": "Sat, 12 Oct 2019 07:16:56 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2019年10月12日 下午1:16,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> pá 11. 10. 
2019 v 15:50 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n> \n> \n> On 11.10.2019 15:15, 曾文旌(义从) wrote:\n>> Dear Hackers,\n>> \n>> This propose a way to develop global temporary tables in PostgreSQL.\n>> \n>> I noticed that there is an \"Allow temporary tables to exist as empty by default in all sessions\" in the postgresql todolist.\n>> https://wiki.postgresql.org/wiki/Todo\n>> \n>> In recent years, PG community had many discussions about global temp table (GTT) support. Previous discussion covered the following topics: \n>> (1)\tThe main benefit or function: GTT offers features like “persistent schema, ephemeral data”, which avoids catalog bloat and reduces catalog vacuum. \n>> (2)\tWhether follows ANSI concept of temporary tables\n>> (3)\tHow to deal with statistics, single copy of schema definition, relcache\n>> (4)\tMore can be seen in https://www.postgresql.org/message-id/73954ab7-44d3-b37b-81a3-69bdcbb446f7%40postgrespro.ru\n>> (5)\tA recent implementation and design from Konstantin Knizhnik covered many functions of GTT: https://www.postgresql.org/message-id/attachment/103265/global_private_temp-1.patch\n>> \n>> However, as pointed by Konstantin himself, the implementation still needs functions related to CLOG, vacuum, and MVCC visibility.\n>> \n> \n> Just to clarify.\n> I have now proposed several different solutions for GTT:\n> \n> Shared vs. private buffers for GTT:\n> 1. Private buffers. This is least invasive patch, requiring no changes in relfilenodes.\n> 2. Shared buffers. Requires changing relfilenode but supports parallel query execution for GTT.\n> \n> This is important argument for using share buffers. Maybe the best is mix of both - store files in temporal tablespace, but using share buffers. More, it can be accessible for autovacuum.\n> \n> Access to GTT at replica:\n> 1. Access is prohibited (as for original temp tables). No changes at all.\n> 2. Tuples of temp tables are marked with forzen XID. Minimal changes, rollbacks are not possible.\n> 3. Providing special XIDs for GTT at replica. No changes in CLOG are required, but special MVCC visibility rules are used for GTT. Current limitation: number of transactions accessing GTT at replica is limited by 2^32\n> and bitmap of correspondent size has to be maintained (tuples of GTT are not proceeded by vacuum and not frozen, so XID horizon never moved).\n> \n> So except the limitation mentioned above (which I do not consider as critical) there is only one problem which was not addressed: maintaining statistics for GTT. \n> If all of the following conditions are true:\n> \n> 1) GTT are used in joins\n> 2) There are indexes defined for GTT\n> 3) Size and histogram of GTT in different backends can significantly vary. \n> 4) ANALYZE was explicitly called for GTT\n> \n> then query execution plan built in one backend will be also used for other backends where it can be inefficient.\n> I also do not consider this problem as \"show stopper\" for adding GTT to Postgres.\n> \n> The last issue is show stopper in my mind. It really depends on usage. 
There are situation where shared statistics are ok (and maybe good solution), and other situation, where shared statistics are just unusable.\nThis proposal calculates and stores independent statistics (relpages, reltuples, and histograms of the GTT) for the GTT data within each session, ensuring that the optimizer can get accurate statistics.\n\n\n> Regards\n> \n> Pavel\n> \n> \n> \n> I still do not understand the opinion of community which functionality of GTT is considered to be most important.\n> But the patch with local buffers and no replica support is small enough to become good starting point.\n> \n> \n> -- \n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company ", "msg_date": "Tue, 15 Oct 2019 17:49:56 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2019年10月11日 下午9:50,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 11.10.2019 15:15, 曾文旌(义从) wrote:\n>> Dear Hackers,\n>> \n>> This propose a way to develop global temporary tables in PostgreSQL.\n>> \n>> I noticed that there is an \"Allow temporary tables to exist as empty by default in all sessions\" in the postgresql todolist.\n>> https://wiki.postgresql.org/wiki/Todo\n>> \n>> In recent years, PG community had many discussions about global temp table (GTT) support. Previous discussion covered the following topics: \n>> (1)\tThe main benefit or function: GTT offers features like “persistent schema, ephemeral data”, which avoids catalog bloat and reduces catalog vacuum. 
\n>> (2)\tWhether follows ANSI concept of temporary tables\n>> (3)\tHow to deal with statistics, single copy of schema definition, relcache\n>> (4)\tMore can be seen in https://www.postgresql.org/message-id/73954ab7-44d3-b37b-81a3-69bdcbb446f7%40postgrespro.ru <https://www.postgresql.org/message-id/73954ab7-44d3-b37b-81a3-69bdcbb446f7%40postgrespro.ru>\n>> (5)\tA recent implementation and design from Konstantin Knizhnik covered many functions of GTT: https://www.postgresql.org/message-id/attachment/103265/global_private_temp-1.patch <https://www.postgresql.org/message-id/attachment/103265/global_private_temp-1.patch>\n>> \n>> However, as pointed by Konstantin himself, the implementation still needs functions related to CLOG, vacuum, and MVCC visibility.\n>> \n> \n> Just to clarify.\n> I have now proposed several different solutions for GTT:\n> \n> Shared vs. private buffers for GTT:\n> 1. Private buffers. This is least invasive patch, requiring no changes in relfilenodes.\n> 2. Shared buffers. Requires changing relfilenode but supports parallel query execution for GTT.\n> \n> Access to GTT at replica:\n> 1. Access is prohibited (as for original temp tables). No changes at all.\n> 2. Tuples of temp tables are marked with forzen XID. Minimal changes, rollbacks are not possible.\n> 3. Providing special XIDs for GTT at replica. No changes in CLOG are required, but special MVCC visibility rules are used for GTT. Current limitation: number of transactions accessing GTT at replica is limited by 2^32\n> and bitmap of correspondent size has to be maintained (tuples of GTT are not proceeded by vacuum and not frozen, so XID horizon never moved).\n> \n> So except the limitation mentioned above (which I do not consider as critical) there is only one problem which was not addressed: maintaining statistics for GTT. 
\n> If all of the following conditions are true:\n> \n> 1) GTT are used in joins\n> 2) There are indexes defined for GTT\n> 3) Size and histogram of GTT in different backends can significantly vary. \n> 4) ANALYZE was explicitly called for GTT\n> \n> then query execution plan built in one backend will be also used for other backends where it can be inefficient.\n> I also do not consider this problem as \"show stopper\" for adding GTT to Postgres.\nWhen session A writes 10000000 rows of data to gtt X, session B also uses X at the same time and it has 100 rows of different data. If B uses analyze to count the statistics of 100000 rows of data and updates it to catalog.\nObviously, session A will get inaccurate query plan based on misaligned statistics when calculating the query plan for X related queries. Session A may think that table X is too small to be worth using index scan, but it is not. Each session needs to get the statistics of the self data to make the query plan.\n\n\n> I still do not understand the opinion of community which functionality of GTT is considered to be most important.\n> But the patch with local buffers and no replica support is small enough to become good starting point.\nYes ,the first step, we focus on complete basic functions of gtt (dml ddl index on gtt (MVCC visibility rules) storage).\nAbnormal statistics can cause problems with index selection on gtt, so index on gtt and accurate statistical information is necessary.\n\n\n> \n> -- \n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com <http://www.postgrespro.com/>\n> The Russian Postgres Company \n\n\n2019年10月11日 下午9:50,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n\n\n\n\nOn 11.10.2019 15:15, 曾文旌(义从) wrote:\n\n\n\nDear Hackers,\n\n\n\nThis propose a way to develop global temporary\n tables in PostgreSQL.\n\n\n\nI noticed that\n there is an \"Allow temporary tables to exist as empty by\n default in all sessions\" in the postgresql 
todolist.\nhttps://wiki.postgresql.org/wiki/Todo\n\n\n\nIn recent years, PG community had many discussions about\n global temp table (GTT) support. Previous discussion\n covered the following topics: \n(1) The\n main benefit or function: GTT offers features like\n “persistent schema, ephemeral data”, which avoids catalog\n bloat and reduces catalog vacuum. \n(2) Whether\n follows ANSI concept of temporary tables\n(3) How\n to deal with statistics, single copy of schema definition,\n relcache\n(4) More\n can be seen in https://www.postgresql.org/message-id/73954ab7-44d3-b37b-81a3-69bdcbb446f7%40postgrespro.ru\n(5) A\n recent implementation and design from Konstantin Knizhnik\n covered many functions of GTT: https://www.postgresql.org/message-id/attachment/103265/global_private_temp-1.patch\n\n\nHowever, as\n pointed by Konstantin himself, the implementation still\n needs functions related to CLOG, vacuum, and MVCC\n visibility.\n\n\n\n\n\n Just to clarify.\n I have now proposed several different solutions for GTT:\n\n Shared vs. private buffers for GTT:\n 1. Private buffers. This is least invasive patch, requiring no\n changes in relfilenodes.\n 2. Shared buffers. Requires changing relfilenode but supports\n parallel query execution for GTT.\n\n Access to GTT at replica:\n 1. Access is prohibited (as for original temp tables). No changes at\n all.\n 2. Tuples of temp tables are marked with forzen XID.  Minimal\n changes, rollbacks are not possible.\n 3. Providing special XIDs for GTT at replica. 
No changes in CLOG are\n required, but special MVCC visibility rules are used for GTT.\n Current limitation: number of transactions accessing GTT at replica\n is limited by 2^32\n and bitmap of correspondent size has to be maintained (tuples of GTT\n are not processed by vacuum and not frozen, so XID horizon never\n moved).\n\n So except the limitation mentioned above (which I do not consider as\n critical) there is only one problem which was not addressed:\n maintaining statistics for GTT. \n If all of the following conditions are true:\n\n 1) GTT are used in joins\n 2) There are indexes defined for GTT\n 3) Size and histogram of GTT in different backends can significantly\n vary. \n 4) ANALYZE was explicitly called for GTT\n\n then query execution plan built in one backend will be also used for\n other backends where it can be inefficient.\n I also do not consider this problem as \"show stopper\" for adding GTT\n to Postgres.When session A writes 10000000 rows of data to gtt X, session B also uses X at the same time and it has 100 rows of different data. If B uses ANALYZE to collect the statistics of its 100 rows of data and updates them in the catalog, session A will obviously get an inaccurate query plan based on misaligned statistics when calculating the query plan for X related queries. Session A may think that table X is too small to be worth using an index scan, but it is not. 
Each session needs to get the statistics of its own data to make the query plan.\n I still do not understand the opinion of community which\n functionality of GTT is considered to be most important.\n But the patch with local buffers and no replica support is small\n enough to become good starting point.Yes, as the first step we focus on completing the basic functions of GTT (DML, DDL, index on GTT (MVCC visibility rules), storage). Abnormal statistics can cause problems with index selection on GTT, so indexes on GTT and accurate statistical information are necessary.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 17 Oct 2019 18:18:34 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Fri, Oct 11, 2019 at 9:50 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> Just to clarify.\n> I have now proposed several different solutions for GTT:\n>\n> Shared vs. private buffers for GTT:\n> 1. Private buffers. This is least invasive patch, requiring no changes in relfilenodes.\n> 2. Shared buffers. Requires changing relfilenode but supports parallel query execution for GTT.\n\nI vote for #1. I think parallel query for temp objects may be a\ndesirable feature, but I don't think it should be the job of a patch\nimplementing GTTs to make it happen. In fact, I think it would be an\nactively bad idea, because I suspect that if we do eventually support\ntemp relations for parallel query, we're going to want a solution that\nis shared between regular temp tables and global temp tables, not\nseparate solutions for each.\n\n> Access to GTT at replica:\n> 1. Access is prohibited (as for original temp tables). No changes at all.\n> 2. Tuples of temp tables are marked with frozen XID. Minimal changes, rollbacks are not possible.\n> 3. 
Providing special XIDs for GTT at replica. No changes in CLOG are required, but special MVCC visibility rules are used for GTT. Current limitation: number of transactions accessing GTT at replica is limited by 2^32\n> and bitmap of correspondent size has to be maintained (tuples of GTT are not processed by vacuum and not frozen, so XID horizon never moved).\n\nI again vote for #1. A GTT is defined to allow data to be visible only\nwithin one session -- so what does it even mean for the data to be\naccessible on a replica?\n\n> So except the limitation mentioned above (which I do not consider as critical) there is only one problem which was not addressed: maintaining statistics for GTT.\n> If all of the following conditions are true:\n>\n> 1) GTT are used in joins\n> 2) There are indexes defined for GTT\n> 3) Size and histogram of GTT in different backends can significantly vary.\n> 4) ANALYZE was explicitly called for GTT\n>\n> then query execution plan built in one backend will be also used for other backends where it can be inefficient.\n> I also do not consider this problem as \"show stopper\" for adding GTT to Postgres.\n\nI think that's *definitely* a show stopper.\n\n> I still do not understand the opinion of community which functionality of GTT is considered to be most important.\n> But the patch with local buffers and no replica support is small enough to become good starting point.\n\nWell, it seems we now have two patches for this feature. I guess we\nneed to figure out which one is better, and whether it's possible for\nthe two efforts to be merged, rather than having two different teams\nhacking on separate code bases.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 25 Oct 2019 11:01:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "pá 25. 10. 
2019 v 17:01 odesílatel Robert Haas <robertmhaas@gmail.com>\nnapsal:\n\n> On Fri, Oct 11, 2019 at 9:50 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n> > Just to clarify.\n> > I have now proposed several different solutions for GTT:\n> >\n> > Shared vs. private buffers for GTT:\n> > 1. Private buffers. This is least invasive patch, requiring no changes\n> in relfilenodes.\n> > 2. Shared buffers. Requires changing relfilenode but supports parallel\n> query execution for GTT.\n>\n> I vote for #1. I think parallel query for temp objects may be a\n> desirable feature, but I don't think it should be the job of a patch\n> implementing GTTs to make it happen. In fact, I think it would be an\n> actively bad idea, because I suspect that if we do eventually support\n> temp relations for parallel query, we're going to want a solution that\n> is shared between regular temp tables and global temp tables, not\n> separate solutions for each.\n>\n> > Access to GTT at replica:\n> > 1. Access is prohibited (as for original temp tables). No changes at all.\n> > 2. Tuples of temp tables are marked with frozen XID. Minimal changes,\n> rollbacks are not possible.\n> > 3. Providing special XIDs for GTT at replica. No changes in CLOG are\n> required, but special MVCC visibility rules are used for GTT. Current\n> limitation: number of transactions accessing GTT at replica is limited by\n> 2^32\n> > and bitmap of correspondent size has to be maintained (tuples of GTT are\n> not processed by vacuum and not frozen, so XID horizon never moved).\n>\n> I again vote for #1. A GTT is defined to allow data to be visible only\n> within one session -- so what does it even mean for the data to be\n> accessible on a replica?\n>\n\nwhy not? there are a lot of sessions on replica servers. One usage of temp\ntables is fixing estimation errors. You can create temp table with partial\nquery result, run ANALYZE and evaluate other steps. 
Now this case is not\npossible on replica servers.\n\nOne motivation for GTT is decreasing port costs from Oracle. But other\nmotivations, like do more complex calculations on replica are valid and\nvaluable.\n\n\n\n> > So except the limitation mentioned above (which I do not consider as\n> critical) there is only one problem which was not addressed: maintaining\n> statistics for GTT.\n> > If all of the following conditions are true:\n> >\n> > 1) GTT are used in joins\n> > 2) There are indexes defined for GTT\n> > 3) Size and histogram of GTT in different backends can significantly\n> vary.\n> > 4) ANALYZE was explicitly called for GTT\n> >\n> > then query execution plan built in one backend will be also used for\n> other backends where it can be inefficient.\n> > I also do not consider this problem as \"show stopper\" for adding GTT to\n> Postgres.\n>\n> I think that's *definitely* a show stopper.\n>\n> > I still do not understand the opinion of community which functionality\n> of GTT is considered to be most important.\n> > But the patch with local buffers and no replica support is small enough\n> to become good starting point.\n>\n> Well, it seems we now have two patches for this feature. I guess we\n> need to figure out which one is better, and whether it's possible for\n> the two efforts to be merged, rather than having two different teams\n> hacking on separate code bases.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n", "msg_date": "Fri, 25 Oct 2019 17:13:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 25.10.2019 18:01, Robert Haas wrote:\n> On Fri, Oct 11, 2019 at 9:50 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> Just to clarify.\n>> I have now proposed several different solutions for GTT:\n>>\n>> Shared vs. private buffers for GTT:\n>> 1. Private buffers. This is least invasive patch, requiring no changes in relfilenodes.\n>> 2. Shared buffers. Requires changing relfilenode but supports parallel query execution for GTT.\n> I vote for #1. 
I think parallel query for temp objects may be a\n> desirable feature, but I don't think it should be the job of a patch\n> implementing GTTs to make it happen. In fact, I think it would be an\n> actively bad idea, because I suspect that if we do eventually support\n> temp relations for parallel query, we're going to want a solution that\n> is shared between regular temp tables and global temp tables, not\n> separate solutions for each.\n\nSorry, maybe I do not understand you.\nIt seems to me that there is only one thing preventing usage of \ntemporary tables in parallel plans: private buffers.\nIf global temporary tables are accessed as normal tables through shared \nbuffers then they can be used in parallel queries\nand no extra support is required for it.\nAt least I have checked that parallel queries work correctly with \nmy implementation of GTT with shared buffers.\nSo I do not understand which \"separate solutions\" you are talking \nabout.\n\nI can agree that private buffers may be a good starting point for GTT \nimplementation, because it is less invasive and GTT access speed is \nexactly the same as of normal temp tables.\nBut I do not understand your argument why it is an \"actively bad idea\".\n\n>> Access to GTT at replica:\n>> 1. Access is prohibited (as for original temp tables). No changes at all.\n>> 2. Tuples of temp tables are marked with frozen XID. Minimal changes, rollbacks are not possible.\n>> 3. Providing special XIDs for GTT at replica. No changes in CLOG are required, but special MVCC visibility rules are used for GTT. Current limitation: number of transactions accessing GTT at replica is limited by 2^32\n>> and bitmap of correspondent size has to be maintained (tuples of GTT are not processed by vacuum and not frozen, so XID horizon never moved).\n> I again vote for #1. 
A GTT is defined to allow data to be visible only\n> within one session -- so what does it even mean for the data to be\n> accessible on a replica?\n\nThere are sessions at replica (in case of hot standby), aren't there?\n\n>\n>> So except the limitation mentioned above (which I do not consider as critical) there is only one problem which was not addressed: maintaining statistics for GTT.\n>> If all of the following conditions are true:\n>>\n>> 1) GTT are used in joins\n>> 2) There are indexes defined for GTT\n>> 3) Size and histogram of GTT in different backends can significantly vary.\n>> 4) ANALYZE was explicitly called for GTT\n>>\n>> then query execution plan built in one backend will be also used for other backends where it can be inefficient.\n>> I also do not consider this problem as \"show stopper\" for adding GTT to Postgres.\n> I think that's *definitely* a show stopper.\nWell, if both you and Pavel think that it is really a \"show stopper\", then \nthis problem really has to be addressed.\nI am slightly confused about this opinion, because Pavel has told me \nhimself that 99% of users never create indexes for temp tables\nor run \"analyze\" for them. And without it, this problem is not a problem \nat all.\n\n>> I still do not understand the opinion of community which functionality of GTT is considered to be most important.\n>> But the patch with local buffers and no replica support is small enough to become good starting point.\n> Well, it seems we now have two patches for this feature. 
I guess we\n> need to figure out which one is better, and whether it's possible for\n> the two efforts to be merged, rather than having two different teams\n> hacking on separate code bases.\n\nI am open for cooperation.\nSource code of all my patches is available.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 25 Oct 2019 19:22:23 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> >\n> >> So except the limitation mentioned above (which I do not consider as\n> critical) there is only one problem which was not addressed: maintaining\n> statistics for GTT.\n> >> If all of the following conditions are true:\n> >>\n> >> 1) GTT are used in joins\n> >> 2) There are indexes defined for GTT\n> >> 3) Size and histogram of GTT in different backends can significantly\n> vary.\n> >> 4) ANALYZE was explicitly called for GTT\n> >>\n> >> then query execution plan built in one backend will be also used for\n> other backends where it can be inefficient.\n> >> I also do not consider this problem as \"show stopper\" for adding GTT to\n> Postgres.\n> > I think that's *definitely* a show stopper.\n> Well, if both you and Pavel think that it is really \"show stopper\", then\n> this problem really has to be addressed.\n> I slightly confused about this opinion, because Pavel has told me\n> himself that 99% of users never create indexes for temp tables\n> or run \"analyze\" for them. And without it, this problem is not a problem\n> at all.\n>\n>\nUsers don't run ANALYZE on temp tables in 99% of cases. It's true. But the second fact\nis that users have a lot of problems. It's very similar to wrong statistics on\npersistent tables. When data are small, then it is not a problem for users,\nalthough from my perspective it's not optimal. When data are not small,\nthen the problem can be brutal. 
Temporary tables are not a exception. And\nusers and developers are people - we know only about fatal problems. There\nare lot of unoptimized queries, but because the problem is not fatal, then\nit is not reason for report it. And lot of people has not any idea how fast\nthe databases can be. The knowledges of users and app developers are sad\nbook.\n\nPavel\n", "msg_date": "Fri, 25 Oct 2019 19:00:06 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n> 2019年10月26日 上午12:22,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 25.10.2019 18:01, Robert Haas wrote:\n>> On Fri, Oct 11, 2019 at 9:50 AM Konstantin Knizhnik\n>> <k.knizhnik@postgrespro.ru> wrote:\n>>> Just to clarify.\n>>> I have now proposed several different solutions for GTT:\n>>> \n>>> Shared vs. private buffers for GTT:\n>>> 1. Private buffers. This is least invasive patch, requiring no changes in relfilenodes.\n>>> 2. Shared buffers. Requires changing relfilenode but supports parallel query execution for GTT.\n>> I vote for #1. I think parallel query for temp objects may be a\n>> desirable feature, but I don't think it should be the job of a patch\n>> implementing GTTs to make it happen. 
In fact, I think it would be an\n>> actively bad idea, because I suspect that if we do eventually support\n>> temp relations for parallel query, we're going to want a solution that\n>> is shared between regular temp tables and global temp tables, not\n>> separate solutions for each.\n> \n> Sorry, maybe I do not understand you.\n> It seems to me that there is only one thing preventing usage of temporary tables in parallel plans: private buffers.\n> If global temporary tables are accessed as normal tables through shared buffers then they can be used in parallel queries\n> and no extra support is required for it.\n> At least I have checked that parallel queries work correctly with my implementation of GTT with shared buffers.\n> So I do not understand which \"separate solutions\" you are talking about.\n> \n> I can agree that private buffers may be a good starting point for GTT implementation, because it is less invasive and GTT access speed is exactly the same as of normal temp tables.\n> But I do not understand your argument why it is an \"actively bad idea\".\n> \n>>> Access to GTT at replica:\n>>> 1. Access is prohibited (as for original temp tables). No changes at all.\n>>> 2. Tuples of temp tables are marked with frozen XID. Minimal changes, rollbacks are not possible.\n>>> 3. Providing special XIDs for GTT at replica. No changes in CLOG are required, but special MVCC visibility rules are used for GTT. Current limitation: number of transactions accessing GTT at replica is limited by 2^32\n>>> and bitmap of correspondent size has to be maintained (tuples of GTT are not processed by vacuum and not frozen, so XID horizon never moved).\n>> I again vote for #1. 
A GTT is defined to allow data to be visible only\n>> within one session -- so what does it even mean for the data to be\n>> accessible on a replica?\n> \n> There are sessions at replica (in case of hot standby), aren't there?\n> \n>> \n>>> So except the limitation mentioned above (which I do not consider as critical) there is only one problem which was not addressed: maintaining statistics for GTT.\n>>> If all of the following conditions are true:\n>>> \n>>> 1) GTT are used in joins\n>>> 2) There are indexes defined for GTT\n>>> 3) Size and histogram of GTT in different backends can significantly vary.\n>>> 4) ANALYZE was explicitly called for GTT\n>>> \n>>> then query execution plan built in one backend will be also used for other backends where it can be inefficient.\n>>> I also do not consider this problem as \"show stopper\" for adding GTT to Postgres.\n>> I think that's *definitely* a show stopper.\n> Well, if both you and Pavel think that it is really \"show stopper\", then this problem really has to be addressed.\n> I slightly confused about this opinion, because Pavel has told me himself that 99% of users never create indexes for temp tables\n> or run \"analyze\" for them. And without it, this problem is not a problem at all.\n> \n>>> I still do not understand the opinion of community which functionality of GTT is considered to be most important.\n>>> But the patch with local buffers and no replica support is small enough to become good starting point.\n>> Well, it seems we now have two patches for this feature. 
I guess we\n>> need to figure out which one is better, and whether it's possible for\n>> the two efforts to be merged, rather than having two different teams\n>> hacking on separate code bases.\n> \n> I am open for cooperation.\n> Source code of all my patches is available.\nWe are also willing to cooperate to complete this feature.\nLet me prepare the code (merge the code to PG12) and submit it to the community, then we can see how to work together.\n\n> -- \n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n> \n> \n> \n\n\n\n", "msg_date": "Mon, 28 Oct 2019 15:15:18 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Fri, Oct 25, 2019 at 11:14 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> > Access to GTT at replica:\n>> > 1. Access is prohibited (as for original temp tables). No changes at all.\n>> > 2. Tuples of temp tables are marked with frozen XID. Minimal changes, rollbacks are not possible.\n>> > 3. Providing special XIDs for GTT at replica. No changes in CLOG are required, but special MVCC visibility rules are used for GTT. Current limitation: number of transactions accessing GTT at replica is limited by 2^32\n>> > and bitmap of correspondent size has to be maintained (tuples of GTT are not processed by vacuum and not frozen, so XID horizon never moved).\n>>\n>> I again vote for #1. A GTT is defined to allow data to be visible only\n>> within one session -- so what does it even mean for the data to be\n>> accessible on a replica?\n>\n> why not? there are a lot of sessions on replica servers. One usage of temp tables is fixing estimation errors. You can create temp table with partial query result, run ANALYZE and evaluate other steps. Now this case is not possible on replica servers.\n>\n> One motivation for GTT is decreasing port costs from Oracle. 
But other motivations, like do more complex calculations on replica are valid and valuable.\n\nHmm, I think I was slightly confused when I wrote my previous\nresponse. I now see that what was under discussion was not making data\nfrom the master visible on the standbys, which really wouldn't make\nany sense, but rather allowing standby sessions to also use the GTT,\neach with its own local copy of the data. I don't think that's a bad\nfeature, but look how invasive the required changes are. Not allowing\nrollbacks seems dead on arrival; an abort would be able to leave the\ntable and index mutually inconsistent. A separate XID space would be\na real solution, perhaps, but it would be *extremely* complicated and\ninvasive to implement.\n\nOne thing that I've learned over and over again as a developer is that\nyou get a lot more done if you tackle one problem at a time. GTTs are\na sufficiently-large problem all by themselves; a major reworking of\nthe way XIDs work might be a good project to undertake at some point,\nbut it doesn't make any sense to incorporate that into the GTT\nproject, which is otherwise about a mostly-separate set of issues.\nLet's not try to solve more problems at once than strictly necessary.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 08:07:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Fri, Oct 25, 2019 at 12:22 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> On 25.10.2019 18:01, Robert Haas wrote:\n> > On Fri, Oct 11, 2019 at 9:50 AM Konstantin Knizhnik\n> > <k.knizhnik@postgrespro.ru> wrote:\n> >> Just to clarify.\n> >> I have now proposed several different solutions for GTT:\n> >>\n> >> Shared vs. private buffers for GTT:\n> >> 1. Private buffers. 
This is least invasive patch, requiring no changes\n> in relfilenodes.\n> >> 2. Shared buffers. Requires changing relfilenode but supports parallel query execution for GTT.\n> > I vote for #1. I think parallel query for temp objects may be a\n> > desirable feature, but I don't think it should be the job of a patch\n> > implementing GTTs to make it happen. In fact, I think it would be an\n> > actively bad idea, because I suspect that if we do eventually support\n> > temp relations for parallel query, we're going to want a solution that\n> > is shared between regular temp tables and global temp tables, not\n> > separate solutions for each.\n>\n> Sorry, maybe I do not understand you.\n> It seems to me that there is only one thing preventing usage of\n> temporary tables in parallel plans: private buffers.\n> If global temporary tables are accessed as normal tables through shared\n> buffers then they can be used in parallel queries\n> and no extra support is required for it.\n> At least I have checked that parallel queries work correctly with\n> my implementation of GTT with shared buffers.\n> So I do not understand which \"separate solutions\" you are talking\n> about.\n>\n> I can agree that private buffers may be a good starting point for GTT\n> implementation, because it is less invasive and GTT access speed is\n> exactly the same as of normal temp tables.\n> But I do not understand your argument why it is an \"actively bad idea\".\n\nWell, it sounds like you're talking about ending up in a situation\nwhere local temporary tables are still in private buffers, but global\ntemporary table data is in shared buffers. I think that would be\ninconsistent. And it would mean that when somebody wanted to make\nlocal temporary tables accessible in parallel query, they'd have to\nwrite a patch for that. 
In other words, I don't support dividing the\npatches like this:\n\nPatch #1: Support global temporary tables + allow global temporary\ntables to be used by parallel query\nPatch #2: Allow local temporary tables to be used by parallel query\n\nI support dividing them like this:\n\nPatch #1: Support global temporary tables\nPatch #2: Allow (all kinds of) temporary tables to be used by parallel query\n\nThe second division looks a lot cleaner to me, although as always I\nmight be missing something.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 08:13:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 28.10.2019 15:07, Robert Haas wrote:\n> On Fri, Oct 25, 2019 at 11:14 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>>> Access to GTT at replica:\n>>>> 1. Access is prohibited (as for original temp tables). No changes at all.\n>>>> 2. Tuples of temp tables are marked with frozen XID. Minimal changes, rollbacks are not possible.\n>>>> 3. Providing special XIDs for GTT at replica. No changes in CLOG are required, but special MVCC visibility rules are used for GTT. Current limitation: number of transactions accessing GTT at replica is limited by 2^32\n>>>> and bitmap of correspondent size has to be maintained (tuples of GTT are not processed by vacuum and not frozen, so XID horizon never moved).\n>>> I again vote for #1. A GTT is defined to allow data to be visible only\n>>> within one session -- so what does it even mean for the data to be\n>>> accessible on a replica?\n>> why not? there are a lot of sessions on replica servers. One usage of temp tables is fixing estimation errors. You can create temp table with partial query result, run ANALYZE and evaluate other steps. 
Now this case is not possible on replica servers.\n>>\n>> One motivation for GTT is decreasing port costs from Oracle. But other motivations, like do more complex calculations on replica are valid and valuable.\n> Hmm, I think I was slightly confused when I wrote my previous\n> response. I now see that what was under discussion was not making data\n> from the master visible on the standbys, which really wouldn't make\n> any sense, but rather allowing standby sessions to also use the GTT,\n> each with its own local copy of the data. I don't think that's a bad\n> feature, but look how invasive the required changes are. Not allowing\n> rollbacks seems dead on arrival; an abort would be able to leave the\n> table and index mutually inconsistent. A separate XID space would be\n> a real solution, perhaps, but it would be *extremely* complicated and\n> invasive to implement.\n\nSorry, but neither statement is true.\nAs I mentioned before, I have implemented both solutions.\n\nI am not sure how vital the lack of aborts is for transactions working with \nGTT at a replica.\nSome people said that there is no sense in aborts of read-only \ntransactions at a replica (despite the fact that they are saving \nintermediate results in GTT).\nSome people said something similar to your \"dead on arrival\".\nBut inconsistency is not possible: if such a transaction is really \naborted, then the backend is terminated and nobody can see this inconsistency.\n\nConcerning the second alternative: you can check for yourself that it is not \n*extremely* complicated and invasive.\nI extracted the changes which are related to handling transactions at \na replica and attached them to this mail.\nIt is just 500 lines (including diff contexts). 
Certainly there are some \nlimitations of this implementation: the number of transactions working with \nGTT at a replica is limited by 2^32,\nand since GTT tuples are not frozen, the analog of the GTT CLOG kept in memory \nis never truncated.\n\n>\n> One thing that I've learned over and over again as a developer is that\n> you get a lot more done if you tackle one problem at a time. GTTs are\n> a sufficiently-large problem all by themselves; a major reworking of\n> the way XIDs work might be a good project to undertake at some point,\n> but it doesn't make any sense to incorporate that into the GTT\n> project, which is otherwise about a mostly-separate set of issues.\n> Let's not try to solve more problems at once than strictly necessary.\n>\nI agree with it and think that an implementation of GTT with private \nbuffers and no replica access is a good starting point.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 28 Oct 2019 16:37:50 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 28.10.2019 15:13, Robert Haas wrote:\n> On Fri, Oct 25, 2019 at 12:22 PM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> On 25.10.2019 18:01, Robert Haas wrote:\n>>> On Fri, Oct 11, 2019 at 9:50 AM Konstantin Knizhnik\n>>> <k.knizhnik@postgrespro.ru> wrote:\n>>>> Just to clarify.\n>>>> I have now proposed several different solutions for GTT:\n>>>>\n>>>> Shared vs. private buffers for GTT:\n>>>> 1. Private buffers. This is least invasive patch, requiring no changes in relfilenodes.\n>>>> 2. Shared buffers. Requires changing relfilenode but supports parallel query execution for GTT.\n>>> I vote for #1. I think parallel query for temp objects may be a\n>>> desirable feature, but I don't think it should be the job of a patch\n>>> implementing GTTs to make it happen. 
In fact, I think it would be an\n>>> actively bad idea, because I suspect that if we do eventually support\n>>> temp relations for parallel query, we're going to want a solution that\n>>> is shared between regular temp tables and global temp tables, not\n>>> separate solutions for each.\n>> Sorry, may be I do not not understand you.\n>> It seems to me that there is only one thing preventing usage of\n>> temporary tables in parallel plans: private buffers.\n>> If global temporary tables are accessed as normal tables though shared\n>> buffers then them can be used in parallel queries\n>> and no extra support is required for it.\n>> At least I have checked that parallel queries are correctly worked for\n>> my implementation of GTT with shared buffers.\n>> So I do not understand about which \"separate solutions\" you are talking\n>> about.\n>>\n>> I can agree that private buffers may be good starting point for GTT\n>> implementation, because it is less invasive and GTT access speed is\n>> exactly the same as of normal temp tables.\n>> But I do not understand your argument why it is \"actively bad idea\".\n> Well, it sounds like you're talking about ending up in a situation\n> where local temporary tables are still in private buffers, but global\n> temporary table data is in shared buffers. I think that would be\n> inconsistent. And it would mean that when somebody wanted to make\n> local temporary tables accessible in parallel query, they'd have to\n> write a patch for that. 
In other words, I don't support dividing the\n> patches like this:\n>\n> Patch #1: Support global temporary tables + allow global temporary\n> tables to used by parallel query\n> Patch #2: Allow local temporary tables to be used by parallel query\n>\n> I support dividing them like this:\n>\n> Patch #1: Support global temporary tables\n> Patch #2: Allow (all kinds of) temporary tables to be used by parallel query\n>\n> The second division looks a lot cleaner to me, although as always I\n> might be missing something.\n>\nLogically it may be a good decision. But practically, support of parallel \naccess to GTT requires just accessing their data through shared buffers.\nBut in the case of local temp tables we would also need to somehow share \nthe table's metadata between parallel workers. It seems to be much more \ncomplicated, if possible at all.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 28 Oct 2019 16:48:43 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Oct 28, 2019 at 9:48 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> Logically it may be good decision. But piratically support of parallel\n> access to GTT requires just accessing their data through shared buffer.\n> But in case of local temp tables we need also need to some how share\n> table's metadata between parallel workers. It seems to be much more\n> complicated if ever possible.\n\nWhy? 
The backends all share a snapshot, and can load whatever they\nneed from the system catalogs.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 12:29:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Oct 28, 2019 at 9:37 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> Sorry, but both statements are not true.\n\nWell, I think they are true.\n\n> I am not sure how vital is lack of aborts for transactions working with\n> GTT at replica.\n> Some people said that there is no sense in aborts of read-only\n> transactions at replica (despite to the fact that them are saving\n> intermediate results in GTT).\n> Some people said something similar with your's \"dead on arrival\".\n> But inconsistency is not possible: if such transaction is really\n> aborted, then backend is terminated and nobody can see this inconsistency.\n\nAborting the current transaction is a very different thing from\nterminating the backend.\n\nAlso, the idea that there is no sense in aborts of read-only\ntransactions on a replica seems totally wrong. Suppose that you insert\na row into the table and then you go to insert a row in each index,\nbut one of the index inserts fails - duplicate key, out of memory\nerror, I/O error, whatever. Now the table and the index are\ninconsistent. Normally, we're protected against this by MVCC, but if\nyou use a solution that breaks MVCC by using the same XID for all\ntransactions, then it can happen.\n\n> Concerning second alternative: you can check yourself that it is not\n> *extremely* complicated and invasive.\n> I extracted changes which are related with handling transactions at\n> replica and attached them to this mail.\n> It is just 500 lines (including diff contexts). 
Certainly there are some\n> limitation of this implementation: number of transactions working with\n> GTT at replica is limited by 2^32\n> and since GTT tuples are not frozen, analog of GTT CLOG kept in memory\n> is never truncated.\n\nI admit that this patch is not lengthy, but there remains the question\nof whether it is correct. It's possible that the problem isn't as\ncomplicated as I think it is, but I do think there are quite a number\nof reasons why this patch wouldn't be considered acceptable...\n\n> I agree with it and think that implementation of GTT with private\n> buffers and no replica access is good starting point.\n\n...but given that we seem to agree on this point, perhaps it isn't\nnecessary to argue about those things right now.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 12:40:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 28.10.2019 19:40, Robert Haas wrote:\n> Aborting the current transaction is a very different thing from\n> terminating the backend.\n>\n> Also, the idea that there is no sense in aborts of read-only\n> transactions on a replica seems totally wrong. Suppose that you insert\n> a row into the table and then you go to insert a row in each index,\n> but one of the index inserts fails - duplicate key, out of memory\n> error, I/O error, whatever. Now the table and the index are\n> inconsistent. Normally, we're protected against this by MVCC, but if\n> you use a solution that breaks MVCC by using the same XID for all\n> transactions, then it can happen.\n\n\nCertainly I understand the difference between aborting a transaction and \nterminating the backend.\nI do not say that it is a good solution. 
And definitely aborts can happen \nfor read-only transactions.\nI just wanted to make one point: transaction aborts have \ntwo kinds of causes:\n- expected programming errors: deadlocks, conversion errors, unique \nconstraint violation,...\n- unexpected system errors: disk space exhaustion, out of memory, I/O \nerrors...\n\nUsually, with read-only transactions at a replica, we do not have to deal \nwith errors of the first kind.\nSo a transaction may still be aborted, but such an abort most likely means that \nsomething is wrong with the system,\nand a restart of the backend is not such a bad solution in this situation.\n\nIn any case, I do not insist on this \"frozen XID\" approach.\nThe only advantage of this approach is that it is very simple to \nimplement: the corresponding patch contains just 80 lines of code\nand actually it requires just 5 (five) one-line changes.\nI disagreed with your statement only because a restart of the backend makes \nit impossible to observe any inconsistencies in the database.\n\n> ...but given that we seem to agree on this point, perhaps it isn't\n> necessary to argue about those things right now.\n>\nOk.\nI attached a new patch for GTT with local (private) buffers and no replica \naccess.\nIt provides GTT support for all built-in indexes.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 29 Oct 2019 11:04:52 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 25.10.2019 20:00, Pavel Stehule wrote:\n>\n> >\n> >> So except the limitation mentioned above (which I do not\n> consider as critical) there is only one problem which was not\n> addressed: maintaining statistics for GTT.\n> >> If all of the following conditions are true:\n> >>\n> >> 1) GTT are used in joins\n> >> 2) There are indexes defined for GTT\n> >> 3) Size and histogram of GTT in different backends can\n> 
significantly vary.\n> >> 4) ANALYZE was explicitly called for GTT\n> >>\n> >> then query execution plan built in one backend will be also\n> used for other backends where it can be inefficient.\n> >> I also do not consider this problem as \"show stopper\" for\n> adding GTT to Postgres.\n> > I think that's *definitely* a show stopper.\n> Well, if both you and Pavel think that it is really \"show\n> stopper\", then\n> this problem really has to be addressed.\n> I slightly confused about this opinion, because Pavel has told me\n> himself that 99% of users never create indexes for temp tables\n> or run \"analyze\" for them. And without it, this problem is not a\n> problem\n> at all.\n>\n>\n> Users doesn't do ANALYZE on temp tables in 99%. It's true. But second \n> fact is so users has lot of problems. It's very similar to wrong \n> statistics on persistent tables. When data are small, then it is not \n> problem for users, although from my perspective it's not optimal. When \n> data are not small, then the problem can be brutal. Temporary tables \n> are not a exception. And users and developers are people - we know \n> only about fatal problems. There are lot of unoptimized queries, but \n> because the problem is not fatal, then it is not reason for report it. \n> And lot of people has not any idea how fast the databases can be. 
The \n> knowledges of  users and app developers are sad book.\n>\n> Pavel\n\nIt seems to me that I have found a quite elegant solution for per-backend \nstatistics for GTT: I just insert them into the backend's catalog cache, but \nnot into the pg_statistic table itself.\nTo do it I have to add InsertSysCache/InsertCatCache functions which \ninsert a pinned entry into the corresponding cache.\nI wonder if there are some pitfalls to such an approach?\n\nA new patch for GTT is attached.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 1 Nov 2019 18:15:20 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> It seems to me that I have found quite elegant solution for per-backend statistic for GTT: I just inserting it in backend's catalog cache, but not in pg_statistic table itself.\n> To do it I have to add InsertSysCache/InsertCatCache functions which insert pinned entry in the correspondent cache.\n> I wonder if there are some pitfalls of such approach?\n\nThat sounds pretty hackish. You'd have to be very careful, for\nexample, that if the tables were dropped or re-analyzed, all of the\nold entries got removed -- and then it would still fail if any code\ntried to access the statistics directly from the table, rather than\nvia the caches. 
My assumption is that the statistics ought to be\nstored in some backend-private data structure designed for that\npurpose, and that the code that needs the data should be taught to\nlook for it there when the table is a GTT.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Nov 2019 11:26:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 01.11.2019 18:26, Robert Haas wrote:\n> On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> It seems to me that I have found quite elegant solution for per-backend statistic for GTT: I just inserting it in backend's catalog cache, but not in pg_statistic table itself.\n>> To do it I have to add InsertSysCache/InsertCatCache functions which insert pinned entry in the correspondent cache.\n>> I wonder if there are some pitfalls of such approach?\n> That sounds pretty hackish. You'd have to be very careful, for\n> example, that if the tables were dropped or re-analyzed, all of the\n> old entries got removed --\n\nI have checked it:\n- when a table is reanalyzed, the cache entries are replaced.\n- when a table is dropped, the cache entries are removed.\n\n> and then it would still fail if any code\n> tried to access the statistics directly from the table, rather than\n> via the caches. My assumption is that the statistics ought to be\n> stored in some backend-private data structure designed for that\n> purpose, and that the code that needs the data should be taught to\n> look for it there when the table is a GTT.\n\nYes, if you do \"select * from pg_statistic\" then you will not see \nstatistics for GTT in this case.\nBut I do not think that it is so critical. 
I do not believe that anybody \nis trying to manually interpret the values in this table.\nAnd the optimizer retrieves statistics through the sys-cache mechanism and so \nis able to build a correct plan in this case.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 1 Nov 2019 19:09:49 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "pá 1. 11. 2019 v 17:09 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 01.11.2019 18:26, Robert Haas wrote:\n> > On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n> > <k.knizhnik@postgrespro.ru> wrote:\n> >> It seems to me that I have found quite elegant solution for per-backend\n> statistic for GTT: I just inserting it in backend's catalog cache, but not\n> in pg_statistic table itself.\n> >> To do it I have to add InsertSysCache/InsertCatCache functions which\n> insert pinned entry in the correspondent cache.\n> >> I wonder if there are some pitfalls of such approach?\n> > That sounds pretty hackish. 
My assumption is that the statistics ought to be\n> > stored in some backend-private data structure designed for that\n> > purpose, and that the code that needs the data should be taught to\n> > look for it there when the table is a GTT.\n>\n> Yes, if you do \"select * from pg_statistic\" then you will not see\n> statistic for GTT in this case.\n> But I do not think that it is so critical. I do not believe that anybody\n> is trying to manually interpret values in this table.\n> And optimizer is retrieving statistic through sys-cache mechanism and so\n> is able to build correct plan in this case.\n>\n\nYears ago, when I though about it, I wrote patch with similar design. It's\nworking, but surely it's ugly.\n\nI have another idea. Can be pg_statistics view instead a table?\n\nSome like\n\nSELECT * FROM pg_catalog.pg_statistics_rel\nUNION ALL\nSELECT * FROM pg_catalog.pg_statistics_gtt();\n\nInternally - when stat cache is filled, then there can be used\npg_statistics_rel and pg_statistics_gtt() directly. What I remember, there\nwas not possibility to work with queries, only with just relations.\n\nOr crazy idea - today we can implement own types of heaps. Is possible to\ncreate engine where result can be combination of some shared data and local\ndata. So union will be implemented on heap level.\nThis implementation can be simple, just scanning pages from shared buffers\nand from local buffers. For these tables we don't need complex metadata.\nIt's crazy idea, and I think so union with table function should be best.\n\nRegards\n\nPavel\n\n\n\n\n\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\npá 1. 11. 
2019 v 17:09 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n\nOn 01.11.2019 18:26, Robert Haas wrote:\n> On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> It seems to me that I have found quite elegant solution for per-backend statistic for GTT: I just inserting it in backend's catalog cache, but not in pg_statistic table itself.\n>> To do it I have to add InsertSysCache/InsertCatCache functions which insert pinned entry in the correspondent cache.\n>> I wonder if there are some pitfalls of such approach?\n> That sounds pretty hackish. You'd have to be very careful, for\n> example, that if the tables were dropped or re-analyzed, all of the\n> old entries got removed --\n\nI have checked it:\n- when table is reanalyzed, then cache entries are replaced.\n- when table is dropped, then cache entries are removed.\n\n> and then it would still fail if any code\n> tried to access the statistics directly from the table, rather than\n> via the caches. My assumption is that the statistics ought to be\n> stored in some backend-private data structure designed for that\n> purpose, and that the code that needs the data should be taught to\n> look for it there when the table is a GTT.\n\nYes, if you do \"select * from pg_statistic\" then you will not see \nstatistic for GTT in this case.\nBut I do not think that it is so critical. I do not believe that anybody \nis trying to manually interpret values in this table.\nAnd optimizer is retrieving statistic through sys-cache mechanism and so \nis able to build correct plan in this case.Years ago, when I though about it, I wrote patch with similar design. It's working, but surely it's ugly.I have another idea. Can be pg_statistics view instead a table?Some likeSELECT * FROM pg_catalog.pg_statistics_relUNION ALLSELECT * FROM pg_catalog.pg_statistics_gtt();Internally - when stat cache is filled, then there can be used pg_statistics_rel and pg_statistics_gtt() directly. 
What I remember, there was not possibility to work with queries, only with just relations.Or crazy idea - today we can implement own types of heaps. Is possible to create engine where result can be combination of some shared data and local data. So union will be implemented on heap level.This implementation can be simple, just scanning pages from shared buffers and from local buffers. For these tables we don't need complex metadata. It's crazy idea, and I think so union with table function should be best.RegardsPavel\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sat, 2 Nov 2019 06:30:33 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Sat, Nov 2, 2019 at 6:31 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> pá 1. 11. 2019 v 17:09 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n>>\n>> On 01.11.2019 18:26, Robert Haas wrote:\n>> > On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n>> > <k.knizhnik@postgrespro.ru> wrote:\n>> >> It seems to me that I have found quite elegant solution for per-backend statistic for GTT: I just inserting it in backend's catalog cache, but not in pg_statistic table itself.\n>> >> To do it I have to add InsertSysCache/InsertCatCache functions which insert pinned entry in the correspondent cache.\n>> >> I wonder if there are some pitfalls of such approach?\n>> > That sounds pretty hackish. 
You'd have to be very careful, for\n>> > example, that if the tables were dropped or re-analyzed, all of the\n>> > old entries got removed --\n>>\n>> I have checked it:\n>> - when table is reanalyzed, then cache entries are replaced.\n>> - when table is dropped, then cache entries are removed.\n>>\n>> > and then it would still fail if any code\n>> > tried to access the statistics directly from the table, rather than\n>> > via the caches. My assumption is that the statistics ought to be\n>> > stored in some backend-private data structure designed for that\n>> > purpose, and that the code that needs the data should be taught to\n>> > look for it there when the table is a GTT.\n>>\n>> Yes, if you do \"select * from pg_statistic\" then you will not see\n>> statistic for GTT in this case.\n>> But I do not think that it is so critical. I do not believe that anybody\n>> is trying to manually interpret values in this table.\n>> And optimizer is retrieving statistic through sys-cache mechanism and so\n>> is able to build correct plan in this case.\n>\n>\n> Years ago, when I though about it, I wrote patch with similar design. It's working, but surely it's ugly.\n>\n> I have another idea. Can be pg_statistics view instead a table?\n>\n> Some like\n>\n> SELECT * FROM pg_catalog.pg_statistics_rel\n> UNION ALL\n> SELECT * FROM pg_catalog.pg_statistics_gtt();\n>\n> Internally - when stat cache is filled, then there can be used pg_statistics_rel and pg_statistics_gtt() directly. What I remember, there was not possibility to work with queries, only with just relations.\n\nIt'd be a loss if you lose the ability to see the statistics, as there\nare valid use cases where you need to see the stats, eg. understanding\nwhy you don't get the plan you wanted. 
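For instance - and this is only an illustrative sketch, the table name below is made up - the usual way to check what the planner knows about a table's columns is something like:

```sql
-- Inspect per-column planner statistics through the standard pg_stats view
-- (a readable wrapper over pg_statistic).  If GTT statistics lived only in a
-- backend-private catalog cache, this query would return no rows for the
-- table.  'my_gtt' is a hypothetical global temporary table name.
SELECT attname, n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 'my_gtt';
```
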
There's also at least one\nextension [1] that allows you to backup and use restored statistics,\nso there are definitely people interested in it.\n\n[1]: https://github.com/ossc-db/pg_dbms_stats\n\n\n", "msg_date": "Sat, 2 Nov 2019 08:19:47 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "so 2. 11. 2019 v 8:18 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Sat, Nov 2, 2019 at 6:31 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > pá 1. 11. 2019 v 17:09 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n> >>\n> >> On 01.11.2019 18:26, Robert Haas wrote:\n> >> > On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n> >> > <k.knizhnik@postgrespro.ru> wrote:\n> >> >> It seems to me that I have found quite elegant solution for\n> per-backend statistic for GTT: I just inserting it in backend's catalog\n> cache, but not in pg_statistic table itself.\n> >> >> To do it I have to add InsertSysCache/InsertCatCache functions which\n> insert pinned entry in the correspondent cache.\n> >> >> I wonder if there are some pitfalls of such approach?\n> >> > That sounds pretty hackish. You'd have to be very careful, for\n> >> > example, that if the tables were dropped or re-analyzed, all of the\n> >> > old entries got removed --\n> >>\n> >> I have checked it:\n> >> - when table is reanalyzed, then cache entries are replaced.\n> >> - when table is dropped, then cache entries are removed.\n> >>\n> >> > and then it would still fail if any code\n> >> > tried to access the statistics directly from the table, rather than\n> >> > via the caches. 
My assumption is that the statistics ought to be\n> >> > stored in some backend-private data structure designed for that\n> >> > purpose, and that the code that needs the data should be taught to\n> >> > look for it there when the table is a GTT.\n> >>\n> >> Yes, if you do \"select * from pg_statistic\" then you will not see\n> >> statistic for GTT in this case.\n> >> But I do not think that it is so critical. I do not believe that\n> anybody\n> >> is trying to manually interpret values in this table.\n> >> And optimizer is retrieving statistic through sys-cache mechanism and\n> so\n> >> is able to build correct plan in this case.\n> >\n> >\n> > Years ago, when I though about it, I wrote patch with similar design.\n> It's working, but surely it's ugly.\n> >\n> > I have another idea. Can be pg_statistics view instead a table?\n> >\n> > Some like\n> >\n> > SELECT * FROM pg_catalog.pg_statistics_rel\n> > UNION ALL\n> > SELECT * FROM pg_catalog.pg_statistics_gtt();\n> >\n> > Internally - when stat cache is filled, then there can be used\n> pg_statistics_rel and pg_statistics_gtt() directly. What I remember, there\n> was not possibility to work with queries, only with just relations.\n>\n> It'd be a loss if you lose the ability to see the statistics, as there\n> are valid use cases where you need to see the stats, eg. understanding\n> why you don't get the plan you wanted. There's also at least one\n> extension [1] that allows you to backup and use restored statistics,\n> so there are definitely people interested in it.\n>\n> [1]: https://github.com/ossc-db/pg_dbms_stats\n\n\nI don't think so - the extensions can use the UNION and the content will be\nthe same as the caches used by the planner.\n", "msg_date": "Sat, 2 Nov 2019 08:23:05 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "so 2. 11. 2019 v 8:23 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> so 2. 11. 2019 v 8:18 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n>\n>> On Sat, Nov 2, 2019 at 6:31 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> >\n>> > pá 1. 11. 
2019 v 17:09 odesílatel Konstantin Knizhnik <\n>> k.knizhnik@postgrespro.ru> napsal:\n>> >>\n>> >> On 01.11.2019 18:26, Robert Haas wrote:\n>> >> > On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n>> >> > <k.knizhnik@postgrespro.ru> wrote:\n>> >> >> It seems to me that I have found quite elegant solution for\n>> per-backend statistic for GTT: I just inserting it in backend's catalog\n>> cache, but not in pg_statistic table itself.\n>> >> >> To do it I have to add InsertSysCache/InsertCatCache functions\n>> which insert pinned entry in the correspondent cache.\n>> >> >> I wonder if there are some pitfalls of such approach?\n>> >> > That sounds pretty hackish. You'd have to be very careful, for\n>> >> > example, that if the tables were dropped or re-analyzed, all of the\n>> >> > old entries got removed --\n>> >>\n>> >> I have checked it:\n>> >> - when table is reanalyzed, then cache entries are replaced.\n>> >> - when table is dropped, then cache entries are removed.\n>> >>\n>> >> > and then it would still fail if any code\n>> >> > tried to access the statistics directly from the table, rather than\n>> >> > via the caches. My assumption is that the statistics ought to be\n>> >> > stored in some backend-private data structure designed for that\n>> >> > purpose, and that the code that needs the data should be taught to\n>> >> > look for it there when the table is a GTT.\n>> >>\n>> >> Yes, if you do \"select * from pg_statistic\" then you will not see\n>> >> statistic for GTT in this case.\n>> >> But I do not think that it is so critical. I do not believe that\n>> anybody\n>> >> is trying to manually interpret values in this table.\n>> >> And optimizer is retrieving statistic through sys-cache mechanism and\n>> so\n>> >> is able to build correct plan in this case.\n>> >\n>> >\n>> > Years ago, when I though about it, I wrote patch with similar design.\n>> It's working, but surely it's ugly.\n>> >\n>> > I have another idea. 
Can be pg_statistics view instead a table?\n>> >\n>> > Some like\n>> >\n>> > SELECT * FROM pg_catalog.pg_statistics_rel\n>> > UNION ALL\n>> > SELECT * FROM pg_catalog.pg_statistics_gtt();\n>> >\n>> > Internally - when stat cache is filled, then there can be used\n>> pg_statistics_rel and pg_statistics_gtt() directly. What I remember, there\n>> was not possibility to work with queries, only with just relations.\n>>\n>> It'd be a loss if you lose the ability to see the statistics, as there\n>> are valid use cases where you need to see the stats, eg. understanding\n>> why you don't get the plan you wanted. There's also at least one\n>> extension [1] that allows you to backup and use restored statistics,\n>> so there are definitely people interested in it.\n>>\n>> [1]: https://github.com/ossc-db/pg_dbms_stats\n>\n>\n> I don't think - the extensions can use UNION and the content will be same\n> as caches used by planner.\n>\n\nsure, if some one try to modify directly system tables, then it should be\nfixed.", "msg_date": "Sat, 2 Nov 2019 08:24:20 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Sat, Nov 2, 2019 at 8:23 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> so 2. 11. 2019 v 8:18 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n>>\n>> On Sat, Nov 2, 2019 at 6:31 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> >\n>> > pá 1. 11. 2019 v 17:09 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n>> >>\n>> >> On 01.11.2019 18:26, Robert Haas wrote:\n>> >> > On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n>> >> > <k.knizhnik@postgrespro.ru> wrote:\n>> >> >> It seems to me that I have found quite elegant solution for per-backend statistic for GTT: I just inserting it in backend's catalog cache, but not in pg_statistic table itself.\n>> >> >> To do it I have to add InsertSysCache/InsertCatCache functions which insert pinned entry in the correspondent cache.\n>> >> >> I wonder if there are some pitfalls of such approach?\n>> >> > That sounds pretty hackish. You'd have to be very careful, for\n>> >> > example, that if the tables were dropped or re-analyzed, all of the\n>> >> > old entries got removed --\n>> >>\n>> >> I have checked it:\n>> >> - when table is reanalyzed, then cache entries are replaced.\n>> >> - when table is dropped, then cache entries are removed.\n>> >>\n>> >> > and then it would still fail if any code\n>> >> > tried to access the statistics directly from the table, rather than\n>> >> > via the caches.
My assumption is that the statistics ought to be\n>> >> > stored in some backend-private data structure designed for that\n>> >> > purpose, and that the code that needs the data should be taught to\n>> >> > look for it there when the table is a GTT.\n>> >>\n>> >> Yes, if you do \"select * from pg_statistic\" then you will not see\n>> >> statistic for GTT in this case.\n>> >> But I do not think that it is so critical. I do not believe that anybody\n>> >> is trying to manually interpret values in this table.\n>> >> And optimizer is retrieving statistic through sys-cache mechanism and so\n>> >> is able to build correct plan in this case.\n>> >\n>> >\n>> > Years ago, when I though about it, I wrote patch with similar design. It's working, but surely it's ugly.\n>> >\n>> > I have another idea. Can be pg_statistics view instead a table?\n>> >\n>> > Some like\n>> >\n>> > SELECT * FROM pg_catalog.pg_statistics_rel\n>> > UNION ALL\n>> > SELECT * FROM pg_catalog.pg_statistics_gtt();\n>> >\n>> > Internally - when stat cache is filled, then there can be used pg_statistics_rel and pg_statistics_gtt() directly. What I remember, there was not possibility to work with queries, only with just relations.\n>>\n>> It'd be a loss if you lose the ability to see the statistics, as there\n>> are valid use cases where you need to see the stats, eg. understanding\n>> why you don't get the plan you wanted. There's also at least one\n>> extension [1] that allows you to backup and use restored statistics,\n>> so there are definitely people interested in it.\n>>\n>> [1]: https://github.com/ossc-db/pg_dbms_stats\n>\n>\n> I don't think - the extensions can use UNION and the content will be same as caches used by planner.\n\nYes, I agree that changing pg_statistics to be a view as you showed\nwould fix the problem. I was answering Konstantin's point:\n\n>> >> But I do not think that it is so critical. 
I do not believe that anybody\n>> >> is trying to manually interpret values in this table.\n>> >> And optimizer is retrieving statistic through sys-cache mechanism and so\n>> >> is able to build correct plan in this case.\n\nwhich is IMHO a wrong assumption.\n\n\n", "msg_date": "Sat, 2 Nov 2019 09:56:09 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 02.11.2019 10:19, Julien Rouhaud wrote:\n> On Sat, Nov 2, 2019 at 6:31 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> pá 1. 11. 2019 v 17:09 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n>>> On 01.11.2019 18:26, Robert Haas wrote:\n>>>> On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n>>>> <k.knizhnik@postgrespro.ru> wrote:\n>>>>> It seems to me that I have found quite elegant solution for per-backend statistic for GTT: I just inserting it in backend's catalog cache, but not in pg_statistic table itself.\n>>>>> To do it I have to add InsertSysCache/InsertCatCache functions which insert pinned entry in the correspondent cache.\n>>>>> I wonder if there are some pitfalls of such approach?\n>>>> That sounds pretty hackish. You'd have to be very careful, for\n>>>> example, that if the tables were dropped or re-analyzed, all of the\n>>>> old entries got removed --\n>>> I have checked it:\n>>> - when table is reanalyzed, then cache entries are replaced.\n>>> - when table is dropped, then cache entries are removed.\n>>>\n>>>> and then it would still fail if any code\n>>>> tried to access the statistics directly from the table, rather than\n>>>> via the caches. 
My assumption is that the statistics ought to be\n>>>> stored in some backend-private data structure designed for that\n>>>> purpose, and that the code that needs the data should be taught to\n>>>> look for it there when the table is a GTT.\n>>> Yes, if you do \"select * from pg_statistic\" then you will not see\n>>> statistic for GTT in this case.\n>>> But I do not think that it is so critical. I do not believe that anybody\n>>> is trying to manually interpret values in this table.\n>>> And optimizer is retrieving statistic through sys-cache mechanism and so\n>>> is able to build correct plan in this case.\n>>\n>> Years ago, when I though about it, I wrote patch with similar design. It's working, but surely it's ugly.\n>>\n>> I have another idea. Can be pg_statistics view instead a table?\n>>\n>> Some like\n>>\n>> SELECT * FROM pg_catalog.pg_statistics_rel\n>> UNION ALL\n>> SELECT * FROM pg_catalog.pg_statistics_gtt();\n>>\n>> Internally - when stat cache is filled, then there can be used pg_statistics_rel and pg_statistics_gtt() directly. What I remember, there was not possibility to work with queries, only with just relations.\n> It'd be a loss if you lose the ability to see the statistics, as there\n> are valid use cases where you need to see the stats, eg. understanding\n> why you don't get the plan you wanted. There's also at least one\n> extension [1] that allows you to backup and use restored statistics,\n> so there are definitely people interested in it.\n>\n> [1]: https://github.com/ossc-db/pg_dbms_stats\nIt seems to have completely no sense to backup and restore statistic for \ntemporary tables which life time is limited to life time of backend,\ndoesn't it?\n\n\n\n\n", "msg_date": "Sat, 2 Nov 2019 18:09:42 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 02.11.2019 8:30, Pavel Stehule wrote:\n>\n>\n> pá 1. 11. 
2019 v 17:09 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n>\n>\n> On 01.11.2019 18:26, Robert Haas wrote:\n> > On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n> > <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>>\n> wrote:\n> >> It seems to me that I have found quite elegant solution for\n> per-backend statistic for GTT: I just inserting it in backend's\n> catalog cache, but not in pg_statistic table itself.\n> >> To do it I have to add InsertSysCache/InsertCatCache functions\n> which insert pinned entry in the correspondent cache.\n> >> I wonder if there are some pitfalls of such approach?\n> > That sounds pretty hackish. You'd have to be very careful, for\n> > example, that if the tables were dropped or re-analyzed, all of the\n> > old entries got removed --\n>\n> I have checked it:\n> - when table is reanalyzed, then cache entries are replaced.\n> - when table is dropped, then cache entries are removed.\n>\n> > and then it would still fail if any code\n> > tried to access the statistics directly from the table, rather than\n> > via the caches. My assumption is that the statistics ought to be\n> > stored in some backend-private data structure designed for that\n> > purpose, and that the code that needs the data should be taught to\n> > look for it there when the table is a GTT.\n>\n> Yes, if you do \"select * from pg_statistic\" then you will not see\n> statistic for GTT in this case.\n> But I do not think that it is so critical. I do not believe that\n> anybody\n> is trying to manually interpret values in this table.\n> And optimizer is retrieving statistic through sys-cache mechanism\n> and so\n> is able to build correct plan in this case.\n>\n>\n> Years ago, when I though about it, I wrote patch with similar design. \n> It's working, but surely it's ugly.\n>\n> I have another idea. 
Can be pg_statistics view instead a table?\n>\n> Some like\n>\n> SELECT * FROM pg_catalog.pg_statistics_rel\n> UNION ALL\n> SELECT * FROM pg_catalog.pg_statistics_gtt();\n\nAnd pg_catalog.pg_statistics_gtt() is set returning functions?\nI afraid that it is not acceptable solution from performance point of \nview: pg_statictic table is accessed by keys (<relid>,<attpos>,<inh>)\nIf it can not be done using index scan, then it can cause significant \nperformance slow down.\n\n>\n> Internally - when stat cache is filled, then there can be used \n> pg_statistics_rel and pg_statistics_gtt() directly. What I remember, \n> there was not possibility to work with queries, only with just relations.\n>\n> Or crazy idea - today we can implement own types of heaps. Is possible \n> to create engine where result can be combination of some shared data \n> and local data. So union will be implemented on heap level.\n> This implementation can be simple, just scanning pages from shared \n> buffers and from local buffers. For these tables we don't need complex \n> metadata. It's crazy idea, and I think so union with table function \n> should be best.\n\nFrankly speaking, implementing special heap access method for \npg_statistic just to handle case of global temp tables seems to be overkill\nfrom my point of view. It requires a lot coding (or at least copying a \nlot of code from heapam). Also, as I wrote above, we need also index for \nefficient lookup of statistic.", "msg_date": "Sat, 2 Nov 2019 18:15:50 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Sat, Nov 2, 2019 at 4:09 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n>\n> On 02.11.2019 10:19, Julien Rouhaud wrote:\n> > On Sat, Nov 2, 2019 at 6:31 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >> pá 1. 11.
2019 v 17:09 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n> >>> On 01.11.2019 18:26, Robert Haas wrote:\n> >>>> On Fri, Nov 1, 2019 at 11:15 AM Konstantin Knizhnik\n> >>>> <k.knizhnik@postgrespro.ru> wrote:\n> >>>>> It seems to me that I have found quite elegant solution for per-backend statistic for GTT: I just inserting it in backend's catalog cache, but not in pg_statistic table itself.\n> >>>>> To do it I have to add InsertSysCache/InsertCatCache functions which insert pinned entry in the correspondent cache.\n> >>>>> I wonder if there are some pitfalls of such approach?\n> >>>> That sounds pretty hackish. You'd have to be very careful, for\n> >>>> example, that if the tables were dropped or re-analyzed, all of the\n> >>>> old entries got removed --\n> >>> I have checked it:\n> >>> - when table is reanalyzed, then cache entries are replaced.\n> >>> - when table is dropped, then cache entries are removed.\n> >>>\n> >>>> and then it would still fail if any code\n> >>>> tried to access the statistics directly from the table, rather than\n> >>>> via the caches. My assumption is that the statistics ought to be\n> >>>> stored in some backend-private data structure designed for that\n> >>>> purpose, and that the code that needs the data should be taught to\n> >>>> look for it there when the table is a GTT.\n> >>> Yes, if you do \"select * from pg_statistic\" then you will not see\n> >>> statistic for GTT in this case.\n> >>> But I do not think that it is so critical. I do not believe that anybody\n> >>> is trying to manually interpret values in this table.\n> >>> And optimizer is retrieving statistic through sys-cache mechanism and so\n> >>> is able to build correct plan in this case.\n> >>\n> >> Years ago, when I though about it, I wrote patch with similar design. It's working, but surely it's ugly.\n> >>\n> >> I have another idea. 
Can be pg_statistics view instead a table?\n> >>\n> >> Some like\n> >>\n> >> SELECT * FROM pg_catalog.pg_statistics_rel\n> >> UNION ALL\n> >> SELECT * FROM pg_catalog.pg_statistics_gtt();\n> >>\n> >> Internally - when stat cache is filled, then there can be used pg_statistics_rel and pg_statistics_gtt() directly. What I remember, there was not possibility to work with queries, only with just relations.\n> > It'd be a loss if you lose the ability to see the statistics, as there\n> > are valid use cases where you need to see the stats, eg. understanding\n> > why you don't get the plan you wanted. There's also at least one\n> > extension [1] that allows you to backup and use restored statistics,\n> > so there are definitely people interested in it.\n> >\n> > [1]: https://github.com/ossc-db/pg_dbms_stats\n> It seems to have completely no sense to backup and restore statistic for\n> temporary tables which life time is limited to life time of backend,\n> doesn't it?\n\nIn general yes I agree, but it doesn't if the goal is to understand\nwhy even after an analyze on the temporary table your query is still\nbehaving poorly. It can be useful to allow reproduction or just give\nsomeone else the statistics to see what's going on.\n\n\n", "msg_date": "Sat, 2 Nov 2019 16:30:08 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "And pg_catalog.pg_statistics_gtt() is set returning functions?\n>\n\nyes\n\nI afraid that it is not acceptable solution from performance point of view:\n> pg_statictic table is accessed by keys (<relid>,<attpos>,<inh>)\n>\n\nI don't think so it is problem. The any component, that needs to use fast\naccess can use some special function that check index or check some memory\nbuffers.\n\n\nIf it can not be done using index scan, then it can cause significant\n> performance slow down.\n>\n\nwhere you need fast access when you use SQL access? 
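For context, the keyed access being discussed can be written out as plain SQL (a sketch only -- the planner itself fetches these rows through the STATRELATTINH syscache rather than through SQL, and `some_table` here is a placeholder name):

```sql
-- The lookup pattern pg_statistic is built for: the three key columns
-- (starelid, staattnum, stainh) are covered by the unique index
-- pg_statistic_relid_att_inh_index, so this resolves to a single index probe.
SELECT stanullfrac, stawidth, stadistinct
  FROM pg_catalog.pg_statistic
 WHERE starelid = 'some_table'::regclass  -- placeholder table name
   AND staattnum = 1                      -- attribute number of the column
   AND stainh = false;                    -- non-inherited statistics
```

A view assembled with the proposed UNION ALL would only be read by SQL-level tools, so it does not have to match this index-probe speed; the planner can keep using the syscache for regular tables and the backend-local data for a GTT.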
Inside postgres\noptimizer is caches everywhere. And statistics cache should to know so have\nto check index and some memory buffers.\n\nThe proposed view will not be used by optimizer, but it can be used by some\nhigher layers. I think so there is a agreement so GTT metadata should not\nbe stored in system catalogue. If are stored in some syscache or somewhere\nelse is not important in this moment. But can be nice if for user the GTT\nmetadata should not be black hole. I think so is better to change some\ncurrent tables to views, than use some special function just specialized\nfor GTT (these functions should to exists in both variants). When I think\nabout it - this is important not just for functionality that we expect from\nGTT. It is important for consistency of Postgres catalog - how much\ndifferent should be GTT than other types of tables in system catalogue from\nuser's perspective.", "msg_date": "Sat, 2 Nov 2019 17:34:10 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Dear Hackers\n\n\nI attached the patch of GTT implementation\nI base on PG12.\nThe GTT design came from my first email.\nSome limitations in patch will be eliminated in later versions.\n\nLater, I will comment on Konstantin's patch and make some proposals for cooperation.\nLooking forward to your feedback.\n\nThanks.\n\nZeng Wenjing\n\n\n\n\n\n\n\n> 2019年10月29日 上午12:40,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Mon, Oct 28, 2019 at 9:37 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> Sorry, but both statements are not true.\n> \n> Well, I think they are true.\n> \n>> I am not sure how vital is lack of aborts for transactions working with\n>> GTT at replica.\n>> Some people said that there is no sense in aborts of read-only\n>> transactions at replica (despite to the fact that them are saving\n>> intermediate results in GTT).\n>> Some people said something similar with your's \"dead on arrival\".\n>> But inconsistency is not possible: if such transaction is really\n>> aborted, then backend is terminated and nobody can see this inconsistency.\n> \n> Aborting the current transaction is a very different thing from\n> terminating the backend.\n> \n> Also, the idea that there is no sense in aborts of read-only\n> transactions on a replica seems totally wrong.
Suppose that you insert\n> a row into the table and then you go to insert a row in each index,\n> but one of the index inserts fails - duplicate key, out of memory\n> error, I/O error, whatever. Now the table and the index are\n> inconsistent. Normally, we're protected against this by MVCC, but if\n> you use a solution that breaks MVCC by using the same XID for all\n> transactions, then it can happen.\n> \n>> Concerning second alternative: you can check yourself that it is not\n>> *extremely* complicated and invasive.\n>> I extracted changes which are related with handling transactions at\n>> replica and attached them to this mail.\n>> It is just 500 lines (including diff contexts). Certainly there are some\n>> limitation of this implementation: number of transactions working with\n>> GTT at replica is limited by 2^32\n>> and since GTT tuples are not frozen, analog of GTT CLOG kept in memory\n>> is never truncated.\n> \n> I admit that this patch is not lengthy, but there remains the question\n> of whether it is correct. 
It's possible that the problem isn't as\n> complicated as I think it is, but I do think there are quite a number\n> of reasons why this patch wouldn't be considered acceptable...\n> \n>> I agree with it and think that implementation of GTT with private\n>> buffers and no replica access is good starting point.\n> \n> ...but given that we seem to agree on this point, perhaps it isn't\n> necessary to argue about those things right now.\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company", "msg_date": "Wed, 06 Nov 2019 21:24:36 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 06.11.2019 16:24, 曾文旌(义从) wrote:\n> Dear Hackers\n>\n>\n> I attached the patch of GTT implementationI base on PG12.\n> The GTT design came from my first email.\n> Some limitations in patch will be eliminated in later versions.\n>\n> Later, I will comment on Konstantin's patch and make some proposals for cooperation.\n> Looking forward to your feedback.\n>\n> Thanks.\n>\n> Zeng Wenjing\n>\n\nThank you for this patch.\nMy first comments:\n\n1.  I have ported you patch to the latest Postgres version (my patch is \nattached).\n2. You patch is supporting only B-Tree index for GTT. All other indexes \n(hash, gin, gist, brin,...) are not currently supported.\n3. I do not understand the reason for the following limitation:\n\"We allow to create index on global temp table only this session use it\"\n\nFirst of all it seems to significantly reduce usage of global temp tables.\nWhy do we need GTT at all? Mostly because we need to access temporary \ndata in more than one backend. 
Otherwise we can just use normal table.\nIf temp table is expected to be larger enough, so that we need to create \nindex for it, then it is hard to believe that it will be needed only in \none backend.\n\nMay be the assumption is that all indexes has to be created before GTT \nstart to be used.\nBut right now this check is not working correctly in any case - if you \ninsert some data into the table, then\nyou can not create index any more:\n\npostgres=# create global temp table gtt(x integer primary key, y integer);\nCREATE TABLE\npostgres=# insert into gtt values (generate_series(1,100000), \ngenerate_series(1,100000));\nINSERT 0 100000\npostgres=# create index on gtt(y);\nERROR:  can not create index when have one or more backend attached this \nglobal temp table\n\nI wonder why do you need such restriction?\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 6 Nov 2019 19:08:12 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n> 2019年11月7日 上午12:08,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 06.11.2019 16:24, 曾文旌(义从) wrote:\n>> Dear Hackers\n>> \n>> \n>> I attached the patch of GTT implementationI base on PG12.\n>> The GTT design came from my first email.\n>> Some limitations in patch will be eliminated in later versions.\n>> \n>> Later, I will comment on Konstantin's patch and make some proposals for cooperation.\n>> Looking forward to your feedback.\n>> \n>> Thanks.\n>> \n>> Zeng Wenjing\n>> \n> \n> Thank you for this patch.\n> My first comments:\n> \n> 1. I have ported you patch to the latest Postgres version (my patch is attached).\n> 2. You patch is supporting only B-Tree index for GTT. All other indexes (hash, gin, gist, brin,...) 
 are not currently supported.\nCurrently I support only btree indexes.\nI noticed that your patch supports more index types, which is where I'd like to work with you.\n\n> 3. I do not understand the reason for the following limitation:\n> \"We allow to create index on global temp table only this session use it\"\n> \n> First of all it seems to significantly reduce usage of global temp tables.\n> Why do we need GTT at all? Mostly because we need to access temporary data in more than one backend. Otherwise we can just use normal table.\n> If temp table is expected to be larger enough, so that we need to create index for it, then it is hard to believe that it will be needed only in one backend.\n> \n> May be the assumption is that all indexes has to be created before GTT start to be used.\nYes. Currently, an index on a GTT can only be created while the table is empty and no other session is using it.\nThere are two possible improvements:\n1 An index can be created on GTT(A) when GTT(A) is not empty in the current session, as long as the table is empty in the other sessions.\nIndex_build needs to be done in the current session, just like for a normal table. This improvement is relatively easy.\n\n2 An index can be created on GTT(A) when more than one session is using GTT(A).\nWhen I finish creating an index on the GTT in this session and mark it as a valid index, that is not true for the GTT in the other sessions.\nIndexes on the GTT in other sessions require a \"rebuild_index\" before they can be used. 
\nI don't have a better solution right now, maybe you have some suggestions.\n\n\n> But right now this check is not working correctly in any case - if you insert some data into the table, then\n> you can not create index any more:\n> \n> postgres=# create global temp table gtt(x integer primary key, y integer);\n> CREATE TABLE\n> postgres=# insert into gtt values (generate_series(1,100000), generate_series(1,100000));\n> INSERT 0 100000\n> postgres=# create index on gtt(y);\n> ERROR: can not create index when have one or more backend attached this global temp table\n> \n> I wonder why do you need such restriction?\n> \n> \n> -- \n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n> \n> <global_temporary_table_v1-pg13.patch>\n\n\n\n", "msg_date": "Thu, 07 Nov 2019 17:30:27 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "čt 7. 11. 2019 v 10:30 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> > 2019年11月7日 上午12:08,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> >\n> >\n> >\n> > On 06.11.2019 16:24, 曾文旌(义从) wrote:\n> >> Dear Hackers\n> >>\n> >>\n> >> I attached the patch of GTT implementationI base on PG12.\n> >> The GTT design came from my first email.\n> >> Some limitations in patch will be eliminated in later versions.\n> >>\n> >> Later, I will comment on Konstantin's patch and make some proposals for\n> cooperation.\n> >> Looking forward to your feedback.\n> >>\n> >> Thanks.\n> >>\n> >> Zeng Wenjing\n> >>\n> >\n> > Thank you for this patch.\n> > My first comments:\n> >\n> > 1. I have ported you patch to the latest Postgres version (my patch is\n> attached).\n> > 2. You patch is supporting only B-Tree index for GTT. All other indexes\n> (hash, gin, gist, brin,...) 
 are not currently supported.\n> Currently I only support btree index.\n> I noticed that your patch supports more index types, which is where I'd\n> like to work with you.\n>\n> > 3. I do not understand the reason for the following limitation:\n> > \"We allow to create index on global temp table only this session use it\"\n> >\n> > First of all it seems to significantly reduce usage of global temp\n> tables.\n> > Why do we need GTT at all? Mostly because we need to access temporary\n> data in more than one backend. Otherwise we can just use normal table.\n> > If temp table is expected to be larger enough, so that we need to\n> create index for it, then it is hard to believe that it will be needed only\n> in one backend.\n> >\n> > May be the assumption is that all indexes has to be created before GTT\n> start to be used.\n> Yes, Currently, GTT's index is only supported and created in an empty\n> table state, and other sessions are not using it.\n> There has two improvements pointer:\n> 1 Index can create on GTT(A) when the GTT(A) in the current session is\n> not empty, requiring the GTT table to be empty in the other session.\n> Index_build needs to be done in the current session just like a normal\n> table. This improvement is relatively easy.\n>\n> 2 Index can create on GTT(A) when more than one session are using this\n> GTT(A).\n> Because when I'm done creating an index of the GTT in this session and\n> setting it to be an valid index, it's not true for the GTT in other\n> sessions.\n> Indexes on gtt in other sessions require \"rebuild_index\" before using it.\n> I don't have a better solution right now, maybe you have some suggestions.\n>\n\nI think DDL operations can be implemented in some reduced form - so DDL\nis active only for one session, and is invisible for the other sessions.\nWhat is important is the state of the GTT object at session start.\n\nFor example ALTER TABLE DROP COLUMN can have a fatal impact on other\nsessions. 
So I think the best pattern for GTT could be - the structure of a GTT\ntable is immutable for any session that doesn't do DDL operations.\n\n\n>\n> > But right now this check is not working correctly in any case - if you\n> insert some data into the table, then\n> > you can not create index any more:\n> >\n> > postgres=# create global temp table gtt(x integer primary key, y\n> integer);\n> > CREATE TABLE\n> > postgres=# insert into gtt values (generate_series(1,100000),\n> generate_series(1,100000));\n> > INSERT 0 100000\n> > postgres=# create index on gtt(y);\n> > ERROR: can not create index when have one or more backend attached this\n> global temp table\n> >\n> > I wonder why do you need such restriction?\n> >\n> >\n> > --\n> > Konstantin Knizhnik\n> > Postgres Professional: http://www.postgrespro.com\n> > The Russian Postgres Company\n> >\n> > <global_temporary_table_v1-pg13.patch>\n>\n>", "msg_date": "Thu, 7 Nov 2019 10:40:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2019年11月7日 下午5:30,曾文旌(义从) <wenjing.zwj@alibaba-inc.com> 写道:\n> \n> \n> \n>> 2019年11月7日 上午12:08,Konstantin Knizhnik <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> 写道:\n>> \n>> \n>> \n>> On 06.11.2019 16:24, 曾文旌(义从) wrote:\n>>> Dear Hackers\n>>> \n>>> \n>>> I attached the patch of GTT implementationI base on PG12.\n>>> The GTT design came from my first email.\n>>> Some limitations in patch will be eliminated in later versions.\n>>> \n>>> Later, I will comment on Konstantin's patch and make some proposals for cooperation.\n>>> Looking forward to your feedback.\n>>> \n>>> Thanks.\n>>> \n>>> Zeng Wenjing\n>>> \n>> \n>> Thank you for this patch.\n>> My first comments:\n>> \n>> 1. I have ported you patch to the latest Postgres version (my patch is attached).\n>> 2. You patch is supporting only B-Tree index for GTT. All other indexes (hash, gin, gist, brin,...) 
are not currently supported.\n> Currently I only support btree index.\n> I noticed that your patch supports more index types, which is where I'd like to work with you.\n> \n>> 3. I do not understand the reason for the following limitation:\n>> \"We allow to create index on global temp table only this session use it\"\n>> \n>> First of all it seems to significantly reduce usage of global temp tables.\n>> Why do we need GTT at all? Mostly because we need to access temporary data in more than one backend. Otherwise we can just use normal table.\n>> If temp table is expected to be larger enough, so that we need to create index for it, then it is hard to believe that it will be needed only in one backend.\n>> \n>> May be the assumption is that all indexes has to be created before GTT start to be used.\n> Yes, Currently, GTT's index is only supported and created in an empty table state, and other sessions are not using it.\n> There has two improvements pointer:\n> 1 Index can create on GTT(A) when the GTT(A) in the current session is not empty, requiring the GTT table to be empty in the other session.\n> Index_build needs to be done in the current session just like a normal table. This improvement is relatively easy.\nThis part of the improvement has been completed.\nNew patch is attached.\n\n> \n> 2 Index can create on GTT(A) when more than one session are using this GTT(A).\n> Because when I'm done creating an index of the GTT in this session and setting it to be an valid index, it's not true for the GTT in other sessions.\n> Indexes on gtt in other sessions require \"rebuild_index\" before using it. 
\n> I don't have a better solution right now, maybe you have some suggestions.\n> \n> \n>> But right now this check is not working correctly in any case - if you insert some data into the table, then\n>> you can not create index any more:\n>> \n>> postgres=# create global temp table gtt(x integer primary key, y integer);\n>> CREATE TABLE\n>> postgres=# insert into gtt values (generate_series(1,100000), generate_series(1,100000));\n>> INSERT 0 100000\n>> postgres=# create index on gtt(y);\n>> ERROR: can not create index when have one or more backend attached this global temp table\n>> \nAn index can now be created on GTT(A) when GTT(A) is not empty in the current session,\nbut this still requires the GTT table to be empty in the other sessions.\n\n>> I wonder why do you need such restriction?\n>> \n>> \n>> -- \n>> Konstantin Knizhnik\n>> Postgres Professional: http://www.postgrespro.com\n>> The Russian Postgres Company\n>> \n>> <global_temporary_table_v1-pg13.patch>\n\n\nZeng Wenjing", "msg_date": "Thu, 07 Nov 2019 19:49:29 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2019年11月7日 下午5:40,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> čt 7. 11. 
2019 v 10:30 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n> > 2019年11月7日 上午12:08,Konstantin Knizhnik <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> 写道:\n> > \n> > \n> > \n> > On 06.11.2019 16:24, 曾文旌(义从) wrote:\n> >> Dear Hackers\n> >> \n> >> \n> >> I attached the patch of GTT implementationI base on PG12.\n> >> The GTT design came from my first email.\n> >> Some limitations in patch will be eliminated in later versions.\n> >> \n> >> Later, I will comment on Konstantin's patch and make some proposals for cooperation.\n> >> Looking forward to your feedback.\n> >> \n> >> Thanks.\n> >> \n> >> Zeng Wenjing\n> >> \n> > \n> > Thank you for this patch.\n> > My first comments:\n> > \n> > 1. I have ported you patch to the latest Postgres version (my patch is attached).\n> > 2. You patch is supporting only B-Tree index for GTT. All other indexes (hash, gin, gist, brin,...) are not currently supported.\n> Currently I only support btree index.\n> I noticed that your patch supports more index types, which is where I'd like to work with you.\n> \n> > 3. I do not understand the reason for the following limitation:\n> > \"We allow to create index on global temp table only this session use it\"\n> > \n> > First of all it seems to significantly reduce usage of global temp tables.\n> > Why do we need GTT at all? Mostly because we need to access temporary data in more than one backend. 
Otherwise we can just use normal table.\n> > If temp table is expected to be larger enough, so that we need to create index for it, then it is hard to believe that it will be needed only in one backend.\n> > \n> > May be the assumption is that all indexes has to be created before GTT start to be used.\n> Yes, Currently, GTT's index is only supported and created in an empty table state, and other sessions are not using it.\n> There has two improvements pointer:\n> 1 Index can create on GTT(A) when the GTT(A) in the current session is not empty, requiring the GTT table to be empty in the other session.\n> Index_build needs to be done in the current session just like a normal table. This improvement is relatively easy.\n> \n> 2 Index can create on GTT(A) when more than one session are using this GTT(A).\n> Because when I'm done creating an index of the GTT in this session and setting it to be an valid index, it's not true for the GTT in other sessions.\n> Indexes on gtt in other sessions require \"rebuild_index\" before using it. \n> I don't have a better solution right now, maybe you have some suggestions.\n> \n> I think so DDL operations can be implemented in some reduced form - so DDL are active only for one session, and for other sessions are invisible. Important is state of GTT object on session start. \n> \n> For example ALTER TABLE DROP COLUMN can has very fatal impact on other sessions. 
So I think the best of GTT can be pattern - the structure of GTT table is immutable for any session that doesn't do DDL operations.\nYes, those DDL statements that need to rewrite data files will have this problem.\nThis is why I disabled ALTER on GTT in the current version.\nIt can be improved; for example, ALTER GTT could be allowed when only the current session is using the table.\nUsers can also choose to kick off the other sessions that are using the GTT, then do the ALTER GTT.\nI provide a function (pg_gtt_attached_pid(relation, schema)) to query which sessions a GTT is being used by.\n\n> \n> \n> \n> > But right now this check is not working correctly in any case - if you insert some data into the table, then\n> > you can not create index any more:\n> > \n> > postgres=# create global temp table gtt(x integer primary key, y integer);\n> > CREATE TABLE\n> > postgres=# insert into gtt values (generate_series(1,100000), generate_series(1,100000));\n> > INSERT 0 100000\n> > postgres=# create index on gtt(y);\n> > ERROR: can not create index when have one or more backend attached this global temp table\n> > \n> > I wonder why do you need such restriction?\n> > \n> > \n> > -- \n> > Konstantin Knizhnik\n> > Postgres Professional: http://www.postgrespro.com <http://www.postgrespro.com/>\n> > The Russian Postgres Company\n> > \n> > <global_temporary_table_v1-pg13.patch>\n> ", "msg_date": "Thu, 07 Nov 2019 
20:17:50 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "čt 7. 11. 2019 v 13:17 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> 2019年11月7日 下午5:40,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n>\n>\n> čt 7. 11. 2019 v 10:30 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>>\n>>\n>> > 2019年11月7日 上午12:08,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n>> >\n>> >\n>> >\n>> > On 06.11.2019 16:24, 曾文旌(义从) wrote:\n>> >> Dear Hackers\n>> >>\n>> >>\n>> >> I attached the patch of GTT implementationI base on PG12.\n>> >> The GTT design came from my first email.\n>> >> Some limitations in patch will be eliminated in later versions.\n>> >>\n>> >> Later, I will comment on Konstantin's patch and make some proposals\n>> for cooperation.\n>> >> Looking forward to your feedback.\n>> >>\n>> >> Thanks.\n>> >>\n>> >> Zeng Wenjing\n>> >>\n>> >\n>> > Thank you for this patch.\n>> > My first comments:\n>> >\n>> > 1. I have ported you patch to the latest Postgres version (my patch is\n>> attached).\n>> > 2. You patch is supporting only B-Tree index for GTT. All other indexes\n>> (hash, gin, gist, brin,...) are not currently supported.\n>> Currently I only support btree index.\n>> I noticed that your patch supports more index types, which is where I'd\n>> like to work with you.\n>>\n>> > 3. I do not understand the reason for the following limitation:\n>> > \"We allow to create index on global temp table only this session use it\"\n>> >\n>> > First of all it seems to significantly reduce usage of global temp\n>> tables.\n>> > Why do we need GTT at all? Mostly because we need to access temporary\n>> data in more than one backend. 
Otherwise we can just use normal table.\n>> > If temp table is expected to be larger enough, so that we need to\n>> create index for it, then it is hard to believe that it will be needed only\n>> in one backend.\n>> >\n>> > May be the assumption is that all indexes has to be created before GTT\n>> start to be used.\n>> Yes, Currently, GTT's index is only supported and created in an empty\n>> table state, and other sessions are not using it.\n>> There has two improvements pointer:\n>> 1 Index can create on GTT(A) when the GTT(A) in the current session is\n>> not empty, requiring the GTT table to be empty in the other session.\n>> Index_build needs to be done in the current session just like a normal\n>> table. This improvement is relatively easy.\n>>\n>> 2 Index can create on GTT(A) when more than one session are using this\n>> GTT(A).\n>> Because when I'm done creating an index of the GTT in this session and\n>> setting it to be an valid index, it's not true for the GTT in other\n>> sessions.\n>> Indexes on gtt in other sessions require \"rebuild_index\" before using it.\n>> I don't have a better solution right now, maybe you have some suggestions.\n>>\n>\n> I think so DDL operations can be implemented in some reduced form - so DDL\n> are active only for one session, and for other sessions are invisible.\n> Important is state of GTT object on session start.\n>\n> For example ALTER TABLE DROP COLUMN can has very fatal impact on other\n> sessions. So I think the best of GTT can be pattern - the structure of GTT\n> table is immutable for any session that doesn't do DDL operations.\n>\n> Yes, Those ddl that need to rewrite data files will have this problem.\n> This is why I disabled alter GTT in the current version.\n> It can be improved, such as Alter GTT can also be allowed when only the\n> current session is in use.\n>\n\nI think so it is acceptable solution for some first steps, but I cannot to\nimagine so this behave can be good for production usage. 
But it can be good\nenough for some time.\n\nRegards\n\nPavel\n\nUsers can also choose to kick off other sessions that are using gtt, then\n> do alter GTT.\n> I provide a function(pg_gtt_attached_pid(relation, schema)) to query which\n> session a GTT is being used by.\n>\n>\n>\n>>\n>> > But right now this check is not working correctly in any case - if you\n>> insert some data into the table, then\n>> > you can not create index any more:\n>> >\n>> > postgres=# create global temp table gtt(x integer primary key, y\n>> integer);\n>> > CREATE TABLE\n>> > postgres=# insert into gtt values (generate_series(1,100000),\n>> generate_series(1,100000));\n>> > INSERT 0 100000\n>> > postgres=# create index on gtt(y);\n>> > ERROR: can not create index when have one or more backend attached\n>> this global temp table\n>> >\n>> > I wonder why do you need such restriction?\n>> >\n>> >\n>> > --\n>> > Konstantin Knizhnik\n>> > Postgres Professional: http://www.postgrespro.com\n>> > The Russian Postgres Company\n>> >\n>> > <global_temporary_table_v1-pg13.patch>\n>>\n>>", "msg_date": "Thu, 7 Nov 2019 13:29:36 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 07.11.2019 12:30, 曾文旌(义从) wrote:\n>\n>> May be the assumption is that all indexes has to be created before GTT start to be used.\n> Yes, Currently, GTT's index is only supported and created in an empty table state, and other sessions are not using it.\n> There has two improvements pointer:\n> 1 Index can create on GTT(A) when the GTT(A) in
 the current session is not empty, requiring the GTT table to be empty in the other session.\n> Index_build needs to be done in the current session just like a normal table. This improvement is relatively easy.\n>\n> 2 Index can create on GTT(A) when more than one session are using this GTT(A).\n> Because when I'm done creating an index of the GTT in this session and setting it to be an valid index, it's not true for the GTT in other sessions.\n> Indexes on gtt in other sessions require \"rebuild_index\" before using it.\n> I don't have a better solution right now, maybe you have some suggestions.\nIt is possible to create index on demand:\n\nBuffer\n_bt_getbuf(Relation rel, BlockNumber blkno, int access)\n{\n     Buffer        buf;\n\n     if (blkno != P_NEW)\n     {\n         /* Read an existing block of the relation */\n         buf = ReadBuffer(rel, blkno);\n         /* Session temporary relation may be not yet initialized for \nthis backend. */\n         if (blkno == BTREE_METAPAGE && \nGlobalTempRelationPageIsNotInitialized(rel, BufferGetPage(buf)))\n         {\n             Relation heap = RelationIdGetRelation(rel->rd_index->indrelid);\n             ReleaseBuffer(buf);\n             DropRelFileNodeLocalBuffers(rel->rd_node, MAIN_FORKNUM, blkno);\n             btbuild(heap, rel, BuildIndexInfo(rel));\n             RelationClose(heap);\n             buf = ReadBuffer(rel, blkno);\n             LockBuffer(buf, access);\n         }\n         else\n         {\n             LockBuffer(buf, access);\n             _bt_checkpage(rel, buf);\n         }\n     }\n     ...\n\n\nThis code initializes the B-Tree and loads data into it when the GTT index is \naccessed and is not initialized yet.\nIt looks a little bit hackish but it works.\n\nI also wonder why you are keeping information about GTT in shared \nmemory. Looks like the only information we really need to share is \ntable's metadata.\nBut it is already shared through the catalog. All other GTT related \ninformation is private to the backend, so I do not see reasons to place it in \nshared memory.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 7 Nov 2019 19:32:44 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2019年11月8日 上午12:32,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 07.11.2019 12:30, 曾文旌(义从) wrote:\n>> \n>>> May be the assumption is that all indexes has to be created before GTT start to be used.\n>> Yes, Currently, GTT's index is only supported and created in an empty table state, and other sessions are not using it.\n>> There has two improvements pointer:\n>> 1 Index can create on GTT(A) when the GTT(A) in the current session is not empty, requiring the GTT table to be empty in the other session.\n>> Index_build needs to be done in the current session just like a normal table. This improvement is relatively easy.\n>> \n>> 2 Index can create on GTT(A) when more than one session are using this GTT(A).\n>> Because when I'm done creating an index of the GTT in this session and setting it to be an valid index, it's not true for the GTT in other sessions.\n>> Indexes on gtt in other sessions require \"rebuild_index\" before using it.\n>> I don't have a better solution right now, maybe you have some suggestions.\n> It is possible to create index on demand:\n> \n> Buffer\n> _bt_getbuf(Relation rel, BlockNumber blkno, int access)\n> {\n> Buffer buf;\n> \n> if (blkno != P_NEW)\n> {\n> /* Read an existing block of the relation */\n> buf = ReadBuffer(rel, blkno);\n> /* Session temporary relation may be not yet initialized for this backend. 
*/\n> if (blkno == BTREE_METAPAGE && GlobalTempRelationPageIsNotInitialized(rel, BufferGetPage(buf)))\n> {\n> Relation heap = RelationIdGetRelation(rel->rd_index->indrelid);\n> ReleaseBuffer(buf);\n> DropRelFileNodeLocalBuffers(rel->rd_node, MAIN_FORKNUM, blkno);\n> btbuild(heap, rel, BuildIndexInfo(rel));\n> RelationClose(heap);\n> buf = ReadBuffer(rel, blkno);\n> LockBuffer(buf, access);\n> }\n> else\n> {\n> LockBuffer(buf, access);\n> _bt_checkpage(rel, buf);\n> }\n> }\n> ...\nIn my opinion, it is not a good idea to trigger a btbuild with a select or DML, the cost of which depends on the amount of data in the GTT.\n\n> \n> \n> This code initializes B-Tree and load data in it when GTT index is access and is not initialized yet.\n> It looks a little bit hacker but it works.\n> \n> I also wonder why you are keeping information about GTT in shared memory. Looks like the only information we really need to share is table's metadata.\n> But it is already shared though catalog. All other GTT related information is private to backend so I do not see reasons to place it in shared memory.\nThe shared hash structure tracks which backend has initialized the GTT storage in order to implement the DDL of the GTT.\nAs for GTT, there is only one definition(include index on GTT), but each backend may have one data.\nFor the implementation of drop GTT, I assume that all data and definitions need to be deleted.\n\n> \n> -- \n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n> \n> \n>", "msg_date": "Fri, 08 Nov 2019 15:50:42 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 08.11.2019 10:50, 曾文旌(义从) wrote:\n> In my opinion, it is not a good idea to trigger a btbuild with a select or DML, the cost of which depends on the amount of data in the GTT.\nIMHO it is better than returning 
error.\nAlso index will be used only if cost of plan with index will be \nconsidered better than cost of plan without index. If you do not have \nindex, then you have to scan the whole table.\nTime of such scan is comparable with time of building index.\n\nYes, I agree that indexes for GTT are used to be created together with \ntable itself before it is used by any application.\nBut if later DBA recognized that efficient execution of queries requires \nsome more indexes,\nit will be strange and dangerous to prevent him from adding such index \nuntil all clients which have accessed this table will drop their \nconnections.\nAlso maintaining in shared memory information about attached backends \nseems to be overkill.\n\n>>\n>> This code initializes B-Tree and load data in it when GTT index is access and is not initialized yet.\n>> It looks a little bit hacker but it works.\n>>\n>> I also wonder why you are keeping information about GTT in shared memory. Looks like the only information we really need to share is table's metadata.\n>> But it is already shared though catalog. 
All other GTT related information is private to backend so I do not see reasons to place it in shared memory.\n> The shared hash structure tracks which backend has initialized the GTT storage in order to implement the DDL of the GTT.\nSorry, I do not understand this argument.\nDDL is performed on shared metadata present in global catalog.\nStandard postgres invalidation mechanism is used to notify all backends \nabout schema changes.\nWhy do we need to maintain some extra information in shared memory.\nCan you give me example of DLL which does't work without such shared hash?\n\n> As for GTT, there is only one definition(include index on GTT), but each backend may have one data.\n> For the implementation of drop GTT, I assume that all data and definitions need to be deleted.\n\nData of dropped GTT is removed on normal backend termination or cleaned \nup at server restart in case of abnormal shutdown (as it is done for \nlocal temp tables).\nI have not used any shared control structures for GTT in my \nimplementation and that is why I wonder why do you need it and what are \nthe expected problems with my\nimplementation?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 8 Nov 2019 15:57:13 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "My comments for global_private_temp-4.patch\n\ngood side:\n1 Lots of index type on GTT. 
I think we need support for all kinds of indexes.\n2 serial column on GTT.\n3 INHERITS GTT.\n4 PARTITION GTT.\n\nI didn't choose to support them in the first release, but you did.\n\nOther side:\n1 case: create global temp table gtt2(a int primary key, b text) on commit delete rows;\nI think you've lost the meaning of the on commit delete rows clause.\nAfter the GTT is created, the other sessions feel that this is an on commit PRESERVE rows GTT.\n\n2 truncate gtt, mybe this is a bug in DropRelFileNodeBuffers.\nGTT's local buffer is not released.\nCase:\npostgres=# insert into gtt2 values(1,'xx');\nINSERT 0 1\npostgres=# truncate gtt2;\nTRUNCATE TABLE\npostgres=# insert into gtt2 values(1,'xx');\nERROR: unexpected data beyond EOF in block 0 of relation base/13579/t3_16384\nHINT: This has been seen to occur with buggy kernels; consider updating your system.\n\n3 lock type of truncate GTT.\nI don't think it's a good idea to hold a big lock with truncate GTT, because it only needs to process private data.\n\n4 GTT's ddl Those ddl that need to rewrite data files may need attention.\nWe have discussed in the previous email. This is why I used shared hash to track the GTT file.\n\n\n5 There will be problems with DDL that will change relfilenode. Such as cluster GTT ,vacuum full GTT.\nA session completes vacuum full gtt(a), and other sessions will immediately start reading and writing new storage files and existing data is also lost.\nI disable them in my current version.\n\n6 drop GTT\nI think drop GTT should clean up all storage files and definitions. How do you think?\n\n7 MVCC visibility clog clean\nGTT data visibility rules, like regular tables, so GTT also need clog.\nWe need to avoid the clog that GTT needs to be cleaned up. 
\nAt the same time, GTT does not do autovacuum, and retaining \"too old data\" will cause wraparound data loss.\nI have given a solution in my design.\n\n\nZeng Wenjing\n\n> 2019年11月1日 下午11:15,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 25.10.2019 20:00, Pavel Stehule wrote:\n>> \n>> >\n>> >> So except the limitation mentioned above (which I do not consider as critical) there is only one problem which was not addressed: maintaining statistics for GTT.\n>> >> If all of the following conditions are true:\n>> >>\n>> >> 1) GTT are used in joins\n>> >> 2) There are indexes defined for GTT\n>> >> 3) Size and histogram of GTT in different backends can significantly vary.\n>> >> 4) ANALYZE was explicitly called for GTT\n>> >>\n>> >> then query execution plan built in one backend will be also used for other backends where it can be inefficient.\n>> >> I also do not consider this problem as \"show stopper\" for adding GTT to Postgres.\n>> > I think that's *definitely* a show stopper.\n>> Well, if both you and Pavel think that it is really \"show stopper\", then \n>> this problem really has to be addressed.\n>> I slightly confused about this opinion, because Pavel has told me \n>> himself that 99% of users never create indexes for temp tables\n>> or run \"analyze\" for them. And without it, this problem is not a problem \n>> at all.\n>> \n>> \n>> Users doesn't do ANALYZE on temp tables in 99%. It's true. But second fact is so users has lot of problems. It's very similar to wrong statistics on persistent tables. When data are small, then it is not problem for users, although from my perspective it's not optimal. When data are not small, then the problem can be brutal. Temporary tables are not a exception. And users and developers are people - we know only about fatal problems. There are lot of unoptimized queries, but because the problem is not fatal, then it is not reason for report it. 
And lot of people has not any idea how fast the databases can be. The knowledges of users and app developers are sad book.\n>> \n>> Pavel\n> \n> It seems to me that I have found quite elegant solution for per-backend statistic for GTT: I just inserting it in backend's catalog cache, but not in pg_statistic table itself.\n> To do it I have to add InsertSysCache/InsertCatCache functions which insert pinned entry in the correspondent cache.\n> I wonder if there are some pitfalls of such approach?\n> \n> New patch for GTT is attached.\n> -- \n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com <http://www.postgrespro.com/>\n> The Russian Postgres Company \n> <global_private_temp-4.patch>", "msg_date": "Fri, 08 Nov 2019 23:06:17 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 08.11.2019 18:06, 曾文旌(义从) wrote:\n> My comments for global_private_temp-4.patch\n\nThank you very much for inspecting my patch.\n>\n> good side:\n> 1 Lots of  index type on GTT. 
I think we need support for all kinds of \n> indexes.\n> 2 serial column on GTT.\n> 3 INHERITS GTT.\n> 4 PARTITION GTT.\n>\n> I didn't choose to support them in the first release, but you did.\n>\n> Other side:\n> 1 case: create global temp table gtt2(a int primary key, b text) on \n> commit delete rows;\n> I think you've lost the meaning of the on commit delete rows clause.\n> After the GTT is created, the other sessions feel that this is an on \n> commit PRESERVE rows GTT.\n>\n\nYes, there was bug in my implementation of ON COMMIT DELETE ROWS for GTT.\nIt is fixed in global_private_temp-6.patch\n\n> 2 truncate gtt, mybe this is a bug in DropRelFileNodeBuffers.\n> GTT's local buffer is not released.\n> Case:\n> postgres=# insert into gtt2 values(1,'xx');\n> INSERT 0 1\n> postgres=# truncate gtt2;\n> TRUNCATE TABLE\n> postgres=# insert into gtt2 values(1,'xx');\n> ERROR:  unexpected data beyond EOF in block 0 of relation \n> base/13579/t3_16384\n> HINT:  This has been seen to occur with buggy kernels; consider \n> updating your system.\n>\n\nYes another bug, also fixed in new version of the patch.\n\n> 3  lock type of truncate GTT.\n> I don't think it's a good idea to hold a big lock with truncate GTT, \n> because it only needs to process private data.\n\nSorry, I do not understand which lock you are talking about.\nI have not introduced any special locks for GTT.\n\n> 4 GTT's ddl Those ddl that need to rewrite data files may need attention.\n> We have discussed in the previous email. 
This is why I used shared \n> hash to track the GTT file.\n>\n\nYou are right.\nBut instead of prohibiting ALTER TABLE at all for GTT, we can check\nthat there are no other backends using it.\nI do not think that we should maintain some hash in shared memory to \ncheck it.\nAs far as ALTER TABLE is rare and slow operation in any case, we can \njust check presence of GTT files\ncreated by other backends.\nI have implemented this check in global_private_temp-6.patch\n\n\n>\n> 5 There will be problems with DDL that will change relfilenode. Such \n> as cluster GTT ,vacuum full GTT.\n> A session completes vacuum full gtt(a), and other sessions will \n> immediately start reading and writing new storage files and existing \n> data is also lost.\n> I disable them in my current version.\n\nThank you for noticing it.\nAutovacuum full should really be prohibited for GTT.\n\n>\n> 6 drop GTT\n> I think drop GTT should clean up all storage files and definitions. \n> How do you think?\n>\nStorage files will be cleaned in any case on backend termination.\nCertainly if backend creates  and deletes huge number of GTT in the \nloop, it can cause space exhaustion.\nBut it seems to be very strange pattern of GTT usage.\n\n\n\n> 7 MVCC visibility clog clean\n> GTT data visibility rules, like regular tables, so GTT also need clog.\n> We need to avoid the clog that GTT needs to be cleaned up.\n> At the same time, GTT does not do autovacuum, and retaining \"too old \n> data\" will cause wraparound data loss.\n> I have given a solution in my design.\n>\nBut why do we need some special handling of visibility rules for GTT \ncomparing with normal (local) temp tables?\nThem are also not proceeded by autovacuum?\n\nIn principle, I have also implemented special visibility rules for GTT, \nbut only for the case when them\nare accessed at replica. 
And it is not included in this patch, because \neverybody think that access to GTT\nreplica should be considered in separate patch.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 11 Nov 2019 18:19:21 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi,\n\nI think we need to do something with having two patches aiming to add\nglobal temporary tables:\n\n[1] https://commitfest.postgresql.org/26/2349/\n\n[2] https://commitfest.postgresql.org/26/2233/\n\nAs a reviewer I have no idea which of the threads to look at - certainly\nnot without reading both threads, which I doubt anyone will really do.\nThe reviews and discussions are somewhat intermixed between those two\nthreads, which makes it even more confusing.\n\nI think we should agree on a minimal patch combining the necessary/good\nbits from the various patches, and terminate one of the threads (i.e.\nmark it as rejected or RWF). 
And we need to do that now, otherwise\nthere's about 0% chance of getting this into v13.\n\nIn general, I agree with the sentiment Robert expressed in [1] - the\npatch needs to be as small as possible, not adding \"nice to have\"\nfeatures (like support for parallel queries - I very much doubt just\nusing shared instead of local buffers is enough to make it work.)\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 5 Jan 2020 21:06:03 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "In the previous communication\n\n1 we agreed on the general direction\n1.1 gtt use local (private) buffer\n1.2 no replica access in first version\n\n2 We feel that gtt needs to maintain statistics, but there is no agreement on how it will be done.\n\n3 Still no one has commented on GTT's transaction information processing, which includes\n3.1 Does gtt's frozenxid need to be taken care of?\n3.2 gtt’s clog cleanup\n3.3 How to deal with \"too old\" gtt data\n\nI suggest we discuss further, reach an agreement, and merge the two patches into one.\n\n\nWenjing\n\n\n> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com> 写道:\n> \n> Hi,\n> \n> I think we need to do something with having two patches aiming to add\n> global temporary tables:\n> \n> [1] https://commitfest.postgresql.org/26/2349/\n> \n> [2] https://commitfest.postgresql.org/26/2233/\n> \n> As a reviewer I have no idea which of the threads to look at - certainly\n> not without reading both threads, which I doubt anyone will really do.\n> The reviews and discussions are somewhat intermixed between those two\n> threads, which makes it even more confusing.\n> \n> I think we should agree on a minimal patch combining the necessary/good\n> bits from the various patches, and terminate one of the threads (i.e.\n> mark it as rejected 
or RWF). And we need to do that now, otherwise\n> there's about 0% chance of getting this into v13.\n> \n> In general, I agree with the sentiment Rober expressed in [1] - the\n> patch needs to be as small as possible, not adding \"nice to have\"\n> features (like support for parallel queries - I very much doubt just\n> using shared instead of local buffers is enough to make it work.)\n> \n> regards\n> \n> -- \n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 06 Jan 2020 13:04:15 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Jan 06, 2020 at 01:04:15PM +0800, 曾文旌(义从) wrote:\n>In the previous communication\n>\n>1 we agreed on the general direction\n>1.1 gtt use local (private) buffer\n>1.2 no replica access in first version\n>\n\nOK, good.\n\n>2 We feel that gtt needs to maintain statistics, but there is no\n>agreement on what it will be done.\n>\n\nI certainly agree GTT needs to maintain statistics, otherwise it'll lead\nto poor query plans. AFAIK the current patch stores the info in a hash\ntable in a backend private memory, and I don't see how else to do that\n(e.g. storing it in a catalog would cause catalog bloat).\n\nFWIW this is a reasons why I think just using shared buffers (instead of\nlocal ones) is not sufficient to support parallel queriesl as proposed\nby Alexander. The workers would not know the stats, breaking planning of\nqueries in PARALLEL SAFE plpgsql functions etc.\n\n>3 Still no one commented on GTT's transaction information processing, they include\n>3.1 Should gtt's frozenxid need to be care?\n>3.2 gtt’s clog clean\n>3.3 How to deal with \"too old\" gtt data\n>\n\nNo idea what to do about this.\n\n>I suggest we discuss further, reach an agreement, and merge the two patches to one.\n>\n\nOK, cool. 
Thanks for the clarification.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 6 Jan 2020 12:01:19 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, 6 Jan 2020 at 11:01, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Mon, Jan 06, 2020 at 01:04:15PM +0800, 曾文旌(义从) wrote:\n>\n> >2 We feel that gtt needs to maintain statistics, but there is no\n> >agreement on what it will be done.\n> >\n>\n> I certainly agree GTT needs to maintain statistics, otherwise it'll lead\n> to poor query plans.\n\n+1\n\n> AFAIK the current patch stores the info in a hash\n> table in a backend private memory, and I don't see how else to do that\n> (e.g. storing it in a catalog would cause catalog bloat).\n>\n\nIt sounds like it needs a pair of system GTTs to hold the table and\ncolumn statistics for other GTTs. One would probably have the same\ncolumns as pg_statistic, and the other just the relevant columns from\npg_class. 
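For illustration only, the "pair of system GTTs" idea could be sketched in SQL roughly as below. All object names here are invented, and the GLOBAL TEMPORARY syntax is just what the patches in this thread propose, so treat this as a sketch rather than a working example:

```sql
-- Hypothetical per-backend statistics catalogs, themselves GTTs:
-- every backend sees the same definition but only its own rows.
CREATE GLOBAL TEMPORARY TABLE pg_gtt_statistic
    (LIKE pg_catalog.pg_statistic)
    ON COMMIT PRESERVE ROWS;

CREATE GLOBAL TEMPORARY TABLE pg_gtt_class (
    relid     oid,      -- the user GTT this row describes
    relpages  integer,  -- per-backend page count
    reltuples real      -- per-backend tuple estimate
) ON COMMIT PRESERVE ROWS;

-- The planner (and users) could then read shared and per-backend
-- statistics through one combined source:
CREATE VIEW pg_statistic_all AS
    SELECT * FROM pg_catalog.pg_statistic
    UNION ALL
    SELECT * FROM pg_gtt_statistic;
```

Since the statistics tables are themselves GTTs, they avoid pg_statistic bloat while still giving each backend stats that match its own data.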
I can see it being useful for the user to be able to see\nthese stats, so perhaps they could be UNIONed into the existing stats\nview.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 6 Jan 2020 12:17:43 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Jan 06, 2020 at 12:17:43PM +0000, Dean Rasheed wrote:\n>On Mon, 6 Jan 2020 at 11:01, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Mon, Jan 06, 2020 at 01:04:15PM +0800, 曾文旌(义从) wrote:\n>>\n>> >2 We feel that gtt needs to maintain statistics, but there is no\n>> >agreement on what it will be done.\n>> >\n>>\n>> I certainly agree GTT needs to maintain statistics, otherwise it'll lead\n>> to poor query plans.\n>\n>+1\n>\n>> AFAIK the current patch stores the info in a hash\n>> table in a backend private memory, and I don't see how else to do that\n>> (e.g. storing it in a catalog would cause catalog bloat).\n>>\n>\n>It sounds like it needs a pair of system GTTs to hold the table and\n>column statistics for other GTTs. One would probably have the same\n>columns as pg_statistic, and the other just the relevant columns from\n>pg_class. I can see it being useful for the user to be able to see\n>these stats, so perhaps they could be UNIONed into the existing stats\n>view.\n>\n\nHmmm, yeah. A \"temporary catalog\" (not sure if it can work exactly the\nsame as GTT) storing pg_statistics data for GTTs might work, I think. It\nwould not have the catalog bloat issue, which is good.\n\nI still think we'd need to integrate this with the regular pg_statistic\ncatalogs somehow, so that people don't have to care about two things. I\nmean, extensions like hypopg do use pg_statistic data to propose indexes\netc. 
and it would be nice if we don't make them more complicated.\n\nNot sure why we'd need a temporary version of pg_class, though?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 6 Jan 2020 13:52:34 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "po 6. 1. 2020 v 13:17 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com>\nnapsal:\n\n> On Mon, 6 Jan 2020 at 11:01, Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> wrote:\n> >\n> > On Mon, Jan 06, 2020 at 01:04:15PM +0800, 曾文旌(义从) wrote:\n> >\n> > >2 We feel that gtt needs to maintain statistics, but there is no\n> > >agreement on what it will be done.\n> > >\n> >\n> > I certainly agree GTT needs to maintain statistics, otherwise it'll lead\n> > to poor query plans.\n>\n> +1\n>\n> > AFAIK the current patch stores the info in a hash\n> > table in a backend private memory, and I don't see how else to do that\n> > (e.g. storing it in a catalog would cause catalog bloat).\n> >\n>\n> It sounds like it needs a pair of system GTTs to hold the table and\n> column statistics for other GTTs. One would probably have the same\n> columns as pg_statistic, and the other just the relevant columns from\n> pg_class. I can see it being useful for the user to be able to see\n> these stats, so perhaps they could be UNIONed into the existing stats\n> view.\n>\n\n+1\n\nPavel\n\n\n> Regards,\n> Dean\n>", "msg_date": "Mon, 6 Jan 2020 14:50:49 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月6日 下午8:17,Dean Rasheed <dean.a.rasheed@gmail.com> 写道:\n> \n> On Mon, 6 Jan 2020 at 11:01, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> \n>> On Mon, Jan 06, 2020 at 01:04:15PM +0800, 曾文旌(义从) wrote:\n>> \n>>> 2 We feel that gtt needs to maintain statistics, but there is no\n>>> agreement on what it will be done.\n>>> \n>> \n>> I certainly agree GTT needs to maintain statistics, otherwise it'll lead\n>> to poor query plans.\n> \n> +1\n> \n>> AFAIK the current patch stores the info in a hash\n>> table in a backend private memory, and I don't see how else to do that\n>> (e.g. storing it in a catalog would cause catalog bloat).\n>> \n> \n> It sounds like it needs a pair of system GTTs to hold the table and\n> column statistics for other GTTs. 
One would probably have the same\n> columns as pg_statistic, and the other just the relevant columns from\n> pg_class. I can see it being useful for the user to be able to see\n> these stats, so perhaps they could be UNIONed into the existing stats\n> view.\nThe current patch provides several functions as an extension (pg_gtt) for reading GTT statistics. \nNext I can move them into the kernel and let the pg_stats view show GTT’s statistics.\n\n\n> Regards,\n> Dean", "msg_date": "Wed, 08 Jan 2020 15:03:09 +0800", "msg_from": "\"曾文旌(义从)\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 06.01.2020 8:04, 曾文旌(义从) wrote:\n> In the previous communication\n>\n> 1 we agreed on the general direction\n> 1.1 gtt use local (private) buffer\n> 1.2 no replica access in first version\n>\n> 2 We feel that gtt needs to maintain statistics, but there is no agreement on what it will be done.\n>\n> 3 Still no one commented on GTT's transaction information processing, they include\n> 3.1 Should gtt's frozenxid need to be care?\n> 3.2 gtt’s clog clean\n> 3.3 How to deal with \"too old\" gtt data\n>\n> I suggest we discuss further, reach an agreement, and merge the two patches to one.\n>\n\nI also hope that we can come to a common solution for GTT.\nIf we do not try to address parallel execution issues and access to temp \ntables at replicas (and I agreed\nthat it should be avoided in the first version of the patch), then the GTT patch \nbecomes quite small.\n\nThe most complex and challenging task is to support GTT for all kinds of \nindexes. Unfortunately I cannot propose a good universal solution \nfor it.\nJust patching all existing index implementations seems to be the only \nchoice.\n\nStatistics are another important case.\nBut once again I do not completely understand why we want to address all \nthese issues with statistics in the first version of the patch? It contradicts \nthe idea of making this patch as small as possible.\nAlso it seems to me that everybody agreed that users very rarely create \nindexes for temp tables and explicitly analyze them.\nSo I think GTT will be useful even with limited support for statistics. 
In \nmy version statistics for GTT are provided by pushing the corresponding \ninformation into the backend's cache for the pg_statistic table.\nAlso I provided a pg_temp_statistic view for inspecting it by users. The \nidea of making pg_statistic a view which combines statistics of normal and \ntemporary tables is overkill from my point of view.\n\nI do not understand why we need to maintain a hash with some extra \ninformation for GTT in backend memory (as it was done in Wenjing's patch).\nAlso the idea of using CREATE EXTENSION for accessing this information seems \nto be dubious.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 9 Jan 2020 14:17:08 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 06.01.2020 14:01, Tomas Vondra wrote:\n> On Mon, Jan 06, 2020 at 01:04:15PM +0800, 曾文旌(义从) wrote:\n>> In the previous communication\n>>\n>> 1 we agreed on the general direction\n>> 1.1 gtt use local (private) buffer\n>> 1.2 no replica access in first version\n>>\n>\n> OK, good.\n>\n>> 2 We feel that gtt needs to maintain statistics, but there is no\n>> agreement on what it will be done.\n>>\n>\n> I certainly agree GTT needs to maintain statistics, otherwise it'll lead\n> to poor query plans. AFAIK the current patch stores the info in a hash\n> table in a backend private memory, and I don't see how else to do that\n> (e.g. storing it in a catalog would cause catalog bloat).\n>\n> FWIW this is a reasons why I think just using shared buffers (instead of\n> local ones) is not sufficient to support parallel queriesl as proposed\n> by Alexander. 
The workers would not know the stats, breaking planning of\n> queries in PARALLEL SAFE plpgsql functions etc.\n\n\nI do not think that the \"all or nothing\" approach is as good for software \ndevelopment as it is for database transactions.\nYes, if we have a function in PL/pgSQL which performs queries on temporary \ntables, then\nparallel workers may build inefficient plans for these queries due to lack \nof statistics.\n From my point of view this is not a pitfall of GTT but a result of the lack \nof a global plan cache in Postgres. And it should be fixed not at the GTT level.\n\nAlso I have never seen real use cases with such functions, even in systems \nwhich heavily use temporary tables and stored procedures.\nBut there are many other real problems with temp tables (besides those already \nmentioned in this thread).\nIn PgPro/EE we have fixes for some of them, for example:\n\n1. Do not reserve space in the file for temp relations. Right now extending \na relation causes a zero page to be written to the disk by mdextend.\nThis causes useless disk IO for temp tables which in most cases fit in \nmemory and should not be written to disk.\n\n2. Implicitly perform analyze of a temp table immediately after storing \ndata in it. Usually tables are analyzed by autovacuum in the background.\nBut that doesn't work for temp tables, which are not processed by \nautovacuum and are accessed immediately after filling them with data, and \nlack of statistics may cause\na very inefficient plan to be built. We have an online_analyze extension which \nforces analyze of the table after appending some bulk of data to it.\nIt can be used for normal tables but it is most useful for temp \nrelations.\n\nUnlike the hypothetical example with a parallel safe function working with \ntemp tables,\nthese are real problems observed by some of our customers.\nThey are applicable both to local and global temp tables, and this is why \nI do not want to discuss them in the context of GTT.\n\n\n>\n>> 3 Still no one commented on GTT's transaction information processing, \n>> they include\n>> 3.1 Should gtt's frozenxid need to be care?\n>> 3.2 gtt’s clog clean\n>> 3.3 How to deal with \"too old\" gtt data\n>>\n>\n> No idea what to do about this.\n>\n\nI wonder what is specific to GTT here?\nThe same problem takes place for normal (local) temp tables, doesn't it?\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 9 Jan 2020 18:07:46 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Thu, Jan 09, 2020 at 06:07:46PM +0300, Konstantin Knizhnik wrote:\n>\n>\n>On 06.01.2020 14:01, Tomas Vondra wrote:\n>>On Mon, Jan 06, 2020 at 01:04:15PM +0800, 曾文旌(义从) wrote:\n>>>In the previous communication\n>>>\n>>>1 we agreed on the general direction 1.1 gtt use local (private)\n>>>buffer 1.2 no replica access in first version\n>>>\n>>\n>>OK, good.\n>>\n>>>2 We feel that gtt needs to maintain statistics, but there is no\n>>>agreement on what it will be done.\n>>>\n>>\n>>I certainly agree GTT needs to maintain statistics, otherwise it'll\n>>lead to poor query plans. AFAIK the current patch stores the info in a\n>>hash table in a backend private memory, and I don't see how else to do\n>>that (e.g. 
storing it in a catalog would cause catalog bloat).\n>>\n>>FWIW this is a reasons why I think just using shared buffers (instead\n>>of local ones) is not sufficient to support parallel queriesl as\n>>proposed by Alexander. The workers would not know the stats, breaking\n>>planning of queries in PARALLEL SAFE plpgsql functions etc.\n>\n>\n>I do not think that \"all or nothing\" approach is so good for software\n>development as for database transactions.\n\nWell, sure. I'm not saying we need to have a perfect solution in v1. I'm\nsaying if we have two choices:\n\n(1) Use shared buffers even if it means the parallel query plan may be\n arbitrarily bad.\n\n(2) Use private buffers, even if it means no parallel queries with temp\n tables.\n\nThen I'm voting for (2) because it's less likely to break down. I can\nimagine allowing parallel queries with GTT when there's no risk of\nhaving to plan in the worker, but that's not there yet.\n\nIf we can come up with a reasonable solution for the parallel case, we\ncan enable it later.\n\n>Yes, if we have function in PL/pgSQL which performs queries om \n>temporary tables, then\n>parallel workers may build inefficient plan for this queries due to \n>lack of statistics.\n\nIMHO that's a pretty awful deficiency, because it essentially means\nusers may need to disable parallelism for such queries. Which means\nwe'll get complaints from users, and we'll have to come up with some\nsort of solution. I'd rather not be in that position.\n\n>From my point of view this is not a pitfall of GTT but result of lack \n>of global plan cache in Postgres. And it should be fixed not at GTT \n>level.\n>\n\nThat doesn't give us free pass to just ignore the issue. Even if it\nreally was due to a lack of global plan cache, the fact is we don't have\nthat feature, so we have a problem. 
I mean, if you need infrastructure\nthat is not available, you either have to implement that infrastructure\nor make it work properly without it.\n\n>Also I never see real use cases with such functions, even in the \n>systems which using hard temporary tables and stored procedures.\n>But there are many other real problems with temp tables  (except \n>already mentioned in this thread).\n\nOh, I'm sure there are pretty large plpgsql applications, and I'd be\nsurprised if at least some of those were not affected. And I'm sure\nthere are apps using UDF to do all sorts of stuff (e.g. I wonder if\nPostGIS would have this issue - IIRC it's using SPI etc.).\n\nThe question is whether we should consider existing apps affected,\nbecause they are using the regular temporary tables and not GTT. So\nunless they switch to GTT there is no regression ...\n\nBut even in that case I don't think it's a good idea to accept this as\nan acceptable limitation. I admit one of the reasons why I think that\nmay be that statistics and planning are my areas of interest, so I'm not\nquite willing to accept incomplete stuff as OK.\n\n>In PgPro/EE we have fixes for some of them, for example:\n>\n>1. Do not reserve space in the file for temp relations. Right now \n>append of relation cause writing zero page to the disk by mdextend.\n>It cause useless disk IO for temp tables which in most cases fit in \n>memory and should not be written at disk.\n>\n>2. Implicitly perform analyze of temp table intermediately after \n>storing data in it. Usually tables are analyzed by autovacuum in \n>background.\n>But it doesn't work for temp tables which are not processes by \n>autovacuum and are accessed immediately after filling them with data \n>and lack of statistic  may cause\n>building very inefficient plan. 
We have online_analyze extension which \n>force analyze of the table after appending some bulk of data to it.\n>It can be used for normal table but most of all it is useful for temp \n>relations.\n>\n>Unlike hypothetical example with parallel safe function working with \n>temp tables,\n>this are real problems observed by some of our customers.\n>Them are applicable both to local and global temp tables and this is \n>why I do not want to discuss them in context of GTT.\n>\n\nI think those are both interesting issues worth fixing, but I don't\nthink it makes the issue discussed here less important.\n\n>\n>>\n>>>3 Still no one commented on GTT's transaction information \n>>>processing, they include\n>>>3.1 Should gtt's frozenxid need to be care?\n>>>3.2 gtt’s clog clean\n>>>3.3 How to deal with \"too old\" gtt data\n>>>\n>>\n>>No idea what to do about this.\n>>\n>\n>I wonder what is the specific of GTT here?\n>The same problem takes place for normal (local) temp tables, doesn't it?\n>\n\nNot sure. 
TBH I'm not sure I understand what the issue actually is.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jan 2020 17:30:37 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Thu, Jan 09, 2020 at 02:17:08PM +0300, Konstantin Knizhnik wrote:\n>\n>\n>On 06.01.2020 8:04, 曾文旌(义从) wrote:\n>>In the previous communication\n>>\n>>1 we agreed on the general direction\n>>1.1 gtt use local (private) buffer\n>>1.2 no replica access in first version\n>>\n>>2 We feel that gtt needs to maintain statistics, but there is no agreement on what it will be done.\n>>\n>>3 Still no one commented on GTT's transaction information processing, they include\n>>3.1 Should gtt's frozenxid need to be care?\n>>3.2 gtt’s clog clean\n>>3.3 How to deal with \"too old\" gtt data\n>>\n>>I suggest we discuss further, reach an agreement, and merge the two patches to one.\n>>\n>\n>I also hope that we should come to the common solution for GTT.\n>If we do not try to address parallel execution issues and access to \n>temp tables at replicas (and I agreed\n>that it should be avoided in first version of the patch), then GTT \n>patch becomes quite small.\n>\n\nWell, that was kinda my goal - making the patch as small as possible by\neliminating bits that are contentious or where we don't know the\nsolution (like planning for parallel queries).\n\n>The most complex and challenged task is to support GTT for all kind of \n>indexes. 
Unfortunately I can not proposed some good universal solution \n>for it.\n>Just patching all existed indexes implementation seems to be the only \n>choice.\n>\n\nI haven't looked at the indexing issue closely, but IMO we need to\nensure that every session sees/uses only indexes on GTT that were\ndefined before the session started using the table.\n\nCan't we track which indexes a particular session sees, somehow?\n\n>Statistic is another important case.\n>But once again I do not completely understand why we want to address \n>all this issues with statistic in first version of the patch?\n\nI think the question is which \"issues with statistics\" you mean. I'm sure\nwe can ignore some of them, e.g. the one with parallel workers not\nhaving any stats (assuming we consider functions using GTT to be\nparallel restricted).\n\n>It contradicts to the idea to make this patch as small as possible.\n\nWell, there's a \"making the patch as small as possible\" vs. \"patch behaving\ncorrectly\" trade-off ;-)\n\n>Also it seems to me that everybody agreed that users very rarely \n>create indexes for temp tables and explicitly analyze them.\n\nI certainly *disagree* with this.\n\nWe often see temporary tables as a fix for misestimates in complex\nqueries, and/or as a replacement for CTEs with statistics/indexes. In\nfact it's a pretty valuable tool when helping customers with complex\nqueries affected by poor estimates.\n\n>So I think GTT will be useful even with limited support of statistic. \n>In my version statistics for GTT is provided by pushing correspondent \n>information to backend's cache for pg_statistic table.\n\nI think someone pointed out pushing stuff directly into the cache is\nrather problematic, but I don't recall the details.\n\n>Also I provided pg_temp_statistic view for inspecting it by users. 
The \n>idea to make pg_statistic a view which combines statistic of normal \n>and temporary tables is overkill from my point of view.\n>\n>I do not understand why do we need to maintain hash with some extra \n>information for GTT in backends memory (as it was done in Wenjing \n>patch).\n>Also idea to use create extension for accessing this information seems \n>to be dubious.\n>\n\nI think the extension was more of a PoC than a final solution.\n\n\nregards\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jan 2020 17:48:29 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 09.01.2020 19:48, Tomas Vondra wrote:\n>\n>> The most complex and challenged task is to support GTT for all kind \n>> of indexes. Unfortunately I can not proposed some good universal \n>> solution for it.\n>> Just patching all existed indexes implementation seems to be the only \n>> choice.\n>>\n>\n> I haven't looked at the indexing issue closely, but IMO we need to\n> ensure that every session sees/uses only indexes on GTT that were\n> defined before the seesion started using the table.\n\nWhy? It contradicts the behavior of normal tables.\nAssume that you have active clients and at some point the DBA \nrecognizes that they are spending too much time scanning some GTT.\nIt can create an index for this GTT, but if existing clients are not able \nto use this index, do we then need to somehow make these clients restart \ntheir sessions?\nIn my patch I have implemented building indexes for GTT on demand: if an \naccessed index on a GTT is not yet initialized, then it is filled with \nlocal data.\n>\n> Can't we track which indexes a particular session sees, somehow?\n>\n>> Statistic is another important case.\n>> But once again I do not completely understand why we want to address \n>> all this issues with statistic in first version of the patch?\n>\n> I think the question is which \"issues with statistic\" you mean. I'm sure\n> we can ignore some of them, e.g. the one with parallel workers not\n> having any stats (assuming we consider functions using GTT to be\n> parallel restricted).\n\nIf we do not use shared buffers for GTT then parallel processing of GTT \nis not possible at all, so there is no problem with statistics for \nparallel workers.\n\n>\n> I think someone pointed out pushing stuff directly into the cache is\n> rather problematic, but I don't recall the details.\n>\nI have not encountered any problems, so if you can point me to what is \nwrong with this approach, I will think about an alternative solution.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 10 Jan 2020 11:47:42 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 09.01.2020 19:30, Tomas Vondra wrote:\n\n\n>\n>>\n>>>\n>>>> 3 Still no one commented on GTT's transaction information \n>>>> processing, they include\n>>>> 3.1 Should gtt's frozenxid need to be care?\n>>>> 3.2 gtt’s clog 
clean\n>>>> 3.3 How to deal with \"too old\" gtt data\n>>>>\n>>>\n>>> No idea what to do about this.\n>>>\n>>\n>> I wonder what is the specific of GTT here?\n>> The same problem takes place for normal (local) temp tables, doesn't it?\n>>\n>\n> Not sure. TBH I'm not sure I understand what the issue actually is. \n\nJust open a session, create a temporary table and insert some data in it.\nThen in another session run 2^31 transactions (at my desktop it takes \nabout 2 hours).\nSince temp tables are not processed by vacuum, the database is stalled:\n\n  ERROR:  database is not accepting commands to avoid wraparound data \nloss in database \"postgres\"\n\nIt seems to be quite dubious behavior and it is strange to me that \nnobody complains about it.\nWe discuss many issues related to temp tables (statistics, parallel \nqueries, ...) which seem to be less critical.\n\nBut this problem is not specific to GTT - it can be reproduced with \nnormal (local) temp tables.\nThis is why I wonder why we need to solve it in the GTT patch.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 10 Jan 2020 15:24:34 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi all\n\nThis is the latest patch\n\nThe updates are as follows:\n1. Support global temp Inherit table global temp partition table\n2. Support serial column in GTT\n3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n4. Provide view pg_gtt_attached_pids to manage GTT\n5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n6. 
Alter GTT or rename GTT is allowed under some conditions\n\n\nPlease give me feedback.\n\nWenjing\n\n\n\n\n\n> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com> 写道:\n> \n> Hi,\n> \n> I think we need to do something with having two patches aiming to add\n> global temporary tables:\n> \n> [1] https://commitfest.postgresql.org/26/2349/\n> \n> [2] https://commitfest.postgresql.org/26/2233/\n> \n> As a reviewer I have no idea which of the threads to look at - certainly\n> not without reading both threads, which I doubt anyone will really do.\n> The reviews and discussions are somewhat intermixed between those two\n> threads, which makes it even more confusing.\n> \n> I think we should agree on a minimal patch combining the necessary/good\n> bits from the various patches, and terminate one of the threads (i.e.\n> mark it as rejected or RWF). And we need to do that now, otherwise\n> there's about 0% chance of getting this into v13.\n> \n> In general, I agree with the sentiment Rober expressed in [1] - the\n> patch needs to be as small as possible, not adding \"nice to have\"\n> features (like support for parallel queries - I very much doubt just\n> using shared instead of local buffers is enough to make it work.)\n> \n> regards\n> \n> -- \n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 11 Jan 2020 22:00:44 +0800", "msg_from": "\"曾文旌(义从)\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi\n\nso 11. 1. 2020 v 15:00 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n> Hi all\n>\n> This is the latest patch\n>\n> The updates are as follows:\n> 1. Support global temp Inherit table global temp partition table\n> 2. Support serial column in GTT\n> 3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n> 4. 
Provide view pg_gtt_attached_pids to manage GTT\n> 5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n> 6. Alter GTT or rename GTT is allowed under some conditions\n>\n>\n> Please give me feedback.\n>\n\nI tested the functionality\n\n1. I think \"ON COMMIT PRESERVE ROWS\" should be the default mode (like for local\ntemp tables).\n\nI tested some simple scripts\n\ntest01.sql\n\nCREATE TEMP TABLE foo(a int, b int);\nINSERT INTO foo SELECT random()*100, random()*1000 FROM\ngenerate_series(1,1000);\nANALYZE foo;\nSELECT sum(a), sum(b) FROM foo;\nDROP TABLE foo; -- simulate disconnect\n\n\nafter 100 sec, the table pg_attribute has 3.2MB\nand 64 tps, 6446 transactions\n\ntest02.sql\n\nINSERT INTO foo SELECT random()*100, random()*1000 FROM\ngenerate_series(1,1000);\nANALYZE foo;\nSELECT sum(a), sum(b) FROM foo;\nDELETE FROM foo; -- simulate disconnect\n\n\nafter 100 sec, 1688 tps, 168830 transactions\n\nSo the performance difference is dramatic, as we expected.\n\n From my perspective, this functionality is great.\n\nTodo:\n\npg_table_size function doesn't work\n\nRegards\n\nPavel\n\n\n> Wenjing\n>\n>\n>\n>\n>\n> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com> 写道:\n>\n> Hi,\n>\n> I think we need to do something with having two patches aiming to add\n> global temporary tables:\n>\n> [1] https://commitfest.postgresql.org/26/2349/\n>\n> [2] https://commitfest.postgresql.org/26/2233/\n>\n> As a reviewer I have no idea which of the threads to look at - certainly\n> not without reading both threads, which I doubt anyone will really do.\n> The reviews and discussions are somewhat intermixed between those two\n> threads, which makes it even more confusing.\n>\n> I think we should agree on a minimal patch combining the necessary/good\n> bits from the various patches, and terminate one of the threads (i.e.\n> mark it as rejected or RWF). 
And we need to do that now, otherwise\n> there's about 0% chance of getting this into v13.\n>\n> In general, I agree with the sentiment Rober expressed in [1] - the\n> patch needs to be as small as possible, not adding \"nice to have\"\n> features (like support for parallel queries - I very much doubt just\n> using shared instead of local buffers is enough to make it work.)\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>", "msg_date": "Sat, 11 Jan 2020 21:27:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Fri, Jan 10, 2020 at 03:24:34PM +0300, Konstantin Knizhnik wrote:\n>\n>\n>On 09.01.2020 19:30, Tomas Vondra wrote:\n>\n>\n>>\n>>>\n>>>>\n>>>>>3 Still no one commented on GTT's transaction information \n>>>>>processing, they include\n>>>>>3.1 Should gtt's frozenxid need to be care?\n>>>>>3.2 gtt’s clog clean\n>>>>>3.3 How to deal with \"too old\" gtt data\n>>>>>\n>>>>\n>>>>No idea what to do about this.\n>>>>\n>>>\n>>>I wonder what is the specific of GTT here?\n>>>The same problem takes place for normal (local) temp tables, doesn't it?\n>>>\n>>\n>>Not sure. TBH I'm not sure I understand what the issue actually is.\n>\n>Just open session, create temporary table and insert some data in it.\n>Then in other session run 2^31 transactions (at my desktop it takes \n>about 2 hours).\n>As far as temp tables are not proceeded by vacuum, database is stalled:\n>\n> ERROR:  database is not accepting commands to avoid wraparound data \n>loss in database \"postgres\"\n>\n>It seems to be quite dubious behavior and it is strange to me that \n>nobody complains about it.\n>We discuss  many issues related with temp tables (statistic, parallel \n>queries,...) 
which seems to be less critical.\n>\n>But this problem is not specific to GTT - it can be reproduced with \n>normal (local) temp tables.\n>This is why I wonder why do we need to solve it in GTT patch.\n>\n\nYeah, I think that's out of scope for GTT patch. Once we solve it for\nplain temporary tables, we'll solve it for GTT too.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 12 Jan 2020 02:14:01 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Fri, Jan 10, 2020 at 11:47:42AM +0300, Konstantin Knizhnik wrote:\n>\n>\n>On 09.01.2020 19:48, Tomas Vondra wrote:\n>>\n>>>The most complex and challenged task is to support GTT for all \n>>>kind of indexes. Unfortunately I can not proposed some good \n>>>universal solution for it.\n>>>Just patching all existed indexes implementation seems to be the \n>>>only choice.\n>>>\n>>\n>>I haven't looked at the indexing issue closely, but IMO we need to\n>>ensure that every session sees/uses only indexes on GTT that were\n>>defined before the seesion started using the table.\n>\n>Why? It contradicts with behavior of normal tables.\n>Assume that you have active clients and at some point of time DBA \n>recognizes that them are spending to much time in scanning some GTT.\n>It cab create index for this GTT but if existed client will not be \n>able to use this index, then we need somehow make this clients to \n>restart their sessions?\n>In my patch I have implemented building indexes for GTT on demand: if \n>accessed index on GTT is not yet initialized, then it is filled with \n>local data.\n\nYes, I know the behavior would be different from behavior for regular\ntables. 
And yes, it would not allow fixing slow queries in sessions\nwithout interrupting those sessions.\n\nI proposed just ignoring those new indexes because it seems much simpler\nthan alternative solutions that I can think of, and it's not like those\nother solutions don't have other issues.\n\nFor example, I've looked at the \"on demand\" building as implemented in\nglobal_private_temp-8.patch, and adding a bunch of index build\ncalls into various places in the index code seems somewhat suspicious.\n\n* brinbuild is added to brinRevmapInitialize, which is meant to\n initialize state for scanning. It seems wrong to build the index we're\n scanning from this function (layering and all that).\n\n* btbuild is called from _bt_getbuf. That seems a bit ... suspicious?\n\n... and so on for other index types. Also, what about custom indexes\nimplemented in extensions? It seems a bit strange each of them has to\nsupport this separately.\n\nIMHO if this really is the right solution, we need to make it work for\nexisting indexes without having to tweak them individually. Why don't we\ntrack a flag whether an index on GTT was initialized in a given session,\nand if it was not then call the build function before calling any other\nfunction from the index AM? \n\nBut let's talk about other issues caused by \"on demand\" build. Imagine\nyou have 50 sessions, each using the same GTT with a GB of per-session\ndata. Now you create a new index on the GTT, which forces the sessions\nto build its \"local\" index. Those builds will use maintenance_work_mem\neach, so 50 * m_w_m. 
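For a rough sense of the numbers involved (an illustrative sketch only - the 50-session count comes from the scenario above, and 64MB is simply PostgreSQL's default maintenance_work_mem):

```sql
-- Illustrative worst case for the scenario above: 50 sessions each
-- building a "local" copy of the new GTT index, each entitled to use
-- up to maintenance_work_mem.
SELECT current_setting('maintenance_work_mem') AS per_session_limit,
       pg_size_pretty(50 * pg_size_bytes(current_setting('maintenance_work_mem')))
           AS worst_case_total;
-- with the default of 64MB this works out to 3200 MB in total
```
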
I doubt that's expected/sensible.\n\nSo I suggest we start by just ignoring the *new* indexes, and improve\nthis in the future (by building the indexes on demand or whatever).\n\n>>\n>>Can't we track which indexes a particular session sees, somehow?\n>>\n>>>Statistic is another important case.\n>>>But once again I do not completely understand why we want to \n>>>address all this issues with statistic in first version of the \n>>>patch?\n>>\n>>I think the question is which \"issues with statistic\" you mean. I'm sure\n>>we can ignore some of them, e.g. the one with parallel workers not\n>>having any stats (assuming we consider functions using GTT to be\n>>parallel restricted).\n>\n>If we do not use shared buffers for GTT then parallel processing of \n>GTT is not possible at all, so there is no problem with statistic for \n>parallel workers.\n>\n\nRight.\n\n>>\n>>I think someone pointed out pushing stuff directly into the cache is\n>>rather problematic, but I don't recall the details.\n>>\n>I have not encountered any problems, so if you can point me on what is \n>wrong with this approach, I will think about alternative solution.\n>\n\nI meant this comment by Robert:\n\nhttps://www.postgresql.org/message-id/CA%2BTgmoZFWaND4PpT_CJbeu6VZGZKi2rrTuSTL-Ykd97fexTN-w%40mail.gmail.com\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 12 Jan 2020 02:51:09 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 12.01.2020 4:51, Tomas Vondra wrote:\n> On Fri, Jan 10, 2020 at 11:47:42AM +0300, Konstantin Knizhnik wrote:\n>>\n>>\n>> On 09.01.2020 19:48, Tomas Vondra wrote:\n>>>\n>>>> The most complex and challenged task is to support GTT for all kind \n>>>> of indexes. 
Unfortunately I can not proposed some good universal \n>>>> solution for it.\n>>>> Just patching all existed indexes implementation seems to be the \n>>>> only choice.\n>>>>\n>>>\n>>> I haven't looked at the indexing issue closely, but IMO we need to\n>>> ensure that every session sees/uses only indexes on GTT that were\n>>> defined before the seesion started using the table.\n>>\n>> Why? It contradicts with behavior of normal tables.\n>> Assume that you have active clients and at some point of time DBA \n>> recognizes that them are spending to much time in scanning some GTT.\n>> It cab create index for this GTT but if existed client will not be \n>> able to use this index, then we need somehow make this clients to \n>> restart their sessions?\n>> In my patch I have implemented building indexes for GTT on demand: if \n>> accessed index on GTT is not yet initialized, then it is filled with \n>> local data.\n>\n> Yes, I know the behavior would be different from behavior for regular\n> tables. And yes, it would not allow fixing slow queries in sessions\n> without interrupting those sessions.\n>\n> I proposed just ignoring those new indexes because it seems much simpler\n> than alternative solutions that I can think of, and it's not like those\n> other solutions don't have other issues.\n\nQuite the opposite: restricting sessions to use only indexes created \nbefore the session started to use the GTT requires more effort. We need to somehow \nmaintain and check GTT first access time.\n\n>\n> For example, I've looked at the \"on demand\" building as implemented in\n> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n> calls into various places in index code seems somewht suspicious.\n\nWe in any case have to initialize GTT indexes on demand even if we \nprohibit usage of indexes created after first access by a session to GTT.\nSo the difference is only in one thing: should we just initialize empty \nindex or populate it with local data (if rules for index usability are \nthe same for GTT as for normal tables).\n From implementation point of view there is no big difference. Actually \nbuilding index in standard way is even simpler than constructing empty \nindex. Originally I have implemented\nfirst approach (I just forgot to consider case when GTT was already used \nby a session). Then I rewrote it using second approach and the patch even \nbecame simpler.\n\n>\n> * brinbuild is added to brinRevmapInitialize, which is meant to\n>   initialize state for scanning. It seems wrong to build the index we're\n>   scanning from this function (layering and all that).\n>\n> * btbuild is called from _bt_getbuf. That seems a bit ... suspicious?\n\n\nAs I already mentioned - support of indexes for GTT is one of the most \nchallenged things in my patch.\nI didn't find good and universal solution. So I agreed that call of \nbtbuild from _bt_getbuf may be considered as suspicious.\nI will be pleased if you or somebody else can propose a better alternative \nand not only for B-Tree, but for all other indexes.\n\nBut as I already wrote above, prohibiting a session to use indexes \ncreated after first access to GTT doesn't solve the problem.\nFor normal tables (and for local temp tables) indexes are initialized at \nthe time of their creation.\nWith GTT it doesn't work, because each session has its own local data of \nGTT.\nWe should either initialize/build index on demand (when it is first \naccessed), or at the moment of session start initialize indexes for \nall existing GTTs.\nThe last option seems much worse from my point of view: there may be a \nhuge number of GTTs and a session may not need to access GTT at all.\n>\n> ... and so on for other index types. Also, what about custom indexes\n> implemented in extensions? It seems a bit strange each of them has to\n> support this separately.\n\nI have already complained about it: my patch supports GTT for all \nbuilt-in indexes, but custom indexes have to handle it themselves.\nLooks like to provide some generic solution we need to extend the index API, \nproviding two different operations: creation and initialization.\nBut extending the index API is a very critical change... And also it doesn't \nsolve the problem with all existing extensions: they in any case have\nto be rewritten to implement the new API version in order to support GTT.\n>\n> IMHO if this really is the right solution, we need to make it work for\n> existing indexes without having to tweak them individually. Why don't we\n> track a flag whether an index on GTT was initialized in a given session,\n> and if it was not then call the build function before calling any other\n> function from the index AM?\n> But let's talk about other issues caused by \"on demand\" build. Imagine\n> you have 50 sessions, each using the same GTT with a GB of per-session\n> data. 
Now you create a new index on the GTT, which forces the sessions\n> to build it's \"local\" index. Those builds will use maintenance_work_mem\n> each, so 50 * m_w_m. I doubt that's expected/sensible.\n\nI do not see a principal difference here from the scenario when 50 sessions \ncreate a (local) temp table,\npopulate it with a GB of data and create an index for it.\n\n>\n> So I suggest we start by just ignoring the *new* indexes, and improve\n> this in the future (by building the indexes on demand or whatever).\n\nSorry, but I still do not agree with these suggestions:\n- it doesn't simplify things\n- it makes behavior of GTT incompatible with normal tables.\n- it doesn't prevent some bad or unexpected behavior which can't be \ncurrently reproduced with normal (local) temp tables.\n\n>\n>>\n>>>\n>>>>\n>>>>I think someone pointed out pushing stuff directly into the cache is\n>>>>rather problematic, but I don't recall the details.\n>>>>\n>>>I have not encountered any problems, so if you can point me on what is \n>>>wrong with this approach, I will think about alternative solution.\n>>>\n>>\n>>I meant this comment by Robert:\n>>\n>>https://www.postgresql.org/message-id/CA%2BTgmoZFWaND4PpT_CJbeu6VZGZKi2rrTuSTL-Ykd97fexTN-w%40mail.gmail.com\n>>\n>>\n\"if any code tried to access the statistics directly from the table, \nrather than via the caches\".\n\nCurrently the optimizer is accessing statistics through caches. So this \napproach works. If somebody rewrites the optimizer or provides their own custom \noptimizer in an extension which accesses statistics directly\nthen it will really be a problem. But I wonder why bypassing the catalog cache \nmay be needed.\n\nMoreover, if we implement an alternative solution - for example make \npg_statistic a view which combines results for normal tables and GTT - \nthen the existing optimizer has to be rewritten\nbecause it can not access statistics in the way it is doing now. 
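For illustration, the view-based alternative mentioned above could be sketched as follows (a hypothetical sketch only - pg_gtt_stats is the per-session statistics view provided by the patch in this thread, and a real implementation would have to match pg_statistic's column list exactly):

```sql
-- Hypothetical sketch: a view merging regular statistics with
-- session-local GTT statistics.
CREATE VIEW pg_statistic_combined AS
    SELECT * FROM pg_statistic      -- statistics for regular tables
    UNION ALL
    SELECT * FROM pg_gtt_stats;     -- per-session statistics for GTTs
```

Every consumer that currently reads pg_statistic through the syscache would have to be pointed at such a view instead, which is exactly the rewrite cost being objected to here.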
And \nthere will be problems with all existing extensions which are \naccessing statistics in the most natural way - through the system cache.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 13 Jan 2020 11:08:40 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Jan 13, 2020 at 11:08:40AM +0300, Konstantin Knizhnik wrote:\n>\n>\n>On 12.01.2020 4:51, Tomas Vondra wrote:\n>>On Fri, Jan 10, 2020 at 11:47:42AM +0300, Konstantin Knizhnik wrote:\n>>>\n>>>\n>>>On 09.01.2020 19:48, Tomas Vondra wrote:\n>>>>\n>>>>>The most complex and challenged task is to support GTT for all \n>>>>>kind of indexes. Unfortunately I can not proposed some good \n>>>>>universal solution for it.\n>>>>>Just patching all existed indexes implementation seems to be \n>>>>>the only choice.\n>>>>>\n>>>>\n>>>>I haven't looked at the indexing issue closely, but IMO we need to\n>>>>ensure that every session sees/uses only indexes on GTT that were\n>>>>defined before the seesion started using the table.\n>>>\n>>>Why? It contradicts with behavior of normal tables.\n>>>Assume that you have active clients and at some point of time DBA \n>>>recognizes that them are spending to much time in scanning some \n>>>GTT.\n>>>It cab create index for this GTT but if existed client will not be \n>>>able to use this index, then we need somehow make this clients to \n>>>restart their sessions?\n>>>In my patch I have implemented building indexes for GTT on demand: \n>>>if accessed index on GTT is not yet initialized, then it is filled \n>>>with local data.\n>>\n>>Yes, I know the behavior would be different from behavior for regular\n>>tables. 
And yes, it would not allow fixing slow queries in sessions\n>>without interrupting those sessions.\n>>\n>>I proposed just ignoring those new indexes because it seems much simpler\n>>than alternative solutions that I can think of, and it's not like those\n>>other solutions don't have other issues.\n>\n>Quit opposite: prohibiting sessions to see indexes created before \n>session start to use GTT requires more efforts. We need to somehow \n>maintain and check GTT first access time.\n>\n\nHmmm, OK. I'd expect such check to be much simpler than the on-demand\nindex building, but I admit I haven't tried implementing either of those\noptions.\n\n>>\n>>For example, I've looked at the \"on demand\" building as implemented in\n>>global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n>>calls into various places in index code seems somewht suspicious.\n>\n>We in any case has to initialize GTT indexes on demand even if we \n>prohibit usages of indexes created after first access by session to \n>GTT.\n>So the difference is only in one thing: should we just initialize \n>empty index or populate it with local data (if rules for index \n>usability are the same for GTT as for normal tables).\n>From implementation point of view there is no big difference. Actually \n>building index in standard way is even simpler than constructing empty \n>index. Originally I have implemented\n>first approach (I just forgot to consider case when GTT was already \n>user by a session). Then I rewrited it using second approach and patch \n>even became simpler.\n>\n>>\n>>* brinbuild is added to brinRevmapInitialize, which is meant to\n>>� initialize state for scanning. It seems wrong to build the index we're\n>>� scanning from this function (layering and all that).\n>>\n>>* btbuild is called from _bt_getbuf. That seems a bit ... 
suspicious?\n>\n>\n>As I already mentioned - support of indexes for GTT is one of the most \n>challenged things in my patch.\n>I didn't find good and universal solution. So I agreed that call of \n>btbuild from _bt_getbuf may be considered as suspicious.\n>I will be pleased if you or sombody else can propose better \n>elternative and not only for B-Tree, but for all other indexes.\n>\n>But as I already wrote above, prohibiting session to used indexes \n>created after first access to GTT doesn't solve the problem.\n>For normal tables (and for local temp tables) indexes are initialized \n>at the time of their creation.\n>With GTT it doesn't work, because each session has its own local data \n>of GTT.\n>We should either initialize/build index on demand (when it is first \n>accessed), either at the moment of session start initialize indexes \n>for all existed GTTs.\n>Last options seem to be much worser from my point of view: there may \n>me huge number of GTT and session may not need to access GTT at all.\n>>\n>>... and so on for other index types. Also, what about custom indexes\n>>implemented in extensions? It seems a bit strange each of them has to\n>>support this separately.\n>\n>I have already complained about it: my patch supports GTT for all \n>built-in indexes, but custom indexes has to handle it themselves.\n>Looks like to provide some generic solution we need to extend index \n>API, providing two diffrent operations: creation and initialization.\n>But extending index API is very critical change... And also it doesn't \n>solve the problem with all existed extensions: them in any case have\n>to be rewritten to implement new API version in order to support GTT.\n>\n\nWhy not to allow creating only indexes implementing this new API method\n(on GTT)?\n\n>>\n>>IMHO if this really is the right solution, we need to make it work for\n>>existing indexes without having to tweak them individually. 
Why don't we\n>>track a flag whether an index on GTT was initialized in a given session,\n>>and if it was not then call the build function before calling any other\n>>function from the index AM?\n>>But let's talk about other issues caused by \"on demand\" build. Imagine\n>>you have 50 sessions, each using the same GTT with a GB of per-session\n>>data. Now you create a new index on the GTT, which forces the sessions\n>>to build it's \"local\" index. Those builds will use maintenance_work_mem\n>>each, so 50 * m_w_m. I doubt that's expected/sensible.\n>\n>I do not see principle difference here with scenario when 50 sessions \n>create (local) temp table,\n>populate it with GB of data and create index for it.\n>\n\nI'd say the high memory consumption is pretty significant.\n\n>>\n>>So I suggest we start by just ignoring the *new* indexes, and improve\n>>this in the future (by building the indexes on demand or whatever).\n>\n>Sorry, but still do not agree with this suggestions:\n>- it doesn't simplify things\n>- it makes behavior of GTT incompatible with normal tables.\n>- it doesn't prevent some bad or unexpected behavior which can't be \n>currently reproduced with normal (local) temp tables.\n>\n>>\n>>>>\n>>>>I think someone pointed out pushing stuff directly into the cache is\n>>>>rather problematic, but I don't recall the details.\n>>>>\n>>>I have not encountered any problems, so if you can point me on \n>>>what is wrong with this approach, I will think about alternative \n>>>solution.\n>>>\n>>\n>>I meant this comment by Robert:\n>>\n>>https://www.postgresql.org/message-id/CA%2BTgmoZFWaND4PpT_CJbeu6VZGZKi2rrTuSTL-Ykd97fexTN-w%40mail.gmail.com\n>>\n>>\n>\"if any code tried to access the statistics directly from the table, \n>rather than via the caches\".\n>\n>Currently optimizer is accessing statistic though caches. So this \n>approach works. 
If somebody will rewrite optimizer or provide own \n>custom optimizer in extension which access statistic directly\n>then it we really be a problem. But I wonder why bypassing catalog \n>cache may be needed.\n>\n\nI don't know, but it seems extensions like hypopg do it.\n\n>Moreover, if we implement alternative solution - for example make \n>pg_statistic a view which combines results for normal tables and GTT, \n>then existed optimizer has to be rewritten\n>because it can not access statistic in the way it is doing now. And \n>there will be all problem with all existed extensions which are \n>accessing statistic in most natural way - through system cache.\n>\n\nPerhaps. I don't know enough about this part of the code to have a\nstrong opinion.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 13 Jan 2020 17:32:53 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Jan 13, 2020 at 05:32:53PM +0100, Tomas Vondra wrote:\n> On Mon, Jan 13, 2020 at 11:08:40AM +0300, Konstantin Knizhnik wrote:\n> >\n> >\"if any code tried to access the statistics directly from the table, \n> >rather than via the caches\".\n> >\n> >Currently optimizer is accessing statistic though caches. So this \n> >approach works. If somebody will rewrite optimizer or provide own \n> >custom optimizer in extension which access statistic directly\n> >then it we really be a problem. 
But I wonder why bypassing catalog \n> >cache may be needed.\n> >\n> \n> I don't know, but it seems extensions like hypopg do it.\n\nAFAIR, hypopg only opens pg_statistic to use its tupledesc when creating\nstatistics on hypothetical partitions, but it should otherwise never read or\nneed plain pg_statistic rows.\n\n\n", "msg_date": "Mon, 13 Jan 2020 21:12:38 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Jan 13, 2020 at 09:12:38PM +0100, Julien Rouhaud wrote:\n>On Mon, Jan 13, 2020 at 05:32:53PM +0100, Tomas Vondra wrote:\n>> On Mon, Jan 13, 2020 at 11:08:40AM +0300, Konstantin Knizhnik wrote:\n>> >\n>> >\"if any code tried to access the statistics directly from the table,\n>> >rather than via the caches\".\n>> >\n>> >Currently optimizer is accessing statistic though caches. So this\n>> >approach works. If somebody will rewrite optimizer or provide own\n>> >custom optimizer in extension which access statistic directly\n>> >then it we really be a problem. But I wonder why bypassing catalog\n>> >cache may be needed.\n>> >\n>>\n>> I don't know, but it seems extensions like hypopg do it.\n>\n>AFAIR, hypopg only opens pg_statistic to use its tupledesc when creating\n>statistics on hypothetical partitions, but it should otherwise never reads or\n>need plain pg_statistic rows.\n\nAh, OK! Thanks for the clarification. 
I knew it does something with the\ncatalog, didn't realize it only gets the descriptor.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 13 Jan 2020 22:03:01 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Thank you for review my patch.\n\n\n> 2020年1月12日 上午4:27,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> Hi\n> \n> so 11. 1. 2020 v 15:00 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> Hi all\n> \n> This is the latest patch\n> \n> The updates are as follows:\n> 1. Support global temp Inherit table global temp partition table\n> 2. Support serial column in GTT\n> 3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n> 4. Provide view pg_gtt_attached_pids to manage GTT\n> 5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n> 6. Alter GTT or rename GTT is allowed under some conditions\n> \n> \n> Please give me feedback.\n> \n> I tested the functionality\n> \n> 1. 
i think so \"ON COMMIT PRESERVE ROWS\" should be default mode (like local temp tables).\nmakes sense, I will fix it.\n\n> \n> I tested some simple scripts \n> \n> test01.sql\n> \n> CREATE TEMP TABLE foo(a int, b int);\n> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n> ANALYZE foo;\n> SELECT sum(a), sum(b) FROM foo;\n> DROP TABLE foo; -- simulate disconnect\n> \n> \n> after 100 sec, the table pg_attribute has 3.2MB\n> and 64 tps, 6446 transaction\n> \n> test02.sql\n> \n> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n> ANALYZE foo;\n> SELECT sum(a), sum(b) FROM foo;\n> DELETE FROM foo; -- simulate disconnect\n> \n> \n> after 100 sec, 1688 tps, 168830 transactions\n> \n> So performance is absolutely different as we expected.\n> \n> From my perspective, this functionality is great.\nYes, frequent ddl causes catalog bloat, GTT avoids this problem.\n\n> \n> Todo:\n> \n> pg_table_size function doesn't work\nDo you mean that function pg_table_size() need get the storage space used by the one GTT in the entire db(include all session) .\n\n> \n> Regards\n> \n> Pavel\n> \n> \n> Wenjing\n> \n> \n> \n> \n> \n>> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> 写道:\n>> \n>> Hi,\n>> \n>> I think we need to do something with having two patches aiming to add\n>> global temporary tables:\n>> \n>> [1] https://commitfest.postgresql.org/26/2349/ <https://commitfest.postgresql.org/26/2349/>\n>> \n>> [2] https://commitfest.postgresql.org/26/2233/ <https://commitfest.postgresql.org/26/2233/>\n>> \n>> As a reviewer I have no idea which of the threads to look at - certainly\n>> not without reading both threads, which I doubt anyone will really do.\n>> The reviews and discussions are somewhat intermixed between those two\n>> threads, which makes it even more confusing.\n>> \n>> I think we should agree on a minimal patch combining the necessary/good\n>> bits from the 
various patches, and terminate one of the threads (i.e.\n>> mark it as rejected or RWF). And we need to do that now, otherwise\n>> there's about 0% chance of getting this into v13.\n>> \n>> In general, I agree with the sentiment Rober expressed in [1] - the\n>> patch needs to be as small as possible, not adding \"nice to have\"\n>> features (like support for parallel queries - I very much doubt just\n>> using shared instead of local buffers is enough to make it work.)\n>> \n>> regards\n>> \n>> -- \n>> Tomas Vondra http://www.2ndQuadrant.com <http://www.2ndquadrant.com/>\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n> \n\n\n", "msg_date": "Tue, 14 Jan 2020 21:09:34 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "út 14. 1. 2020 v 14:09 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n> Thank you for review my patch.\n>\n>\n> 2020年1月12日 上午4:27,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n> Hi\n>\n> so 11. 1. 2020 v 15:00 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>> Hi all\n>>\n>> This is the latest patch\n>>\n>> The updates are as follows:\n>> 1. Support global temp Inherit table global temp partition table\n>> 2. Support serial column in GTT\n>> 3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n>> 4. Provide view pg_gtt_attached_pids to manage GTT\n>> 5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n>> 6. Alter GTT or rename GTT is allowed under some conditions\n>>\n>>\n>> Please give me feedback.\n>>\n>\n> I tested the functionality\n>\n> 1. 
i think so \"ON COMMIT PRESERVE ROWS\" should be default mode (like local\n> temp tables).\n>\n> makes sense, I will fix it.\n>\n>\n> I tested some simple scripts\n>\n> test01.sql\n>\n> CREATE TEMP TABLE foo(a int, b int);\n> INSERT INTO foo SELECT random()*100, random()*1000 FROM\n> generate_series(1,1000);\n> ANALYZE foo;\n> SELECT sum(a), sum(b) FROM foo;\n> DROP TABLE foo; -- simulate disconnect\n>\n>\n> after 100 sec, the table pg_attribute has 3.2MB\n> and 64 tps, 6446 transaction\n>\n> test02.sql\n>\n> INSERT INTO foo SELECT random()*100, random()*1000 FROM\n> generate_series(1,1000);\n> ANALYZE foo;\n> SELECT sum(a), sum(b) FROM foo;\n> DELETE FROM foo; -- simulate disconnect\n>\n>\n> after 100 sec, 1688 tps, 168830 transactions\n>\n> So performance is absolutely different as we expected.\n>\n> From my perspective, this functionality is great.\n>\n> Yes, frequent ddl causes catalog bloat, GTT avoids this problem.\n>\n>\n> Todo:\n>\n> pg_table_size function doesn't work\n>\n> Do you mean that function pg_table_size() need get the storage space used\n> by the one GTT in the entire db(include all session) .\n>\n\nIt's question how much GTT tables should be similar to classic tables. 
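To make the open question concrete, here is a sketch of the ambiguity (illustrative only; CREATE GLOBAL TEMPORARY TABLE is the syntax of the patch under discussion):

```sql
CREATE GLOBAL TEMPORARY TABLE foo(a int, b int) ON COMMIT PRESERVE ROWS;
INSERT INTO foo SELECT i, i FROM generate_series(1, 100000) i;

-- Should this report only the current session's private storage for foo,
-- or the total across every session attached to the GTT?
SELECT pg_table_size('foo');
```
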
But\nthe reporting in psql should work \\dt+, \\l+, \\di+\n\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Wenjing\n>>\n>>\n>>\n>>\n>>\n>> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com> 写道:\n>>\n>> Hi,\n>>\n>> I think we need to do something with having two patches aiming to add\n>> global temporary tables:\n>>\n>> [1] https://commitfest.postgresql.org/26/2349/\n>>\n>> [2] https://commitfest.postgresql.org/26/2233/\n>>\n>> As a reviewer I have no idea which of the threads to look at - certainly\n>> not without reading both threads, which I doubt anyone will really do.\n>> The reviews and discussions are somewhat intermixed between those two\n>> threads, which makes it even more confusing.\n>>\n>> I think we should agree on a minimal patch combining the necessary/good\n>> bits from the various patches, and terminate one of the threads (i.e.\n>> mark it as rejected or RWF). And we need to do that now, otherwise\n>> there's about 0% chance of getting this into v13.\n>>\n>> In general, I agree with the sentiment Rober expressed in [1] - the\n>> patch needs to be as small as possible, not adding \"nice to have\"\n>> features (like support for parallel queries - I very much doubt just\n>> using shared instead of local buffers is enough to make it work.)\n>>\n>> regards\n>>\n>> --\n>> Tomas Vondra http://www.2ndQuadrant.com\n>> <http://www.2ndquadrant.com/>\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>>\n>>\n>\n", "msg_date": "Tue, 14 Jan 2020 14:20:17 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n> 2020年1月12日 上午9:14,Tomas Vondra <tomas.vondra@2ndquadrant.com> 写道:\n> \n> On Fri, Jan 10, 2020 at 03:24:34PM +0300, Konstantin Knizhnik wrote:\n>> \n>> \n>> On 09.01.2020 19:30, Tomas Vondra wrote:\n>> \n>> \n>>> \n>>>> \n>>>>> \n>>>>>> 3 Still no one commented on GTT's transaction information processing, they include\n>>>>>> 3.1 Should gtt's frozenxid need to be care?\n>>>>>> 3.2 gtt’s clog clean\n>>>>>> 3.3 How to deal with \\\"too old\\\" gtt data\n>>>>>> \n>>>>> \n>>>>> No idea what to do about this.\n>>>>> \n>>>> \n>>>> I wonder what is the specific of GTT here?\n>>>> The same problem takes place for normal (local) temp tables, doesn't it?\n>>>> \n>>> \n>>> Not sure. TBH I'm not sure I understand what the issue actually is.\n>> \n>> Just open session, create temporary table and insert some data in it.\n>> Then in other session run 2^31 transactions (at my desktop it takes about 2 hours).\n>> As far as temp tables are not proceeded by vacuum, database is stalled:\n>> \n>> ERROR: database is not accepting commands to avoid wraparound data loss in database \\\"postgres\\\"\n>> \n>> It seems to be quite dubious behavior and it is strange to me that nobody complains about it.\n>> We discuss many issues related with temp tables (statistic, parallel queries,...)
which seems to be less critical.\n>> \n>> But this problem is not specific to GTT - it can be reproduced with normal (local) temp tables.\n>> This is why I wonder why do we need to solve it in GTT patch.\n>> \n> \n> Yeah, I think that's out of scope for GTT patch. Once we solve it for\n> plain temporary tables, we'll solve it for GTT too.\n1. The core problem is that the data contains transaction information (xid), which needs to be vacuum(freeze) regularly to avoid running out of xid.\nThe autovacuum supports vacuum regular table but local temp does not. autovacuum also does not support GTT.\n\n2. However, the difference between the local temp table and the global temp table(GTT) is that\na) For local temp table: one table hava one piece of data. the frozenxid of one local temp table is store in the catalog(pg_class). \nb) For global temp table: each session has a separate copy of data, one GTT may contain maxbackend frozenxid.\nand I don't think it's a good idea to keep frozenxid of GTT in the catalog(pg_class). \nIt becomes a question: how to handle GTT transaction information?\n\nI agree that problem 1 should be completely solved by a some feature, such as local transactions. It is definitely not included in the GTT patch.\n\nBut, I think we need to ensure the durability of GTT data. For example, data in GTT cannot be lost due to the clog being cleaned up. It belongs to problem 2.\n\n\n\nWenjing\n\n\n> \n> regards\n> \n> -- \n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 14 Jan 2020 22:15:11 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月14日 下午9:20,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> út 14. 1. 
2020 v 14:09 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> Thank you for review my patch.\n> \n> \n>> 2020年1月12日 上午4:27,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> Hi\n>> \n>> so 11. 1. 2020 v 15:00 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> Hi all\n>> \n>> This is the latest patch\n>> \n>> The updates are as follows:\n>> 1. Support global temp Inherit table global temp partition table\n>> 2. Support serial column in GTT\n>> 3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n>> 4. Provide view pg_gtt_attached_pids to manage GTT\n>> 5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n>> 6. Alter GTT or rename GTT is allowed under some conditions\n>> \n>> \n>> Please give me feedback.\n>> \n>> I tested the functionality\n>> \n>> 1. i think so \"ON COMMIT PRESERVE ROWS\" should be default mode (like local temp tables).\n> makes sense, I will fix it.\n> \n>> \n>> I tested some simple scripts \n>> \n>> test01.sql\n>> \n>> CREATE TEMP TABLE foo(a int, b int);\n>> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n>> ANALYZE foo;\n>> SELECT sum(a), sum(b) FROM foo;\n>> DROP TABLE foo; -- simulate disconnect\n>> \n>> \n>> after 100 sec, the table pg_attribute has 3.2MB\n>> and 64 tps, 6446 transaction\n>> \n>> test02.sql\n>> \n>> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n>> ANALYZE foo;\n>> SELECT sum(a), sum(b) FROM foo;\n>> DELETE FROM foo; -- simulate disconnect\n>> \n>> \n>> after 100 sec, 1688 tps, 168830 transactions\n>> \n>> So performance is absolutely different as we expected.\n>> \n>> From my perspective, this functionality is great.\n> Yes, frequent ddl causes catalog bloat, GTT avoids this problem.\n> \n>> \n>> Todo:\n>> \n>> pg_table_size function doesn't work\n> Do you mean that function pg_table_size() need get the 
storage space used by the one GTT in the entire db(include all session) .\n> \n> It's question how much GTT tables should be similar to classic tables. But the reporting in psql should to work \\dt+, \\l+, \\di+\nGet it, I will fix it.\n> \n> \n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> \n>>> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> 写道:\n>>> \n>>> Hi,\n>>> \n>>> I think we need to do something with having two patches aiming to add\n>>> global temporary tables:\n>>> \n>>> [1] https://commitfest.postgresql.org/26/2349/ <https://commitfest.postgresql.org/26/2349/>\n>>> \n>>> [2] https://commitfest.postgresql.org/26/2233/ <https://commitfest.postgresql.org/26/2233/>\n>>> \n>>> As a reviewer I have no idea which of the threads to look at - certainly\n>>> not without reading both threads, which I doubt anyone will really do.\n>>> The reviews and discussions are somewhat intermixed between those two\n>>> threads, which makes it even more confusing.\n>>> \n>>> I think we should agree on a minimal patch combining the necessary/good\n>>> bits from the various patches, and terminate one of the threads (i.e.\n>>> mark it as rejected or RWF). And we need to do that now, otherwise\n>>> there's about 0% chance of getting this into v13.\n>>> \n>>> In general, I agree with the sentiment Rober expressed in [1] - the\n>>> patch needs to be as small as possible, not adding \"nice to have\"\n>>> features (like support for parallel queries - I very much doubt just\n>>> using shared instead of local buffers is enough to make it work.)\n>>> \n>>> regards\n>>> \n>>> -- \n>>> Tomas Vondra http://www.2ndQuadrant.com <http://www.2ndquadrant.com/>\n>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>> \n> ", "msg_date": "Tue, 14 Jan 2020 22:16:12 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n> 2020年1月13日 下午4:08,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 12.01.2020 4:51, Tomas Vondra wrote:\n>> On Fri, Jan 10, 2020 at 11:47:42AM +0300, Konstantin Knizhnik wrote:\n>>> \n>>> \n>>> On 09.01.2020 19:48, Tomas Vondra wrote:\n>>>> \n>>>>> The most complex and challenged task is to support GTT for all kind of indexes.
Unfortunately I cannot propose some good universal solution for it.\n>>>>> Just patching all existing index implementations seems to be the only choice.\n>>>>> \n>>>> \n>>>> I haven't looked at the indexing issue closely, but IMO we need to\n>>>> ensure that every session sees/uses only indexes on GTT that were\n>>>> defined before the session started using the table.\n>>> \n>>> Why? It contradicts the behavior of normal tables.\n>>> Assume that you have active clients and at some point of time the DBA recognizes that they are spending too much time scanning some GTT.\n>>> It can create an index for this GTT, but if existing clients are not able to use this index, do we then need to somehow make these clients restart their sessions?\n>>> In my patch I have implemented building indexes for GTT on demand: if an accessed index on GTT is not yet initialized, then it is filled with local data.\n>> \n>> Yes, I know the behavior would be different from behavior for regular\n>> tables. And yes, it would not allow fixing slow queries in sessions\n>> without interrupting those sessions.\n>> \n>> I proposed just ignoring those new indexes because it seems much simpler\n>> than alternative solutions that I can think of, and it's not like those\n>> other solutions don't have other issues.\n> \n> Quite the opposite: prohibiting sessions from seeing indexes created after the session started to use the GTT requires more effort.
We need to somehow maintain and check GTT first access time.\n> \n>> \n>> For example, I've looked at the \"on demand\" building as implemented in\n>> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n>> calls into various places in index code seems somewht suspicious.\n> \n> We in any case has to initialize GTT indexes on demand even if we prohibit usages of indexes created after first access by session to GTT.\n> So the difference is only in one thing: should we just initialize empty index or populate it with local data (if rules for index usability are the same for GTT as for normal tables).\n> From implementation point of view there is no big difference. Actually building index in standard way is even simpler than constructing empty index. Originally I have implemented\n> first approach (I just forgot to consider case when GTT was already user by a session). Then I rewrited it using second approach and patch even became simpler.\n> \n>> \n>> * brinbuild is added to brinRevmapInitialize, which is meant to\n>> initialize state for scanning. It seems wrong to build the index we're\n>> scanning from this function (layering and all that).\n>> \n>> * btbuild is called from _bt_getbuf. That seems a bit ... suspicious?\n> \n> \n> As I already mentioned - support of indexes for GTT is one of the most challenged things in my patch.\n> I didn't find good and universal solution. 
So I agreed that the call of btbuild from _bt_getbuf may be considered suspicious.\n> I will be pleased if you or somebody else can propose a better alternative, and not only for B-Tree, but for all other indexes.\n> \n> But as I already wrote above, prohibiting a session to use indexes created after first access to GTT doesn't solve the problem.\n> For normal tables (and for local temp tables) indexes are initialized at the time of their creation.\n> With GTT it doesn't work, because each session has its own local data of GTT.\n> We should either initialize/build the index on demand (when it is first accessed), or at the moment of session start initialize indexes for all existing GTTs.\n> The last option seems much worse from my point of view: there may be a huge number of GTTs and the session may not need to access GTT at all.\n>> \n>> ... and so on for other index types. Also, what about custom indexes\n>> implemented in extensions? It seems a bit strange each of them has to\n>> support this separately.\n> \n> I have already complained about it: my patch supports GTT for all built-in indexes, but custom indexes have to handle it themselves.\n> Looks like to provide some generic solution we need to extend the index API, providing two different operations: creation and initialization.\n> But extending the index API is a very critical change... And also it doesn't solve the problem with all existing extensions: they in any case have
Imagine\n>> you have 50 sessions, each using the same GTT with a GB of per-session\n>> data. Now you create a new index on the GTT, which forces the sessions\n>> to build it's \"local\" index. Those builds will use maintenance_work_mem\n>> each, so 50 * m_w_m. I doubt that's expected/sensible.\n> \n> I do not see principle difference here with scenario when 50 sessions create (local) temp table,\n> populate it with GB of data and create index for it.\nI think the problem is that when one session completes the creation of the index on GTT,\nit will trigger the other sessions build own local index of GTT in a centralized time.\nThis will consume a lot of hardware resources (cpu io memory) in a short time, \nand even the database service becomes slow, because 50 sessions are building index.\nI think this is not what we expected.\n\n> \n>> \n>> So I suggest we start by just ignoring the *new* indexes, and improve\n>> this in the future (by building the indexes on demand or whatever).\n> \n> Sorry, but still do not agree with this suggestions:\n> - it doesn't simplify things\n> - it makes behavior of GTT incompatible with normal tables.\n> - it doesn't prevent some bad or unexpected behavior which can't be currently reproduced with normal (local) temp tables.\nFrom a user perspective, this proposal is reasonable.\nFrom an implementation perspective, the same GTT index needs to maintain different states (valid or invalid) in different sessions, \nwhich seems difficult to do in the current framework.\n\nSo in my first version, I chose to complete all index creation before using GTT.\nI think this will satisfy most use cases.\n\n> \n>> \n>>>> \n>>>> I think someone pointed out pushing stuff directly into the cache is\n>>>> rather problematic, but I don't recall the details.\n>>>> \n>>> I have not encountered any problems, so if you can point me on what is wrong with this approach, I will think about alternative solution.\n>>> \n>> \n>> I meant this comment by Robert:\n>> 
\n>> https://www.postgresql.org/message-id/CA%2BTgmoZFWaND4PpT_CJbeu6VZGZKi2rrTuSTL-Ykd97fexTN-w%40mail.gmail.com \n>> \n> \"if any code tried to access the statistics directly from the table, rather than via the caches\".\n> \n> Currently optimizer is accessing statistic though caches. So this approach works. If somebody will rewrite optimizer or provide own custom optimizer in extension which access statistic directly\n> then it we really be a problem. But I wonder why bypassing catalog cache may be needed.\n> \n> Moreover, if we implement alternative solution - for example make pg_statistic a view which combines results for normal tables and GTT, then existed optimizer has to be rewritten\n> because it can not access statistic in the way it is doing now. And there will be all problem with all existed extensions which are accessing statistic in most natural way - through system cache.\n> \n> \n> \n> -- \n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n\n\n\n", "msg_date": "Wed, 15 Jan 2020 21:10:25 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 15.01.2020 16:10, 曾文旌(义从) wrote:\n>\n>> I do not see principle difference here with scenario when 50 sessions create (local) temp table,\n>> populate it with GB of data and create index for it.\n> I think the problem is that when one session completes the creation of the index on GTT,\n> it will trigger the other sessions build own local index of GTT in a centralized time.\n> This will consume a lot of hardware resources (cpu io memory) in a short time,\n> and even the database service becomes slow, because 50 sessions are building index.\n> I think this is not what we expected.\n\n\nFirst of all creating index for GTT ni one session doesn't immediately \ninitiate building indexes in all other sessions.\nIndexes 
are built on demand. If session is not using this GTT any more, \nthen index for it will not build at all.\nAnd if GTT is really are actively used by all sessions, then building \nindex and using it for constructing optimal execution plan is better,\nthen continue to  use sequential scan and read all GTT data from the disk.\n\nAnd as I already mentioned I do not see some principle difference in \naspect of resource consumptions comparing with current usage of local \ntemp tables.\nIf we have have many sessions, each creating temp table, populating it \nwith data and building index for it, then we will\nobserve the same CPU utilization and memory resource consumption as in \ncase of using GTT and creating index for it.\n\nSorry, but I still not convinced by your and Tomas arguments.\nYes, building GTT index may cause high memory consumption \n(maintenance_work_mem * n_backends).\nBut such consumption can be  observed also without GTT and it has to be \ntaken in account when choosing value for maintenance_work_mem.\nBut from my point of view it is much more important to make behavior of \nGTT as much compatible with normal tables as possible.\nAlso from database administration point of view, necessity to restart \nsessions to make then use new indexes seems to be very strange and \ninconvenient.\nAlternatively DBA can address the problem with high memory consumption \nby adjusting maintenance_work_mem, so this solution is more flexible.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 16 Jan 2020 10:23:33 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月14日 下午9:20,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> út 14. 1. 
2020 v 14:09 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> Thank you for review my patch.\n> \n> \n>> 2020年1月12日 上午4:27,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> Hi\n>> \n>> so 11. 1. 2020 v 15:00 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> Hi all\n>> \n>> This is the latest patch\n>> \n>> The updates are as follows:\n>> 1. Support global temp Inherit table global temp partition table\n>> 2. Support serial column in GTT\n>> 3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n>> 4. Provide view pg_gtt_attached_pids to manage GTT\n>> 5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n>> 6. Alter GTT or rename GTT is allowed under some conditions\n>> \n>> \n>> Please give me feedback.\n>> \n>> I tested the functionality\n>> \n>> 1. i think so \"ON COMMIT PRESERVE ROWS\" should be default mode (like local temp tables).\n> makes sense, I will fix it.\n> \n>> \n>> I tested some simple scripts \n>> \n>> test01.sql\n>> \n>> CREATE TEMP TABLE foo(a int, b int);\n>> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n>> ANALYZE foo;\n>> SELECT sum(a), sum(b) FROM foo;\n>> DROP TABLE foo; -- simulate disconnect\n>> \n>> \n>> after 100 sec, the table pg_attribute has 3.2MB\n>> and 64 tps, 6446 transaction\n>> \n>> test02.sql\n>> \n>> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n>> ANALYZE foo;\n>> SELECT sum(a), sum(b) FROM foo;\n>> DELETE FROM foo; -- simulate disconnect\n>> \n>> \n>> after 100 sec, 1688 tps, 168830 transactions\n>> \n>> So performance is absolutely different as we expected.\n>> \n>> From my perspective, this functionality is great.\n> Yes, frequent ddl causes catalog bloat, GTT avoids this problem.\n> \n>> \n>> Todo:\n>> \n>> pg_table_size function doesn't work\n> Do you mean that function pg_table_size() need get the 
storage space used by the one GTT in the entire db(include all session) .\n> \n> It's question how much GTT tables should be similar to classic tables. But the reporting in psql should to work \\dt+, \\l+, \\di+\n\nI have fixed this problem.\n\nPlease let me know where I need to improve.\n\nThanks\n\n\nWenjing\n\n\n\n\n> \n> \n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> \n>>> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> 写道:\n>>> \n>>> Hi,\n>>> \n>>> I think we need to do something with having two patches aiming to add\n>>> global temporary tables:\n>>> \n>>> [1] https://commitfest.postgresql.org/26/2349/ <https://commitfest.postgresql.org/26/2349/>\n>>> \n>>> [2] https://commitfest.postgresql.org/26/2233/ <https://commitfest.postgresql.org/26/2233/>\n>>> \n>>> As a reviewer I have no idea which of the threads to look at - certainly\n>>> not without reading both threads, which I doubt anyone will really do.\n>>> The reviews and discussions are somewhat intermixed between those two\n>>> threads, which makes it even more confusing.\n>>> \n>>> I think we should agree on a minimal patch combining the necessary/good\n>>> bits from the various patches, and terminate one of the threads (i.e.\n>>> mark it as rejected or RWF). 
And we need to do that now, otherwise\n>>> there's about 0% chance of getting this into v13.\n>>> \n>>> In general, I agree with the sentiment Rober expressed in [1] - the\n>>> patch needs to be as small as possible, not adding \"nice to have\"\n>>> features (like support for parallel queries - I very much doubt just\n>>> using shared instead of local buffers is enough to make it work.)\n>>> \n>>> regards\n>>> \n>>> -- \n>>> Tomas Vondra http://www.2ndQuadrant.com <http://www.2ndquadrant.com/>\n>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>> \n>", "msg_date": "Mon, 20 Jan 2020 01:04:38 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 2020-01-19 18:04, 曾文旌(义从) wrote:\n>> 2020年1月14日 下午9:20,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>> út 14. 1. 2020 v 14:09 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com \n>> <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n\n>> [global_temporary_table_v4-pg13.patch ]\n\nHi,\n\nThis patch doesn't quiet apply for me:\n\npatching file src/backend/access/common/reloptions.c\npatching file src/backend/access/gist/gistutil.c\npatching file src/backend/access/hash/hash.c\nHunk #1 succeeded at 149 (offset 3 lines).\npatching file src/backend/access/heap/heapam_handler.c\npatching file src/backend/access/heap/vacuumlazy.c\npatching file src/backend/access/nbtree/nbtpage.c\npatching file src/backend/access/table/tableam.c\npatching file src/backend/access/transam/xlog.c\npatching file src/backend/catalog/Makefile\nHunk #1 FAILED at 44.\n1 out of 1 hunk FAILED -- saving rejects to file \nsrc/backend/catalog/Makefile.rej\n[...]\n (The rest applies without errors)\n\nsrc/backend/catalog/Makefile.rej contains:\n\n------------------------\n--- src/backend/catalog/Makefile\n+++ src/backend/catalog/Makefile\n@@ -44,6 +44,8 @@ OBJS = \\\n \tstorage.o \\\n \ttoasting.o\n\n+OBJS 
+= storage_gtt.o\n+\n BKIFILES = postgres.bki postgres.description postgres.shdescription\n\n include $(top_srcdir)/src/backend/common.mk\n------------------------\n\nCan you have a look?\n\n\nthanks,\n\nErik Rijkers\n\n\n\n\n\n\n\n\n", "msg_date": "Sun, 19 Jan 2020 18:32:58 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月20日 上午1:32,Erik Rijkers <er@xs4all.nl> 写道:\n> \n> On 2020-01-19 18:04, 曾文旌(义从) wrote:\n>>> 2020年1月14日 下午9:20,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>>> út 14. 1. 2020 v 14:09 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n>>> [global_temporary_table_v4-pg13.patch ]\n> \n> Hi,\n> \n> This patch doesn't quiet apply for me:\n> \n> patching file src/backend/access/common/reloptions.c\n> patching file src/backend/access/gist/gistutil.c\n> patching file src/backend/access/hash/hash.c\n> Hunk #1 succeeded at 149 (offset 3 lines).\n> patching file src/backend/access/heap/heapam_handler.c\n> patching file src/backend/access/heap/vacuumlazy.c\n> patching file src/backend/access/nbtree/nbtpage.c\n> patching file src/backend/access/table/tableam.c\n> patching file src/backend/access/transam/xlog.c\n> patching file src/backend/catalog/Makefile\n> Hunk #1 FAILED at 44.\n> 1 out of 1 hunk FAILED -- saving rejects to file src/backend/catalog/Makefile.rej\n> [...]\n> (The rest applies without errors)\n> \n> src/backend/catalog/Makefile.rej contains:\n> \n> ------------------------\n> --- src/backend/catalog/Makefile\n> +++ src/backend/catalog/Makefile\n> @@ -44,6 +44,8 @@ OBJS = \\\n> \tstorage.o \\\n> \ttoasting.o\n> \n> +OBJS += storage_gtt.o\n> +\n> BKIFILES = postgres.bki postgres.description postgres.shdescription\n> \n> include $(top_srcdir)/src/backend/common.mk\n> ------------------------\n> \n> Can you have a look?\nI updated the code and remade the patch.\nPlease give me feedback if you 
have any more questions.\n\n\n\n\n> \n> \n> thanks,\n> \n> Erik Rijkers\n> \n> \n> \n> \n>", "msg_date": "Tue, 21 Jan 2020 00:27:17 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi\n\nI have some free time this evening, so I will check this patch\n\nI have one question\n\n+ /* global temp table get relstats from localhash */\n+ if (RELATION_IS_GLOBAL_TEMP(rel))\n+ {\n+ get_gtt_relstats(RelationGetRelid(rel),\n+ &relpages, &reltuples, &relallvisible,\n+ NULL, NULL);\n+ }\n+ else\n+ {\n+ /* coerce values in pg_class to more desirable types */\n+ relpages = (BlockNumber) rel->rd_rel->relpages;\n+ reltuples = (double) rel->rd_rel->reltuples;\n+ relallvisible = (BlockNumber) rel->rd_rel->relallvisible;\n+ }\n\nIsn't it possible to fill the rd_rel structure too, so this branching can be\nreduced?\n\nRegards\n\nPavel\n\npo 20. 1. 2020 v 17:27 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> > 2020年1月20日 上午1:32,Erik Rijkers <er@xs4all.nl> 写道:\n> >\n> > On 2020-01-19 18:04, 曾文旌(义从) wrote:\n> >>> 2020年1月14日 下午9:20,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> >>> út 14. 1.
2020 v 14:09 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com\n> <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> >\n> >>> [global_temporary_table_v4-pg13.patch ]\n> >\n> > Hi,\n> >\n> > This patch doesn't quiet apply for me:\n> >\n> > patching file src/backend/access/common/reloptions.c\n> > patching file src/backend/access/gist/gistutil.c\n> > patching file src/backend/access/hash/hash.c\n> > Hunk #1 succeeded at 149 (offset 3 lines).\n> > patching file src/backend/access/heap/heapam_handler.c\n> > patching file src/backend/access/heap/vacuumlazy.c\n> > patching file src/backend/access/nbtree/nbtpage.c\n> > patching file src/backend/access/table/tableam.c\n> > patching file src/backend/access/transam/xlog.c\n> > patching file src/backend/catalog/Makefile\n> > Hunk #1 FAILED at 44.\n> > 1 out of 1 hunk FAILED -- saving rejects to file\n> src/backend/catalog/Makefile.rej\n> > [...]\n> > (The rest applies without errors)\n> >\n> > src/backend/catalog/Makefile.rej contains:\n> >\n> > ------------------------\n> > --- src/backend/catalog/Makefile\n> > +++ src/backend/catalog/Makefile\n> > @@ -44,6 +44,8 @@ OBJS = \\\n> > storage.o \\\n> > toasting.o\n> >\n> > +OBJS += storage_gtt.o\n> > +\n> > BKIFILES = postgres.bki postgres.description postgres.shdescription\n> >\n> > include $(top_srcdir)/src/backend/common.mk\n> > ------------------------\n> >\n> > Can you have a look?\n> I updated the code and remade the patch.\n> Please give me feedback if you have any more questions.\n>\n>\n>\n>\n> >\n> >\n> > thanks,\n> >\n> > Erik Rijkers\n> >\n> >\n> >\n> >\n> >\n>\n>\n", 
"msg_date": "Tue, 21 Jan 2020 06:43:32 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月12日 上午4:27,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> Hi\n> \n> so 11. 1. 2020 v 15:00 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> Hi all\n> \n> This is the latest patch\n> \n> The updates are as follows:\n> 1. Support global temp Inherit table global temp partition table\n> 2. Support serial column in GTT\n> 3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n> 4. Provide view pg_gtt_attached_pids to manage GTT\n> 5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n> 6. Alter GTT or rename GTT is allowed under some conditions\n> \n> \n> Please give me feedback.\n> \n> I tested the functionality\n> \n> 1. i think so \"ON COMMIT PRESERVE ROWS\" should be default mode (like local temp tables).\n\nON COMMIT PRESERVE ROWS is default mode now.\n\n\nWenjing\n\n\n\n> \n> I tested some simple scripts \n> \n> test01.sql\n> \n> CREATE TEMP TABLE foo(a int, b int);\n> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n> ANALYZE foo;\n> SELECT sum(a), sum(b) FROM foo;\n> DROP TABLE foo; -- simulate disconnect\n> \n> \n> after 100 sec, the table pg_attribute has 3.2MB\n> and 64 tps, 6446 transaction\n> \n> test02.sql\n> \n> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n> ANALYZE foo;\n> SELECT sum(a), sum(b) FROM foo;\n> DELETE FROM foo; -- simulate disconnect\n> \n> \n> after 100 sec, 1688 tps, 168830 transactions\n> \n> So performance is absolutely different as we expected.\n> \n> From my perspective, this functionality is great.\n> \n> Todo:\n> \n> pg_table_size function doesn't work\n> \n> Regards\n> \n> Pavel\n> \n> \n> Wenjing\n> \n> \n> \n> \n> \n>> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com 
<mailto:tomas.vondra@2ndquadrant.com>> 写道:\n>> \n>> Hi,\n>> \n>> I think we need to do something with having two patches aiming to add\n>> global temporary tables:\n>> \n>> [1] https://commitfest.postgresql.org/26/2349/ <https://commitfest.postgresql.org/26/2349/>\n>> \n>> [2] https://commitfest.postgresql.org/26/2233/ <https://commitfest.postgresql.org/26/2233/>\n>> \n>> As a reviewer I have no idea which of the threads to look at - certainly\n>> not without reading both threads, which I doubt anyone will really do.\n>> The reviews and discussions are somewhat intermixed between those two\n>> threads, which makes it even more confusing.\n>> \n>> I think we should agree on a minimal patch combining the necessary/good\n>> bits from the various patches, and terminate one of the threads (i.e.\n>> mark it as rejected or RWF). And we need to do that now, otherwise\n>> there's about 0% chance of getting this into v13.\n>> \n>> In general, I agree with the sentiment Rober expressed in [1] - the\n>> patch needs to be as small as possible, not adding \"nice to have\"\n>> features (like support for parallel queries - I very much doubt just\n>> using shared instead of local buffers is enough to make it work.)\n>> \n>> regards\n>> \n>> -- \n>> Tomas Vondra http://www.2ndQuadrant.com <http://www.2ndquadrant.com/>\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>", "msg_date": "Tue, 21 Jan 2020 16:45:53 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "út 21. 1. 2020 v 9:46 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> 2020年1月12日 上午4:27,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n> Hi\n>\n> so 11. 1. 2020 v 15:00 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>> Hi all\n>>\n>> This is the latest patch\n>>\n>> The updates are as follows:\n>> 1. 
Support global temp Inherit table global temp partition table\n>> 2. Support serial column in GTT\n>> 3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n>> 4. Provide view pg_gtt_attached_pids to manage GTT\n>> 5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n>> 6. Alter GTT or rename GTT is allowed under some conditions\n>>\n>>\n>> Please give me feedback.\n>>\n>\n> I tested the functionality\n>\n> 1. i think so \"ON COMMIT PRESERVE ROWS\" should be default mode (like local\n> temp tables).\n>\n>\n> ON COMMIT PRESERVE ROWS is default mode now.\n>\n\nThank you\n\n* I tried to create global temp table with index. When I tried to drop this\ntable (and this table was used by second instance), then I got error message\n\npostgres=# drop table foo;\nERROR: can not drop index when other backend attached this global temp\ntable\n\nIt is expected, but it is not too much user friendly. Is better to check if\nyou can drop table, then lock it, and then drop all objects.\n\n* tab complete can be nice for CREATE GLOBAL TEMP table\n\n\\dt+ \\di+ doesn't work correctly, or maybe I don't understand to the\nimplementation.\n\nI see same size in all sessions. 
Global temp tables shares same files?\n\nRegards\n\nPavel\n\n\n\n\n>\n> Wenjing\n>\n>\n>\n>\n> I tested some simple scripts\n>\n> test01.sql\n>\n> CREATE TEMP TABLE foo(a int, b int);\n> INSERT INTO foo SELECT random()*100, random()*1000 FROM\n> generate_series(1,1000);\n> ANALYZE foo;\n> SELECT sum(a), sum(b) FROM foo;\n> DROP TABLE foo; -- simulate disconnect\n>\n>\n> after 100 sec, the table pg_attribute has 3.2MB\n> and 64 tps, 6446 transaction\n>\n> test02.sql\n>\n> INSERT INTO foo SELECT random()*100, random()*1000 FROM\n> generate_series(1,1000);\n> ANALYZE foo;\n> SELECT sum(a), sum(b) FROM foo;\n> DELETE FROM foo; -- simulate disconnect\n>\n>\n> after 100 sec, 1688 tps, 168830 transactions\n>\n> So performance is absolutely different as we expected.\n>\n> From my perspective, this functionality is great.\n>\n> Todo:\n>\n> pg_table_size function doesn't work\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Wenjing\n>>\n>>\n>>\n>>\n>>\n>> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com> 写道:\n>>\n>> Hi,\n>>\n>> I think we need to do something with having two patches aiming to add\n>> global temporary tables:\n>>\n>> [1] https://commitfest.postgresql.org/26/2349/\n>>\n>> [2] https://commitfest.postgresql.org/26/2233/\n>>\n>> As a reviewer I have no idea which of the threads to look at - certainly\n>> not without reading both threads, which I doubt anyone will really do.\n>> The reviews and discussions are somewhat intermixed between those two\n>> threads, which makes it even more confusing.\n>>\n>> I think we should agree on a minimal patch combining the necessary/good\n>> bits from the various patches, and terminate one of the threads (i.e.\n>> mark it as rejected or RWF). 
And we need to do that now, otherwise\n>> there's about 0% chance of getting this into v13.\n>>\n>> In general, I agree with the sentiment Rober expressed in [1] - the\n>> patch needs to be as small as possible, not adding \"nice to have\"\n>> features (like support for parallel queries - I very much doubt just\n>> using shared instead of local buffers is enough to make it work.)\n>>\n>> regards\n>>\n>> --\n>> Tomas Vondra http://www.2ndQuadrant.com\n>> <http://www.2ndquadrant.com/>\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>>\n>>\n>\n", "msg_date": "Tue, 21 Jan 2020 19:51:17 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月21日 下午1:43,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> Hi\n> \n> I have a free time this evening, so I will check this patch\n> \n> I have a one question\n> \n> +\t/* global temp table get relstats from localhash */\n> +\tif (RELATION_IS_GLOBAL_TEMP(rel))\n> +\t{\n> +\tget_gtt_relstats(RelationGetRelid(rel),\n> +\t&relpages, &reltuples, &relallvisible,\n> +\tNULL, NULL);\n> +\t}\n> +\telse\n> +\t{\n> +\t/* coerce values in pg_class to more desirable types */\n> +\trelpages = (BlockNumber) rel->rd_rel->relpages;\n> +\treltuples = (double) rel->rd_rel->reltuples;\n> +\trelallvisible = (BlockNumber) rel->rd_rel->relallvisible;\n> +\t}\n> \n> Isbn't possible to fill the rd_rel structure too, so this branching can be reduced?\nI'll make some improvements to optimize this part of the code.\n\n> \n> Regards\n> \n> Pavel\n> \n> po 20. 1. 2020 v 17:27 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n> > 2020年1月20日 上午1:32,Erik Rijkers <er@xs4all.nl <mailto:er@xs4all.nl>> 写道:\n> > \n> > On 2020-01-19 18:04, 曾文旌(义从) wrote:\n> >>> 2020年1月14日 下午9:20,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n> >>> út 14. 1. 
2020 v 14:09 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com> <mailto:wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>>> napsal:\n> > \n> >>> [global_temporary_table_v4-pg13.patch ]\n> > \n> > Hi,\n> > \n> > This patch doesn't quiet apply for me:\n> > \n> > patching file src/backend/access/common/reloptions.c\n> > patching file src/backend/access/gist/gistutil.c\n> > patching file src/backend/access/hash/hash.c\n> > Hunk #1 succeeded at 149 (offset 3 lines).\n> > patching file src/backend/access/heap/heapam_handler.c\n> > patching file src/backend/access/heap/vacuumlazy.c\n> > patching file src/backend/access/nbtree/nbtpage.c\n> > patching file src/backend/access/table/tableam.c\n> > patching file src/backend/access/transam/xlog.c\n> > patching file src/backend/catalog/Makefile\n> > Hunk #1 FAILED at 44.\n> > 1 out of 1 hunk FAILED -- saving rejects to file src/backend/catalog/Makefile.rej\n> > [...]\n> > (The rest applies without errors)\n> > \n> > src/backend/catalog/Makefile.rej contains:\n> > \n> > ------------------------\n> > --- src/backend/catalog/Makefile\n> > +++ src/backend/catalog/Makefile\n> > @@ -44,6 +44,8 @@ OBJS = \\\n> > storage.o \\\n> > toasting.o\n> > \n> > +OBJS += storage_gtt.o\n> > +\n> > BKIFILES = postgres.bki postgres.description postgres.shdescription\n> > \n> > include $(top_srcdir)/src/backend/common.mk <http://common.mk/>\n> > ------------------------\n> > \n> > Can you have a look?\n> I updated the code and remade the patch.\n> Please give me feedback if you have any more questions.\n> \n> \n> \n> \n> > \n> > \n> > thanks,\n> > \n> > Erik Rijkers\n> > \n> > \n> > \n> > \n> > \n> \n", "msg_date": "Wed, 22 Jan 2020 13:29:50 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月22日 上午2:51,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> út 21. 1. 2020 v 9:46 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n>> 2020年1月12日 上午4:27,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> Hi\n>> \n>> so 11. 1. 2020 v 15:00 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> Hi all\n>> \n>> This is the latest patch\n>> \n>> The updates are as follows:\n>> 1. Support global temp Inherit table global temp partition table\n>> 2. Support serial column in GTT\n>> 3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n>> 4. Provide view pg_gtt_attached_pids to manage GTT\n>> 5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n>> 6. Alter GTT or rename GTT is allowed under some conditions\n>> \n>> \n>> Please give me feedback.\n>> \n>> I tested the functionality\n>> \n>> 1. i think so \"ON COMMIT PRESERVE ROWS\" should be default mode (like local temp tables).\n> \n> ON COMMIT PRESERVE ROWS is default mode now.\n> \n> Thank you\n> \n> * I tried to create global temp table with index. 
When I tried to drop this table (and this table was used by second instance), then I got error message\n> \n> postgres=# drop table foo;\n> ERROR: can not drop index when other backend attached this global temp table\n> \n> It is expected, but it is not too much user friendly. Is better to check if you can drop table, then lock it, and then drop all objects.\nI don't understand what needs to be improved. Could you describe it in detail?\n\n> \n> * tab complete can be nice for CREATE GLOBAL TEMP table\nYes, I will improve it.\n> \n> \\dt+ \\di+ doesn't work correctly, or maybe I don't understand to the implementation.\n> \n\npostgres=# create table t(a int primary key);\nCREATE TABLE\npostgres=# create global temp table gt(a int primary key);\nCREATE TABLE\npostgres=# insert into t values(generate_series(1,10000));\nINSERT 0 10000\npostgres=# insert into gt values(generate_series(1,10000));\nINSERT 0 10000\n\npostgres=# \\dt+\n List of relations\n Schema | Name | Type | Owner | Persistence | Size | Description \n--------+------+-------+-------------+-------------+--------+-------------\n public | gt | table | wenjing.zwj | session | 384 kB | \n public | t | table | wenjing.zwj | permanent | 384 kB | \n(2 rows)\n\npostgres=# \\di+\n List of relations\n Schema | Name | Type | Owner | Table | Persistence | Size | Description \n--------+---------+-------+-------------+-------+-------------+--------+-------------\n public | gt_pkey | index | wenjing.zwj | gt | session | 240 kB | \n public | t_pkey | index | wenjing.zwj | t | permanent | 240 kB | \n(2 rows)\n\n\n> I see same size in all sessions. 
Global temp tables shares same files?\nNo, they use their own files.\nBut \\dt+ \\di+ counts the total file sizes in all sessions for each GTT.\n\n\n\nWenjing\n\n> \n> Regards\n> \n> Pavel\n> \n> \n> \n> \n> \n> Wenjing\n> \n> \n> \n>> \n>> I tested some simple scripts \n>> \n>> test01.sql\n>> \n>> CREATE TEMP TABLE foo(a int, b int);\n>> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n>> ANALYZE foo;\n>> SELECT sum(a), sum(b) FROM foo;\n>> DROP TABLE foo; -- simulate disconnect\n>> \n>> \n>> after 100 sec, the table pg_attribute has 3.2MB\n>> and 64 tps, 6446 transaction\n>> \n>> test02.sql\n>> \n>> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n>> ANALYZE foo;\n>> SELECT sum(a), sum(b) FROM foo;\n>> DELETE FROM foo; -- simulate disconnect\n>> \n>> \n>> after 100 sec, 1688 tps, 168830 transactions\n>> \n>> So performance is absolutely different as we expected.\n>> \n>> From my perspective, this functionality is great.\n>> \n>> Todo:\n>> \n>> pg_table_size function doesn't work\n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> \n>>> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> 写道:\n>>> \n>>> Hi,\n>>> \n>>> I think we need to do something with having two patches aiming to add\n>>> global temporary tables:\n>>> \n>>> [1] https://commitfest.postgresql.org/26/2349/ <https://commitfest.postgresql.org/26/2349/>\n>>> \n>>> [2] https://commitfest.postgresql.org/26/2233/ <https://commitfest.postgresql.org/26/2233/>\n>>> \n>>> As a reviewer I have no idea which of the threads to look at - certainly\n>>> not without reading both threads, which I doubt anyone will really do.\n>>> The reviews and discussions are somewhat intermixed between those two\n>>> threads, which makes it even more confusing.\n>>> \n>>> I think we should agree on a minimal patch combining the necessary/good\n>>> bits from the various patches, and terminate 
one of the threads (i.e.\n>>> mark it as rejected or RWF). And we need to do that now, otherwise\n>>> there's about 0% chance of getting this into v13.\n>>> \n>>> In general, I agree with the sentiment Rober expressed in [1] - the\n>>> patch needs to be as small as possible, not adding \"nice to have\"\n>>> features (like support for parallel queries - I very much doubt just\n>>> using shared instead of local buffers is enough to make it work.)\n>>> \n>>> regards\n>>> \n>>> -- \n>>> Tomas Vondra http://www.2ndQuadrant.com <http://www.2ndquadrant.com/>\n>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>> \n> \n", "msg_date": "Wed, 22 Jan 2020 14:16:10 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "st 22. 1. 2020 v 7:16 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> 2020年1月22日 上午2:51,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n>\n>\n> út 21. 1. 2020 v 9:46 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>>\n>>\n>> 2020年1月12日 上午4:27,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>>\n>> Hi\n>>\n>> so 11. 1. 2020 v 15:00 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n>> napsal:\n>>\n>>> Hi all\n>>>\n>>> This is the latest patch\n>>>\n>>> The updates are as follows:\n>>> 1. Support global temp Inherit table global temp partition table\n>>> 2. Support serial column in GTT\n>>> 3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n>>> 4. Provide view pg_gtt_attached_pids to manage GTT\n>>> 5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n>>> 6. Alter GTT or rename GTT is allowed under some conditions\n>>>\n>>>\n>>> Please give me feedback.\n>>>\n>>\n>> I tested the functionality\n>>\n>> 1. i think so \"ON COMMIT PRESERVE ROWS\" should be default mode (like\n>> local temp tables).\n>>\n>>\n>> ON COMMIT PRESERVE ROWS is default mode now.\n>>\n>\n> Thank you\n>\n> * I tried to create global temp table with index. 
When I tried to drop\n> this table (and this table was used by second instance), then I got error\n> message\n>\n> postgres=# drop table foo;\n> ERROR: can not drop index when other backend attached this global temp\n> table\n>\n> It is expected, but it is not too much user friendly. Is better to check\n> if you can drop table, then lock it, and then drop all objects.\n>\n> I don't understand what needs to be improved. Could you describe it in\n> detail?\n>\n\nthe error messages should be some like\n\ncan not drop table when other backend attached this global temp table.\n\nIt is little bit messy, when you try to drop table and you got message\nabout index\n\n\n>\n> * tab complete can be nice for CREATE GLOBAL TEMP table\n>\n> Yes, I will improve it.\n>\n>\n> \\dt+ \\di+ doesn't work correctly, or maybe I don't understand to the\n> implementation.\n>\n>\n> postgres=# create table t(a int primary key);\n> CREATE TABLE\n> postgres=# create global temp table gt(a int primary key);\n> CREATE TABLE\n> postgres=# insert into t values(generate_series(1,10000));\n> INSERT 0 10000\n> postgres=# insert into gt values(generate_series(1,10000));\n> INSERT 0 10000\n>\n> postgres=# \\dt+\n> List of relations\n> Schema | Name | Type | Owner | Persistence | Size | Description\n> --------+------+-------+-------------+-------------+--------+-------------\n> public | gt | table | wenjing.zwj | session | 384 kB |\n> public | t | table | wenjing.zwj | permanent | 384 kB |\n> (2 rows)\n>\n> postgres=# \\di+\n> List of relations\n> Schema | Name | Type | Owner | Table | Persistence | Size |\n> Description\n>\n> --------+---------+-------+-------------+-------+-------------+--------+-------------\n> public | gt_pkey | index | wenjing.zwj | gt | session | 240 kB |\n> public | t_pkey | index | wenjing.zwj | t | permanent | 240 kB |\n> (2 rows)\n>\n>\n> I see same size in all sessions. 
Global temp tables shares same files?\n>\n> No, they use their own files.\n> But \\dt+ \\di+ counts the total file sizes in all sessions for each GTT.\n>\n\nI think so it is wrong. The data are independent and the sizes should be\nindependent too\n\n\n>\n>\n> Wenjing\n>\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>>\n>> Wenjing\n>>\n>>\n>>\n>>\n>> I tested some simple scripts\n>>\n>> test01.sql\n>>\n>> CREATE TEMP TABLE foo(a int, b int);\n>> INSERT INTO foo SELECT random()*100, random()*1000 FROM\n>> generate_series(1,1000);\n>> ANALYZE foo;\n>> SELECT sum(a), sum(b) FROM foo;\n>> DROP TABLE foo; -- simulate disconnect\n>>\n>>\n>> after 100 sec, the table pg_attribute has 3.2MB\n>> and 64 tps, 6446 transaction\n>>\n>> test02.sql\n>>\n>> INSERT INTO foo SELECT random()*100, random()*1000 FROM\n>> generate_series(1,1000);\n>> ANALYZE foo;\n>> SELECT sum(a), sum(b) FROM foo;\n>> DELETE FROM foo; -- simulate disconnect\n>>\n>>\n>> after 100 sec, 1688 tps, 168830 transactions\n>>\n>> So performance is absolutely different as we expected.\n>>\n>> From my perspective, this functionality is great.\n>>\n>> Todo:\n>>\n>> pg_table_size function doesn't work\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>> Wenjing\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com> 写道:\n>>>\n>>> Hi,\n>>>\n>>> I think we need to do something with having two patches aiming to add\n>>> global temporary tables:\n>>>\n>>> [1] https://commitfest.postgresql.org/26/2349/\n>>>\n>>> [2] https://commitfest.postgresql.org/26/2233/\n>>>\n>>> As a reviewer I have no idea which of the threads to look at - certainly\n>>> not without reading both threads, which I doubt anyone will really do.\n>>> The reviews and discussions are somewhat intermixed between those two\n>>> threads, which makes it even more confusing.\n>>>\n>>> I think we should agree on a minimal patch combining the necessary/good\n>>> bits from the various patches, and terminate one of the threads (i.e.\n>>> mark it 
as rejected or RWF). And we need to do that now, otherwise\n>>> there's about 0% chance of getting this into v13.\n>>>\n>>> In general, I agree with the sentiment Rober expressed in [1] - the\n>>> patch needs to be as small as possible, not adding "nice to have"\n>>> features (like support for parallel queries - I very much doubt just\n>>> using shared instead of local buffers is enough to make it work.)\n>>>\n>>> regards\n>>>\n>>> --\n>>> Tomas Vondra http://www.2ndQuadrant.com\n>>> <http://www.2ndquadrant.com/>\n>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>>\n>>>\n>>>\n>>\n>\n", "msg_date": "Wed, 22 Jan 2020 07:31:00 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月22日 下午2:31,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> st 22. 1. 2020 v 7:16 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n>> 2020年1月22日 上午2:51,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> \n>> \n>> út 21. 1. 2020 v 9:46 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> \n>> \n>>> 2020年1月12日 上午4:27,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>>> \n>>> Hi\n>>> \n>>> so 11. 1. 2020 v 15:00 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>>> Hi all\n>>> \n>>> This is the latest patch\n>>> \n>>> The updates are as follows:\n>>> 1. Support global temp Inherit table global temp partition table\n>>> 2. Support serial column in GTT\n>>> 3. Provide views pg_gtt_relstats pg_gtt_stats for GTT’s statistics\n>>> 4. Provide view pg_gtt_attached_pids to manage GTT\n>>> 5. Provide function pg_list_gtt_relfrozenxids() to manage GTT\n>>> 6. Alter GTT or rename GTT is allowed under some conditions\n>>> \n>>> \n>>> Please give me feedback.\n>>> \n>>> I tested the functionality\n>>> \n>>> 1. 
i think so \"ON COMMIT PRESERVE ROWS\" should be default mode (like local temp tables).\n>> \n>> ON COMMIT PRESERVE ROWS is default mode now.\n>> \n>> Thank you\n>> \n>> * I tried to create global temp table with index. When I tried to drop this table (and this table was used by second instance), then I got error message\n>> \n>> postgres=# drop table foo;\n>> ERROR: can not drop index when other backend attached this global temp table\n>> \n>> It is expected, but it is not too much user friendly. Is better to check if you can drop table, then lock it, and then drop all objects.\n> I don't understand what needs to be improved. Could you describe it in detail?\n> \n> the error messages should be some like \n> \n> can not drop table when other backend attached this global temp table.\n> \n> It is little bit messy, when you try to drop table and you got message about index\nIt has been repaired in global_temporary_table_v7-pg13.patch\n\n> \n> \n>> \n>> * tab complete can be nice for CREATE GLOBAL TEMP table\n> Yes, I will improve it.\nIt has been repaired in global_temporary_table_v7-pg13.patch\n\n>> \n>> \\dt+ \\di+ doesn't work correctly, or maybe I don't understand to the implementation.\n>> \n> \n> postgres=# create table t(a int primary key);\n> CREATE TABLE\n> postgres=# create global temp table gt(a int primary key);\n> CREATE TABLE\n> postgres=# insert into t values(generate_series(1,10000));\n> INSERT 0 10000\n> postgres=# insert into gt values(generate_series(1,10000));\n> INSERT 0 10000\n> \n> postgres=# \\dt+\n> List of relations\n> Schema | Name | Type | Owner | Persistence | Size | Description \n> --------+------+-------+-------------+-------------+--------+-------------\n> public | gt | table | wenjing.zwj | session | 384 kB | \n> public | t | table | wenjing.zwj | permanent | 384 kB | \n> (2 rows)\n> \n> postgres=# \\di+\n> List of relations\n> Schema | Name | Type | Owner | Table | Persistence | Size | Description \n> 
--------+---------+-------+-------------+-------+-------------+--------+-------------\n> public | gt_pkey | index | wenjing.zwj | gt | session | 240 kB | \n> public | t_pkey | index | wenjing.zwj | t | permanent | 240 kB | \n> (2 rows)\n> \n> \n>> I see same size in all sessions. Global temp tables shares same files?\n> No, they use their own files.\n> But \\dt+ \\di+ counts the total file sizes in all sessions for each GTT.\n> \n> I think so it is wrong. The data are independent and the sizes should be independent too\nIt has been repaired in global_temporary_table_v7-pg13.patch.\n\n\nWenjing\n\n\n\n\n> \n> \n> \n> \n> Wenjing\n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> \n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>>> \n>>> I tested some simple scripts \n>>> \n>>> test01.sql\n>>> \n>>> CREATE TEMP TABLE foo(a int, b int);\n>>> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n>>> ANALYZE foo;\n>>> SELECT sum(a), sum(b) FROM foo;\n>>> DROP TABLE foo; -- simulate disconnect\n>>> \n>>> \n>>> after 100 sec, the table pg_attribute has 3.2MB\n>>> and 64 tps, 6446 transaction\n>>> \n>>> test02.sql\n>>> \n>>> INSERT INTO foo SELECT random()*100, random()*1000 FROM generate_series(1,1000);\n>>> ANALYZE foo;\n>>> SELECT sum(a), sum(b) FROM foo;\n>>> DELETE FROM foo; -- simulate disconnect\n>>> \n>>> \n>>> after 100 sec, 1688 tps, 168830 transactions\n>>> \n>>> So performance is absolutely different as we expected.\n>>> \n>>> From my perspective, this functionality is great.\n>>> \n>>> Todo:\n>>> \n>>> pg_table_size function doesn't work\n>>> \n>>> Regards\n>>> \n>>> Pavel\n>>> \n>>> \n>>> Wenjing\n>>> \n>>> \n>>> \n>>> \n>>> \n>>>> 2020年1月6日 上午4:06,Tomas Vondra <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> 写道:\n>>>> \n>>>> Hi,\n>>>> \n>>>> I think we need to do something with having two patches aiming to add\n>>>> global temporary tables:\n>>>> \n>>>> [1] https://commitfest.postgresql.org/26/2349/ 
<https://commitfest.postgresql.org/26/2349/>\n>>>> \n>>>> [2] https://commitfest.postgresql.org/26/2233/ <https://commitfest.postgresql.org/26/2233/>\n>>>> \n>>>> As a reviewer I have no idea which of the threads to look at - certainly\n>>>> not without reading both threads, which I doubt anyone will really do.\n>>>> The reviews and discussions are somewhat intermixed between those two\n>>>> threads, which makes it even more confusing.\n>>>> \n>>>> I think we should agree on a minimal patch combining the necessary/good\n>>>> bits from the various patches, and terminate one of the threads (i.e.\n>>>> mark it as rejected or RWF). And we need to do that now, otherwise\n>>>> there's about 0% chance of getting this into v13.\n>>>> \n>>>> In general, I agree with the sentiment Rober expressed in [1] - the\n>>>> patch needs to be as small as possible, not adding \"nice to have\"\n>>>> features (like support for parallel queries - I very much doubt just\n>>>> using shared instead of local buffers is enough to make it work.)\n>>>> \n>>>> regards\n>>>> \n>>>> -- \n>>>> Tomas Vondra http://www.2ndQuadrant.com <http://www.2ndquadrant.com/>\n>>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>> \n>> \n>", "msg_date": "Fri, 24 Jan 2020 00:22:31 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月22日 下午1:29,曾文旌(义从) <wenjing.zwj@alibaba-inc.com> 写道:\n> \n> \n> \n>> 2020年1月21日 下午1:43,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> Hi\n>> \n>> I have a free time this evening, so I will check this patch\n>> \n>> I have a one question\n>> \n>> +\t/* global temp table get relstats from localhash */\n>> +\tif (RELATION_IS_GLOBAL_TEMP(rel))\n>> +\t{\n>> +\tget_gtt_relstats(RelationGetRelid(rel),\n>> +\t&relpages, &reltuples, &relallvisible,\n>> +\tNULL, NULL);\n>> +\t}\n>> 
+\telse\n>> +\t{\n>> +\t/* coerce values in pg_class to more desirable types */\n>> +\trelpages = (BlockNumber) rel->rd_rel->relpages;\n>> +\treltuples = (double) rel->rd_rel->reltuples;\n>> +\trelallvisible = (BlockNumber) rel->rd_rel->relallvisible;\n>> +\t}\n>> \n>> Isn't it possible to fill the rd_rel structure too, so this branching can be reduced?\n> I'll make some improvements to optimize this part of the code.\nI'm trying to improve this part of the implementation in global_temporary_table_v7-pg13.patch\nPlease check my patch and give me feedback.\n\n\nThanks\n\nWenjing\n\n\n\n\n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> po 20. 1. 2020 v 17:27 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> \n>> \n>> \n>> > 2020年1月20日 上午1:32,Erik Rijkers <er@xs4all.nl <mailto:er@xs4all.nl>> 写道:\n>> > \n>> > On 2020-01-19 18:04, 曾文旌(义从) wrote:\n>> >>> 2020年1月14日 下午9:20,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> >>> út 14. 1. 
2020 v 14:09 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com> <mailto:wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>>> napsal:\n>> > \n>> >>> [global_temporary_table_v4-pg13.patch ]\n>> > \n>> > Hi,\n>> > \n>> > This patch doesn't quiet apply for me:\n>> > \n>> > patching file src/backend/access/common/reloptions.c\n>> > patching file src/backend/access/gist/gistutil.c\n>> > patching file src/backend/access/hash/hash.c\n>> > Hunk #1 succeeded at 149 (offset 3 lines).\n>> > patching file src/backend/access/heap/heapam_handler.c\n>> > patching file src/backend/access/heap/vacuumlazy.c\n>> > patching file src/backend/access/nbtree/nbtpage.c\n>> > patching file src/backend/access/table/tableam.c\n>> > patching file src/backend/access/transam/xlog.c\n>> > patching file src/backend/catalog/Makefile\n>> > Hunk #1 FAILED at 44.\n>> > 1 out of 1 hunk FAILED -- saving rejects to file src/backend/catalog/Makefile.rej\n>> > [...]\n>> > (The rest applies without errors)\n>> > \n>> > src/backend/catalog/Makefile.rej contains:\n>> > \n>> > ------------------------\n>> > --- src/backend/catalog/Makefile\n>> > +++ src/backend/catalog/Makefile\n>> > @@ -44,6 +44,8 @@ OBJS = \\\n>> > storage.o \\\n>> > toasting.o\n>> > \n>> > +OBJS += storage_gtt.o\n>> > +\n>> > BKIFILES = postgres.bki postgres.description postgres.shdescription\n>> > \n>> > include $(top_srcdir)/src/backend/common.mk <http://common.mk/>\n>> > ------------------------\n>> > \n>> > Can you have a look?\n>> I updated the code and remade the patch.\n>> Please give me feedback if you have any more questions.\n>> \n>> \n>> \n>> \n>> > \n>> > \n>> > thanks,\n>> > \n>> > Erik Rijkers\n>> > \n>> > \n>> > \n>> > \n>> > \n>> \n>", "msg_date": "Fri, 24 Jan 2020 00:28:05 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "čt 23. 1. 
2020 v 17:28 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> 2020年1月22日 下午1:29,曾文旌(义从) <wenjing.zwj@alibaba-inc.com> 写道:\n>\n>\n>\n> 2020年1月21日 下午1:43,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n> Hi\n>\n> I have free time this evening, so I will check this patch\n>\n> I have one question\n>\n> + /* global temp table get relstats from localhash */\n> + if (RELATION_IS_GLOBAL_TEMP(rel))\n> + {\n> + get_gtt_relstats(RelationGetRelid(rel),\n> + &relpages, &reltuples, &relallvisible,\n> + NULL, NULL);\n> + }\n> + else\n> + {\n> + /* coerce values in pg_class to more desirable types */\n> + relpages = (BlockNumber) rel->rd_rel->relpages;\n> + reltuples = (double) rel->rd_rel->reltuples;\n> + relallvisible = (BlockNumber) rel->rd_rel->relallvisible;\n> + }\n>\n> Isn't it possible to fill the rd_rel structure too, so this branching can be\n> reduced?\n>\n> I'll make some improvements to optimize this part of the code.\n>\n> I'm trying to improve this part of the implementation in\n> global_temporary_table_v7-pg13.patch\n> Please check my patch and give me feedback.\n>\n>\nIt is looking better, still there are some strange things (I didn't test\nfunctionality yet)\n\n elog(ERROR, "invalid relpersistence: %c",\n relation->rd_rel->relpersistence);\n@@ -3313,6 +3336,10 @@ RelationBuildLocalRelation(const char *relname,\n rel->rd_backend = BackendIdForTempRelations();\n rel->rd_islocaltemp = true;\n break;\n+ case RELPERSISTENCE_GLOBAL_TEMP:\n+ rel->rd_backend = BackendIdForTempRelations();\n+ rel->rd_islocaltemp = true;\n+ break;\n default:\n\n+ rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of\nfield "rd_islocaltemp" is probably not the best\n\n\n\nregards\n\nPavel\n\n\n\n\n>\n> Thanks\n>\n> Wenjing\n>\n>\n>\n>\n>\n> Regards\n>\n> Pavel\n>\n> po 20. 1. 
2020 v 17:27 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>>\n>>\n>> > 2020年1月20日 上午1:32,Erik Rijkers <er@xs4all.nl> 写道:\n>> >\n>> > On 2020-01-19 18:04, 曾文旌(义从) wrote:\n>> >>> 2020年1月14日 下午9:20,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>> >>> út 14. 1. 2020 v 14:09 odesílatel 曾文旌(义从) <\n>> wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> >\n>> >>> [global_temporary_table_v4-pg13.patch ]\n>> >\n>> > Hi,\n>> >\n>> > This patch doesn't quiet apply for me:\n>> >\n>> > patching file src/backend/access/common/reloptions.c\n>> > patching file src/backend/access/gist/gistutil.c\n>> > patching file src/backend/access/hash/hash.c\n>> > Hunk #1 succeeded at 149 (offset 3 lines).\n>> > patching file src/backend/access/heap/heapam_handler.c\n>> > patching file src/backend/access/heap/vacuumlazy.c\n>> > patching file src/backend/access/nbtree/nbtpage.c\n>> > patching file src/backend/access/table/tableam.c\n>> > patching file src/backend/access/transam/xlog.c\n>> > patching file src/backend/catalog/Makefile\n>> > Hunk #1 FAILED at 44.\n>> > 1 out of 1 hunk FAILED -- saving rejects to file\n>> src/backend/catalog/Makefile.rej\n>> > [...]\n>> > (The rest applies without errors)\n>> >\n>> > src/backend/catalog/Makefile.rej contains:\n>> >\n>> > ------------------------\n>> > --- src/backend/catalog/Makefile\n>> > +++ src/backend/catalog/Makefile\n>> > @@ -44,6 +44,8 @@ OBJS = \\\n>> > storage.o \\\n>> > toasting.o\n>> >\n>> > +OBJS += storage_gtt.o\n>> > +\n>> > BKIFILES = postgres.bki postgres.description postgres.shdescription\n>> >\n>> > include $(top_srcdir)/src/backend/common.mk\n>> > ------------------------\n>> >\n>> > Can you have a look?\n>> I updated the code and remade the patch.\n>> Please give me feedback if you have any more questions.\n>>\n>>\n>>\n>>\n>> >\n>> >\n>> > thanks,\n>> >\n>> > Erik Rijkers\n>> >\n>> >\n>> >\n>> >\n>> >\n>>\n>>\n>\n>\n\nčt 23. 1. 
", "msg_date": "Thu, 23 Jan 2020 18:21:42 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I proposed just ignoring those new indexes because it seems much simpler\n> than alternative solutions that I can think of, and it's not like those\n> other solutions don't have other issues.\n\n+1.\n\n> For 
example, I've looked at the \"on demand\" building as implemented in\n> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n> calls into various places in index code seems somewht suspicious.\n\n+1. I can't imagine that's a safe or sane thing to do.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 23 Jan 2020 15:47:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 23.01.2020 19:28, 曾文旌(义从) wrote:\n>\n> I'm trying to improve this part of the implementation in \n> global_temporary_table_v7-pg13.patch\n> Please check my patch and give me feedback.\n>\n>\n> Thanks\n>\n> Wenjing\n>\n>\n\nBelow is my short review of the patch:\n\n+    /*\n+     * For global temp table only\n+     * use AccessExclusiveLock for ensure safety\n+     */\n+    {\n+        {\n+            \"on_commit_delete_rows\",\n+            \"global temp table on commit options\",\n+            RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n+            ShareUpdateExclusiveLock\n+        },\n+        true\n+    },\n\n\nThe comment seems to be confusing: it says about AccessExclusiveLock but \nactually uses ShareUpdateExclusiveLock.\n\n- Assert(TransactionIdIsNormal(onerel->rd_rel->relfrozenxid));\n-    Assert(MultiXactIdIsValid(onerel->rd_rel->relminmxid));\n+    Assert((RELATION_IS_GLOBAL_TEMP(onerel) && \nonerel->rd_rel->relfrozenxid == InvalidTransactionId) ||\n+        (!RELATION_IS_GLOBAL_TEMP(onerel) && \nTransactionIdIsNormal(onerel->rd_rel->relfrozenxid)));\n+    Assert((RELATION_IS_GLOBAL_TEMP(onerel) && \nonerel->rd_rel->relminmxid == InvalidMultiXactId) ||\n+        (!RELATION_IS_GLOBAL_TEMP(onerel) && \nMultiXactIdIsValid(onerel->rd_rel->relminmxid)));\n\nIt is actually equivalent to:\n\nAssert(RELATION_IS_GLOBAL_TEMP(onerel) ^ 
\nTransactionIdIsNormal(onerel->rd_rel->relfrozenxid);\nAssert(RELATION_IS_GLOBAL_TEMP(onerel) ^ \nMultiXactIdIsValid(onerel->rd_rel->relminmxid));\n\n+    /* clean temp relation files */\n+    if (max_active_gtt > 0)\n+        RemovePgTempFiles();\n+\n      /*\n\nI wonder why do we need some special check for GTT here.\n From my point of view cleanup at startup of local storage of temp \ntables should be performed in the same way for local and global temp tables.\n\n\n-    new_rel_reltup->relfrozenxid = relfrozenxid;\n-    new_rel_reltup->relminmxid = relminmxid;\n+    /* global temp table not remember transaction info in catalog */\n+    if (relpersistence == RELPERSISTENCE_GLOBAL_TEMP)\n+    {\n+        new_rel_reltup->relfrozenxid = InvalidTransactionId;\n+        new_rel_reltup->relminmxid = InvalidMultiXactId;\n+    }\n+    else\n+    {\n+        new_rel_reltup->relfrozenxid = relfrozenxid;\n+        new_rel_reltup->relminmxid = relminmxid;\n+    }\n+\n\n\nWhy do we need to do it for GTT?\nDid you check that there will be no problems with GTT in case of XID \nwraparound?\nRight now if you create temp table and keep session open, then it will \nblock XID wraparound.\n\n+    /* We allow to drop global temp table only this session use it */\n+    if (RELATION_IS_GLOBAL_TEMP(rel))\n+    {\n+        if (is_other_backend_use_gtt(rel->rd_node))\n+            elog(ERROR, \"can not drop relation when other backend \nattached this global temp table\");\n+    }\n+\n\nHere we once again introduce incompatibility with normal (permanent) tables.\nAssume that DBA or programmer need to change format of GTT. 
But there \nare some active sessions which have used this GTT sometime in the past.\nWe will not be able to drop this GTT until all these sessions are terminated.\nI do not think that it is acceptable behaviour.\n\n+        LOCKMODE    lockmode = AccessExclusiveLock;\n+\n+        /* truncate global temp table only need RowExclusiveLock */\n+        if (get_rel_persistence(rid) == RELPERSISTENCE_GLOBAL_TEMP)\n+            lockmode = RowExclusiveLock;\n\n\nWhat are the reasons for using RowExclusiveLock for GTT instead of \nAccessExclusiveLock?\nYes, GTT data is accessed only by one backend so no locking here seems to \nbe needed at all.\nBut I wonder what are the motivations/benefits of using weaker lock \nlevel here?\nThere should be no conflicts in any case...\n\n+        /* We allow to create index on global temp table only this \nsession use it */\n+        if (is_other_backend_use_gtt(heapRelation->rd_node))\n+            elog(ERROR, "can not create index when have other backend \nattached this global temp table");\n+\n\nThe same argument as in case of dropping GTT: I do not think that \nprohibiting DDL operations on GTT used by more than one backend is a good idea.\n\n+    /* global temp table not support foreign key constraint yet */\n+    if (RELATION_IS_GLOBAL_TEMP(pkrel))\n+        ereport(ERROR,\n+                (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+                 errmsg("referenced relation \"%s\" is not a global \ntemp table",\n+                        RelationGetRelationName(pkrel))));\n+\n\nWhy do we need to prohibit foreign key constraints on GTT?\n\n+    /*\n+     * Global temp table get frozenxid from MyProc\n+     * to avoid the vacuum truncate clog that gtt need.\n+     */\n+    if (max_active_gtt > 0)\n+    {\n+        TransactionId oldest_gtt_frozenxid =\n+            list_all_session_gtt_frozenxids(0, NULL, NULL, NULL);\n+\n+        if (TransactionIdIsNormal(oldest_gtt_frozenxid) &&\n+            TransactionIdPrecedes(oldest_gtt_frozenxid, 
newFrozenXid))\n+        {\n+            ereport(WARNING,\n+                (errmsg(\"global temp table oldest FrozenXid is far in \nthe past\"),\n+                 errhint(\"please truncate them or kill those sessions \nthat use them.\")));\n+            newFrozenXid = oldest_gtt_frozenxid;\n+        }\n+    }\n+\n\nAs far as I understand, content of GTT will never be processes by \nautovacuum.\nSo who will update frozenxid of GTT?\nI see that up_gtt_relstats is invoked when:\n- index is created on GTT\n- GTT is truncated\n- GTT is vacuumed\nSo unless GTT is explicitly vacuumed by user, its GTT is and them will \nnot be taken in account\nwhen computing new frozen xid value. Autovacumm will produce this \nwarnings (which will ton be visible by end user and only appended to the \nlog).\nAnd at some moment of time wrap around happen and if there still some \nold active GTT, we will get incorrect results.\n\n\n\n-- \n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 23.01.2020 19:28, 曾文旌(义从) wrote:\n\n\n\n\n\n\n\n\n\nI'm trying to improve this part of the implementation in global_temporary_table_v7-pg13.patch\nPlease check my patch and give me feedback.\n\n\n\n\nThanks\n\n\nWenjing\n\n\n\n\n\n\n Below is my short review of the patch:\n\n +    /*\n +     * For global temp table only\n +     * use AccessExclusiveLock for ensure safety\n +     */\n +    {\n +        {\n +            \"on_commit_delete_rows\",\n +            \"global temp table on commit options\",\n +            RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n +            ShareUpdateExclusiveLock\n +        },\n +        true\n +    },    \n\n\n The comment seems to be confusing: it says about AccessExclusiveLock\n but actually uses ShareUpdateExclusiveLock.\n\n -   \n Assert(TransactionIdIsNormal(onerel->rd_rel->relfrozenxid));\n -    Assert(MultiXactIdIsValid(onerel->rd_rel->relminmxid));\n +    
Assert((RELATION_IS_GLOBAL_TEMP(onerel) &&\n onerel->rd_rel->relfrozenxid == InvalidTransactionId) ||\n +        (!RELATION_IS_GLOBAL_TEMP(onerel) &&\n TransactionIdIsNormal(onerel->rd_rel->relfrozenxid)));\n +    Assert((RELATION_IS_GLOBAL_TEMP(onerel) &&\n onerel->rd_rel->relminmxid == InvalidMultiXactId) ||\n +        (!RELATION_IS_GLOBAL_TEMP(onerel) &&\n MultiXactIdIsValid(onerel->rd_rel->relminmxid)));\n  \n It is actually equivalent to:\n\n Assert(RELATION_IS_GLOBAL_TEMP(onerel) ^\n TransactionIdIsNormal(onerel->rd_rel->relfrozenxid);\n Assert(RELATION_IS_GLOBAL_TEMP(onerel) ^\n MultiXactIdIsValid(onerel->rd_rel->relminmxid));\n\n +    /* clean temp relation files */\n +    if (max_active_gtt > 0)\n +        RemovePgTempFiles();\n +\n      /*\n  \n I wonder why do we need some special check for GTT here.\n From my point of view cleanup at startup of local storage of temp\n tables should be performed in the same way for local and global temp\n tables.\n\n\n -    new_rel_reltup->relfrozenxid = relfrozenxid;\n -    new_rel_reltup->relminmxid = relminmxid;\n +    /* global temp table not remember transaction info in catalog\n */\n +    if (relpersistence == RELPERSISTENCE_GLOBAL_TEMP)\n +    {\n +        new_rel_reltup->relfrozenxid = InvalidTransactionId;\n +        new_rel_reltup->relminmxid = InvalidMultiXactId;\n +    }\n +    else\n +    {\n +        new_rel_reltup->relfrozenxid = relfrozenxid;\n +        new_rel_reltup->relminmxid = relminmxid;\n +    }\n +\n\n\n Why do we need to do it for GTT?\n Did you check that there will be no problems with GTT in case of XID\n wraparound?\n Right now if you create temp table and keep session open, then it\n will block XID wraparound.\n\n +    /* We allow to drop global temp table only this session use it\n */\n +    if (RELATION_IS_GLOBAL_TEMP(rel))\n +    {\n +        if (is_other_backend_use_gtt(rel->rd_node))\n +            elog(ERROR, \"can not drop relation when other backend\n attached this global temp 
table\");\n +    }\n +\n\n Here we once again introduce incompatibility with normal (permanent)\n tables.\n Assume that DBA or programmer need to change format of GTT. But\n there are some active sessions which have used this GTT sometime in\n the past.\n We will not be able to drop this GTT until all this sessions are\n terminated.\n I do not think that it is acceptable behaviour.\n\n +        LOCKMODE    lockmode = AccessExclusiveLock;\n +\n +        /* truncate global temp table only need RowExclusiveLock */\n +        if (get_rel_persistence(rid) == RELPERSISTENCE_GLOBAL_TEMP)\n +            lockmode = RowExclusiveLock;\n\n\n What are the reasons of using RowExclusiveLock for GTT instead of\n AccessExclusiveLock?\n Yes, GTT data is access only by one backend so no locking here seems\n to be needed at all.\n But I wonder what are the motivations/benefits of using weaker lock\n level here?\n There should be no conflicts in any case...\n\n +        /* We allow to create index on global temp table only this\n session use it */\n +        if (is_other_backend_use_gtt(heapRelation->rd_node))\n +            elog(ERROR, \"can not create index when have other\n backend attached this global temp table\");\n +\n\n The same argument as in case of dropping GTT: I do not think that\n prohibiting DLL operations on GTT used by more than one backend is\n bad idea.\n\n +    /* global temp table not support foreign key constraint yet */\n +    if (RELATION_IS_GLOBAL_TEMP(pkrel))\n +        ereport(ERROR,\n +                (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n +                 errmsg(\"referenced relation \\\"%s\\\" is not a global\n temp table\",\n +                        RelationGetRelationName(pkrel))));\n +\n\n Why do we need to prohibit foreign key constraint on GTT?\n\n +    /*\n +     * Global temp table get frozenxid from MyProc\n +     * to avoid the vacuum truncate clog that gtt need.\n +     */\n +    if (max_active_gtt > 0)\n +    {\n +        TransactionId 
oldest_gtt_frozenxid =\n +            list_all_session_gtt_frozenxids(0, NULL, NULL, NULL);\n +\n +        if (TransactionIdIsNormal(oldest_gtt_frozenxid) &&\n +            TransactionIdPrecedes(oldest_gtt_frozenxid,\n newFrozenXid))\n +        {\n +            ereport(WARNING,\n +                (errmsg(\"global temp table oldest FrozenXid is far\n in the past\"),\n +                 errhint(\"please truncate them or kill those\n sessions that use them.\")));\n +            newFrozenXid = oldest_gtt_frozenxid;\n +        }\n +    }\n +\n\n As far as I understand, content of GTT will never be processes by\n autovacuum.\n So who will update frozenxid of GTT?\n I see that up_gtt_relstats is invoked when:\n - index is created on GTT\n - GTT is truncated\n - GTT is vacuumed\n So unless GTT is explicitly vacuumed by user, its GTT is and them\n will not be taken in account \n when computing new frozen xid value. Autovacumm will produce this\n warnings (which will ton be visible by end user and only appended to\n the log).\n And at some moment of time wrap around happen and if there still\n some old active GTT, we will get incorrect results.\n\n\n\n --\n Konstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 24 Jan 2020 11:20:09 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 23.01.2020 23:47, Robert Haas wrote:\n> On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> I proposed just ignoring those new indexes because it seems much simpler\n>> than alternative solutions that I can think of, and it's not like those\n>> other solutions don't have other issues.\n> +1.\n>\n>> For example, I've looked at the \"on demand\" building as implemented in\n>> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n>> calls into various 
places in index code seems somewhat suspicious.\n> +1. I can't imagine that's a safe or sane thing to do.\n>\n\nAs far as you know, there are two versions of the GTT implementation now,\nand we are going to merge them into a single patch.\nBut there is a principal question concerning the provided functionality \nwhich has to be discussed:\nshould we prohibit DDL on a GTT if more than one session is using \nit? It includes creating/dropping indexes, dropping the table, altering the table...\n\nIf the answer is \"yes\", then the question whether to populate new \nindexes with data is not relevant at all, because such a situation will not \nbe possible.\nBut in this case we get behavior incompatible with normal \n(permanent) tables, and it seems very inconvenient from the DBA's point \nof view:\nit will be necessary to force all clients to close their sessions in order to \nperform some DDL manipulations on a GTT.\nSome DDL will be very difficult to implement if the GTT is used by more \nthan one backend, for example altering the table schema.\n\nMy current solution is to allow creating/dropping indexes on a GTT and \ndropping the table itself, while prohibiting schema alteration for GTT entirely.\nWenjing's approach is to prohibit any DDL if the GTT is used by more than \none backend.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 24 Jan 2020 11:39:40 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "pá 24. 1. 
2020 v 9:39 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 23.01.2020 23:47, Robert Haas wrote:\n> > On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com> wrote:\n> >> I proposed just ignoring those new indexes because it seems much simpler\n> >> than alternative solutions that I can think of, and it's not like those\n> >> other solutions don't have other issues.\n> > +1.\n> >\n> >> For example, I've looked at the \"on demand\" building as implemented in\n> >> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n> >> calls into various places in index code seems somewht suspicious.\n> > +1. I can't imagine that's a safe or sane thing to do.\n> >\n>\n> As far as you know there are two versions of GTT implementations now.\n> And we are going to merge them into single patch.\n> But there are some principle question concerning provided functionality\n> which has to be be discussed:\n> should we prohibit DDL on GTT if there are more than one sessions using\n> it. 
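A review point earlier in the thread questions the patch's downgrade of the TRUNCATE lock to RowExclusiveLock for global temp tables. The decision itself is just a branch on relpersistence; the standalone sketch below mirrors that branch using stand-in constants (RELPERSISTENCE_GLOBAL_TEMP is the patch's addition, not an existing PostgreSQL symbol, and this is an illustration rather than the patch's actual code):

```c
#include <assert.h>

/* Stand-ins for PostgreSQL's lock levels (storage/lockdefs.h) and
 * relpersistence codes; RELPERSISTENCE_GLOBAL_TEMP is hypothetical,
 * taken from the patch under discussion. */
typedef int LOCKMODE;
#define RowExclusiveLock     3
#define AccessExclusiveLock  8

#define RELPERSISTENCE_PERMANENT   'p'
#define RELPERSISTENCE_GLOBAL_TEMP 'g'

/* TRUNCATE on a GTT touches only the calling session's private storage,
 * so the patch argues a weaker lock suffices; a permanent table keeps
 * AccessExclusiveLock because truncation rewrites storage shared by all. */
static LOCKMODE truncate_lockmode(char relpersistence)
{
    if (relpersistence == RELPERSISTENCE_GLOBAL_TEMP)
        return RowExclusiveLock;
    return AccessExclusiveLock;
}
```

As Konstantin notes, correctness does not strictly require even RowExclusiveLock here; the weaker level mainly keeps TRUNCATE on a GTT from blocking concurrent use of the shared catalog entry.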
It includes creation/dropping indexes, dropping table, altering\n> table...\n>\n> If the answer is \"yes\", then the question whether to populate new\n> indexes with data is no relevant at all, because such situation will not\n> be possible.\n> But in this case we will get incompatible behavior with normal\n> (permanent) tables and it seems to be very inconvenient from DBA point\n> of view:\n> it will be necessary to enforce all clients to close their sessions to\n> perform some DDL manipulations with GTT.\n> Some DDLs will be very difficult to implement if GTT is used by more\n> than one backend, for example altering table schema.\n>\n> My current solution is to allow creation/droping index on GTT and\n> dropping table itself, while prohibit alter schema at all for GTT.\n> Wenjing's approach is to prohibit any DDL if GTT is used by more than\n> one backend.\n>\n\nWhen I create index on GTT in one session, then I don't expect creating\nsame index in all other sessions that uses same GTT.\n\nBut I can imagine to creating index on GTT enforces index in current\nsession, and for other sessions this index will be invalid to end of\nsession.\n\nRegards\n\nPavel\n\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\npá 24. 1. 2020 v 9:39 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n\nOn 23.01.2020 23:47, Robert Haas wrote:\n> On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> I proposed just ignoring those new indexes because it seems much simpler\n>> than alternative solutions that I can think of, and it's not like those\n>> other solutions don't have other issues.\n> +1.\n>\n>> For example, I've looked at the \"on demand\" building as implemented in\n>> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n>> calls into various places in index code seems somewht suspicious.\n> +1. 
I can't imagine that's a safe or sane thing to do.\n>\n\nAs far as you know there are two versions of GTT implementations now.\nAnd we are going to merge them into single patch.\nBut there are some principle question concerning provided functionality \nwhich has to be be discussed:\nshould we prohibit DDL on GTT if there are more than one sessions using \nit. It includes creation/dropping indexes, dropping table, altering table...\n\nIf the answer is \"yes\", then the question whether to populate new \nindexes with data is no relevant at all, because such situation will not \nbe possible.\nBut in this case we will get incompatible behavior with normal \n(permanent) tables and it seems to be very inconvenient from DBA point \nof view:\nit will be necessary to enforce all clients to close their sessions to \nperform some DDL manipulations with GTT.\nSome DDLs will be very difficult to implement if GTT is used by more \nthan one backend, for example altering table schema.\n\nMy current solution is to allow creation/droping index on GTT and \ndropping table itself, while prohibit alter schema at all for GTT.\nWenjing's approach is to prohibit any DDL if GTT is used by more than \none backend.When I create index on GTT in one session, then I don't expect creating same index in all other sessions that uses same GTT.But I can imagine to creating index on GTT enforces index in current session, and for other sessions this index will be invalid to end of session.RegardsPavel\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 24 Jan 2020 10:09:32 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 24.01.2020 12:09, Pavel Stehule wrote:\n>\n>\n> pá 24. 1. 
2020 v 9:39 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n>\n>\n> On 23.01.2020 23:47, Robert Haas wrote:\n> > On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com\n> <mailto:tomas.vondra@2ndquadrant.com>> wrote:\n> >> I proposed just ignoring those new indexes because it seems\n> much simpler\n> >> than alternative solutions that I can think of, and it's not\n> like those\n> >> other solutions don't have other issues.\n> > +1.\n> >\n> >> For example, I've looked at the \"on demand\" building as\n> implemented in\n> >> global_private_temp-8.patch, I kinda doubt adding a bunch of\n> index build\n> >> calls into various places in index code seems somewht suspicious.\n> > +1. I can't imagine that's a safe or sane thing to do.\n> >\n>\n> As far as you know there are two versions of GTT implementations now.\n> And we are going to merge them into single patch.\n> But there are some principle question concerning provided\n> functionality\n> which has to be be discussed:\n> should we prohibit DDL on GTT if there are more than one sessions\n> using\n> it. 
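Most of the restrictions debated here hinge on a predicate like the patch's is_other_backend_use_gtt(). A toy model of such a check — scanning a list of (relfilenode, backend) attachments — is sketched below; the structure name and layout are illustrative assumptions, not the patch's actual shared-memory bookkeeping:

```c
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

/* Hypothetical record: backend `backend` has initialized private
 * storage for the GTT identified by `rel`. */
typedef struct GttAttachment {
    unsigned rel;
    int      backend;
} GttAttachment;

/* True if any backend other than `my_backend` has attached `rel`;
 * the patch refuses DROP / CREATE INDEX on a GTT in that case. */
static bool other_backend_uses_gtt(const GttAttachment *att, size_t n,
                                   unsigned rel, int my_backend)
{
    for (size_t i = 0; i < n; i++)
        if (att[i].rel == rel && att[i].backend != my_backend)
            return true;
    return false;
}
```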
It includes creation/dropping indexes, dropping table,\n> altering table...\n>\n> If the answer is \"yes\", then the question whether to populate new\n> indexes with data is no relevant at all, because such situation\n> will not\n> be possible.\n> But in this case we will get incompatible behavior with normal\n> (permanent) tables and it seems to be very inconvenient from DBA\n> point\n> of view:\n> it will be necessary to enforce all clients to close their\n> sessions to\n> perform some DDL manipulations with GTT.\n> Some DDLs will be very difficult to implement if GTT is used by more\n> than one backend, for example altering table schema.\n>\n> My current solution is to allow creation/droping index on GTT and\n> dropping table itself, while prohibit alter schema at all for GTT.\n> Wenjing's approach is to prohibit any DDL if GTT is used by more than\n> one backend.\n>\n>\n> When I create index on GTT in one session, then I don't expect \n> creating same index in all other sessions that uses same GTT.\n>\n> But I can imagine to creating index on GTT enforces index in current \n> session, and for other sessions this index will be invalid to end of \n> session.\n\nSo there are three possible alternatives:\n\n1. Prohibit index creation of GTT when it used by more than once session.\n2. Create index and populate them with data in all sessions using this GTT.\n3. 
Create index only in current session and do not allow to use it in \nall other sessions already using this GTT (but definitely allow to use \nit in new sessions).\n\n1 is Wenjing's approach, 2 - is my approach, 3 - is your suggestion :)\n\nI can construct the following table with pro/cons of each approach:\n\nApproach\n\tCompatibility with normal table\n\tUser (DBA) friendly\n\tComplexity of implementation\n\tConsistency\n1\n\t-\n\t1: requires restart of all sessions to perform operation\n\t2: requires global cache of GTT\n\t3/: /no man, no problem\n2\n\t+\n\t3: if index is created then it is actually needed, isn't it? \t1: use \nexisted functionality to create index\n\t2: if alter schema is prohibited\n3\n\t-\n\t2: requires restart of all sessions to use created index\n\t3: requires some mechanism for prohibiting index created after first \nsession access to GTT\n\t1: can perform DDL but do no see effect of it\n\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 24.01.2020 12:09, Pavel Stehule\n wrote:\n\n\n\n\n\n\n\n\npá 24. 1. 2020 v 9:39\n odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n\n On 23.01.2020 23:47, Robert Haas wrote:\n > On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n > <tomas.vondra@2ndquadrant.com>\n wrote:\n >> I proposed just ignoring those new indexes because\n it seems much simpler\n >> than alternative solutions that I can think of, and\n it's not like those\n >> other solutions don't have other issues.\n > +1.\n >\n >> For example, I've looked at the \"on demand\"\n building as implemented in\n >> global_private_temp-8.patch, I kinda doubt adding a\n bunch of index build\n >> calls into various places in index code seems\n somewht suspicious.\n > +1. 
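Konstantin's approach 2 in the table above amounts to lazy, per-session index initialization: the first touch of the GTT in a session builds (and populates) that session's local index. A toy simulation of the bookkeeping follows; the real patch would drive this from relation-open and index-access paths, and the names here are invented for illustration:

```c
#include <stdbool.h>
#include <assert.h>

#define MAX_SESSIONS 4

typedef struct GttIndex {
    bool built[MAX_SESSIONS];  /* session-local storage initialized? */
    int  build_count;          /* how many per-session builds ran */
} GttIndex;

/* First access in a session triggers the build (which, in the real
 * implementation, would also populate the index from the session's
 * heap data); later accesses find the index already usable. */
static void gtt_index_access(GttIndex *idx, int session)
{
    if (!idx->built[session]) {
        idx->build_count++;
        idx->built[session] = true;
    }
}
```

Sessions that never touch the table never pay for the build — which is the core of the argument that creating an index on an already-used GTT need not wait on, or disturb, other sessions.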
I can't imagine that's a safe or sane thing to do.\n >\n\n As far as you know there are two versions of GTT\n implementations now.\n And we are going to merge them into single patch.\n But there are some principle question concerning provided\n functionality \n which has to be be discussed:\n should we prohibit DDL on GTT if there are more than one\n sessions using \n it. It includes creation/dropping indexes, dropping table,\n altering table...\n\n If the answer is \"yes\", then the question whether to\n populate new \n indexes with data is no relevant at all, because such\n situation will not \n be possible.\n But in this case we will get incompatible behavior with\n normal \n (permanent) tables and it seems to be very inconvenient from\n DBA point \n of view:\n it will be necessary to enforce all clients to close their\n sessions to \n perform some DDL manipulations with GTT.\n Some DDLs will be very difficult to implement if GTT is used\n by more \n than one backend, for example altering table schema.\n\n My current solution is to allow creation/droping index on\n GTT and \n dropping table itself, while prohibit alter schema at all\n for GTT.\n Wenjing's approach is to prohibit any DDL if GTT is used by\n more than \n one backend.\n\n\n\nWhen I create index on GTT in one session, then I don't\n expect creating same index in all other sessions that uses\n same GTT.\n\n\nBut I can imagine to creating index on GTT enforces index\n in current session, and for other sessions this index will\n be invalid to end of session.\n\n\n\n\n So there are three possible alternatives:\n\n 1. Prohibit index creation of GTT when it used by more than once\n session.\n 2. Create index and populate them with data in all sessions using\n this GTT.\n 3. 
Create index only in current session and do not allow to use it\n in all other sessions already using this GTT (but definitely allow\n to use it in new sessions).\n\n 1 is Wenjing's approach, 2 - is my approach, 3 - is your suggestion\n :)\n\n I can construct the following table with pro/cons of each approach:\n\n\n\n\nApproach\n\nCompatibility with normal table\n\nUser (DBA) friendly\n\nComplexity of implementation\n\nConsistency\n\n\n\n1\n\n-\n\n1: requires restart of all sessions to\n perform operation\n\n2: requires global cache of GTT\n\n3: no man, no\n problem\n\n\n2\n\n+\n\n3: if index is created then it is actually\n needed, isn't it?\n1: use existed functionality to create index\n\n2: if alter schema is prohibited\n\n\n\n3\n\n-\n\n2: requires restart of all sessions to use\n created index\n\n3: requires some mechanism for prohibiting\n index created after first session access to GTT \n\n1: can perform DDL but do no see effect of it\n\n\n\n\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 24 Jan 2020 12:43:10 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "pá 24. 1. 2020 v 10:43 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 24.01.2020 12:09, Pavel Stehule wrote:\n>\n>\n>\n> pá 24. 1. 
2020 v 9:39 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>>\n>>\n>> On 23.01.2020 23:47, Robert Haas wrote:\n>> > On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n>> > <tomas.vondra@2ndquadrant.com> wrote:\n>> >> I proposed just ignoring those new indexes because it seems much\n>> simpler\n>> >> than alternative solutions that I can think of, and it's not like those\n>> >> other solutions don't have other issues.\n>> > +1.\n>> >\n>> >> For example, I've looked at the \"on demand\" building as implemented in\n>> >> global_private_temp-8.patch, I kinda doubt adding a bunch of index\n>> build\n>> >> calls into various places in index code seems somewht suspicious.\n>> > +1. I can't imagine that's a safe or sane thing to do.\n>> >\n>>\n>> As far as you know there are two versions of GTT implementations now.\n>> And we are going to merge them into single patch.\n>> But there are some principle question concerning provided functionality\n>> which has to be be discussed:\n>> should we prohibit DDL on GTT if there are more than one sessions using\n>> it. 
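The vacuum/wraparound concern raised in the review — clamping the cluster-wide newFrozenXid to the oldest GTT relfrozenxid — relies on wraparound-aware XID ordering. Below is a minimal model: xid_precedes is the 32-bit modular comparison PostgreSQL uses (cf. TransactionIdPrecedes in transam.c), while the list handling is an illustrative stand-in for the patch's list_all_session_gtt_frozenxids():

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId     ((TransactionId) 0)
#define FirstNormalTransactionId ((TransactionId) 3)

/* Wraparound-aware "a is older than b": the signed 32-bit difference
 * decides the order, as in PostgreSQL's transam.c. */
static bool xid_precedes(TransactionId a, TransactionId b)
{
    return (int32_t) (a - b) < 0;
}

/* If any session holds GTT data frozen at an older xid, the global
 * frozen xid (and hence clog truncation) must not advance past it. */
static TransactionId clamp_frozen_xid(TransactionId newFrozenXid,
                                      const TransactionId *gtt_xids, size_t n)
{
    TransactionId oldest = InvalidTransactionId;

    for (size_t i = 0; i < n; i++)
        if (gtt_xids[i] >= FirstNormalTransactionId &&
            (oldest == InvalidTransactionId ||
             xid_precedes(gtt_xids[i], oldest)))
            oldest = gtt_xids[i];

    if (oldest != InvalidTransactionId && xid_precedes(oldest, newFrozenXid))
        return oldest;
    return newFrozenXid;
}
```

This also illustrates Konstantin's objection: if a long-lived session never refreshes its GTT's relfrozenxid, this clamp pins the whole cluster's frozen xid, and only a WARNING in the server log reports it.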
It includes creation/dropping indexes, dropping table, altering\n>> table...\n>>\n>> If the answer is \"yes\", then the question whether to populate new\n>> indexes with data is no relevant at all, because such situation will not\n>> be possible.\n>> But in this case we will get incompatible behavior with normal\n>> (permanent) tables and it seems to be very inconvenient from DBA point\n>> of view:\n>> it will be necessary to enforce all clients to close their sessions to\n>> perform some DDL manipulations with GTT.\n>> Some DDLs will be very difficult to implement if GTT is used by more\n>> than one backend, for example altering table schema.\n>>\n>> My current solution is to allow creation/droping index on GTT and\n>> dropping table itself, while prohibit alter schema at all for GTT.\n>> Wenjing's approach is to prohibit any DDL if GTT is used by more than\n>> one backend.\n>>\n>\n> When I create index on GTT in one session, then I don't expect creating\n> same index in all other sessions that uses same GTT.\n>\n> But I can imagine to creating index on GTT enforces index in current\n> session, and for other sessions this index will be invalid to end of\n> session.\n>\n>\n> So there are three possible alternatives:\n>\n> 1. Prohibit index creation of GTT when it used by more than once session.\n> 2. Create index and populate them with data in all sessions using this GTT.\n> 3. 
Create index only in current session and do not allow to use it in all\n> other sessions already using this GTT (but definitely allow to use it in\n> new sessions).\n>\n> 1 is Wenjing's approach, 2 - is my approach, 3 - is your suggestion :)\n>\n> I can construct the following table with pro/cons of each approach:\n>\n> Approach\n> Compatibility with normal table\n> User (DBA) friendly\n> Complexity of implementation\n> Consistency\n> 1\n> -\n> 1: requires restart of all sessions to perform operation\n> 2: requires global cache of GTT\n> 3*: *no man, no problem\n> 2\n> +\n> 3: if index is created then it is actually needed, isn't it? 1: use\n> existed functionality to create index\n> 2: if alter schema is prohibited\n> 3\n> -\n> 2: requires restart of all sessions to use created index\n> 3: requires some mechanism for prohibiting index created after first\n> session access to GTT\n> 1: can perform DDL but do no see effect of it\n>\n>\nYou will see a effect of DDL in current session (where you did the change),\nall other sessions should to live without any any change do reconnect or to\nRESET connect\n\nI don't like 2 - when I do index on global temp table, I don't would to\nwait on indexing on all other sessions. These operations should be\nmaximally independent.\n\nRegards\n\nPavel\n\n\n>\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\npá 24. 1. 2020 v 10:43 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n\n\n\nOn 24.01.2020 12:09, Pavel Stehule\n wrote:\n\n\n\n\n\n\n\npá 24. 1. 
2020 v 9:39\n odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n\n On 23.01.2020 23:47, Robert Haas wrote:\n > On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n > <tomas.vondra@2ndquadrant.com>\n wrote:\n >> I proposed just ignoring those new indexes because\n it seems much simpler\n >> than alternative solutions that I can think of, and\n it's not like those\n >> other solutions don't have other issues.\n > +1.\n >\n >> For example, I've looked at the \"on demand\"\n building as implemented in\n >> global_private_temp-8.patch, I kinda doubt adding a\n bunch of index build\n >> calls into various places in index code seems\n somewht suspicious.\n > +1. I can't imagine that's a safe or sane thing to do.\n >\n\n As far as you know there are two versions of GTT\n implementations now.\n And we are going to merge them into single patch.\n But there are some principle question concerning provided\n functionality \n which has to be be discussed:\n should we prohibit DDL on GTT if there are more than one\n sessions using \n it. 
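As an aside from earlier in the review: the patch's pair of two-branch Asserts on relfrozenxid/relminmxid was claimed to collapse to an XOR form. Treating "frozenxid is invalid" as the complement of "frozenxid is normal" (true unless a reserved XID appears, which these asserts do not expect), the equivalence can be checked exhaustively:

```c
#include <stdbool.h>
#include <assert.h>

/* g = RELATION_IS_GLOBAL_TEMP(rel), v = TransactionIdIsNormal(frozenxid).
 * Patch form:  (g && !v) || (!g && v)
 * XOR form:     g ^ v
 * Returns true when both predicates give the same verdict. */
static bool forms_agree(bool g, bool v)
{
    bool patch_form = (g && !v) || (!g && v);
    bool xor_form   = (g ^ v) != 0;
    return patch_form == xor_form;
}
```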
It includes creation/dropping indexes, dropping table,\n altering table...\n\n If the answer is \"yes\", then the question whether to\n populate new \n indexes with data is no relevant at all, because such\n situation will not \n be possible.\n But in this case we will get incompatible behavior with\n normal \n (permanent) tables and it seems to be very inconvenient from\n DBA point \n of view:\n it will be necessary to enforce all clients to close their\n sessions to \n perform some DDL manipulations with GTT.\n Some DDLs will be very difficult to implement if GTT is used\n by more \n than one backend, for example altering table schema.\n\n My current solution is to allow creation/droping index on\n GTT and \n dropping table itself, while prohibit alter schema at all\n for GTT.\n Wenjing's approach is to prohibit any DDL if GTT is used by\n more than \n one backend.\n\n\n\nWhen I create index on GTT in one session, then I don't\n expect creating same index in all other sessions that uses\n same GTT.\n\n\nBut I can imagine to creating index on GTT enforces index\n in current session, and for other sessions this index will\n be invalid to end of session.\n\n\n\n\n So there are three possible alternatives:\n\n 1. Prohibit index creation of GTT when it used by more than once\n session.\n 2. Create index and populate them with data in all sessions using\n this GTT.\n 3. 
Create index only in current session and do not allow to use it\n in all other sessions already using this GTT (but definitely allow\n to use it in new sessions).\n\n 1 is Wenjing's approach, 2 - is my approach, 3 - is your suggestion\n :)\n\n I can construct the following table with pro/cons of each approach:\n\n\n\n\nApproach\n\nCompatibility with normal table\n\nUser (DBA) friendly\n\nComplexity of implementation\n\nConsistency\n\n\n\n1\n\n-\n\n1: requires restart of all sessions to\n perform operation\n\n2: requires global cache of GTT\n\n3: no man, no\n problem\n\n\n2\n\n+\n\n3: if index is created then it is actually\n needed, isn't it?\n1: use existed functionality to create index\n\n2: if alter schema is prohibited\n\n\n\n3\n\n-\n\n2: requires restart of all sessions to use\n created index\n\n3: requires some mechanism for prohibiting\n index created after first session access to GTT \n\n1: can perform DDL but do no see effect of it\n\n\n\n\nYou will see a effect of DDL in current session (where you did the change), all other sessions should to live without any any change do reconnect or to RESET connectI don't like 2 - when I do index on global temp table, I don't would to wait on indexing on all other sessions. These operations should be maximally independent.RegardsPavel \n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 24 Jan 2020 13:15:13 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 24.01.2020 15:15, Pavel Stehule wrote:\n> You will see a effect of DDL in current session (where you did the \n> change), all other sessions should to live without any any change do \n> reconnect or to RESET connect\n>\nWhy? 
I found this requirement quite unnatural and contradictory to the \nbehavior of normal tables.\nActually, one of the motivations for adding global temp tables to Postgres is \nto provide compatibility with Oracle.\nAlthough I know that Oracle design decisions were never considered \naxioms by the Postgres community,\nin the case of the GTT design I think that we should take the Oracle \napproach into account.\nAnd GTT in Oracle behaves exactly as in my implementation:\n\nhttps://www.oracletutorial.com/oracle-basics/oracle-global-temporary-table/\n\nIt is not clear from this documentation whether an index created for a GTT in \none session can be used in another session which already has some data \nin this GTT.\nBut I did experiment with an installed Oracle server and can confirm that \nit actually works this way.\n\nSo I do not understand why we need to complicate our GTT \nimplementation in order to prohibit useful functionality and introduce \ninconsistency between the behavior of normal and global temp tables.\n\n\n\n> I don't like 2 - when I do index on global temp table, I don't would \n> to wait on indexing on all other sessions. These operations should be \n> maximally independent.\n>\n\nNobody suggests waiting for index builds in all sessions.\nIndexes will be constructed on demand when a session accesses this table.\nIf a session never accesses this table, then the index will never be \nconstructed.\n\nOnce again: the logic of dealing with indexes on a GTT is very simple.\nFor normal tables, indexes are initialized at the time they are \ncreated.\nFor a GTT this is not true: we have to initialize an index on demand when it is \nfirst accessed in a session.\n\nSo it has to be handled in some way.\nThe question is only whether we should allow creation of an index for a table \nalready populated with some data.\nActually, this doesn't require any additional effort. 
We can use existed \nbuild_index function which initialize index and populates it with data.\nSo the solution proposed for me is most natural, convenient and simplest \nsolution at the same time. And compatible with Oracle.\n\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n> -- \n> Konstantin Knizhnik\n> Postgres Professional:http://www.postgrespro.com\n> The Russian Postgres Company\n>\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 24.01.2020 15:15, Pavel Stehule\n wrote:\n\n\n\nYou will see a effect of DDL in current session\n (where you did the change), all other sessions should to live\n without any any change do reconnect or to RESET connect\n\n\n\n\n\n\n Why? I found this requirement quit unnatural and contradicting to\n the behavior of normal tables.\n Actually one of motivation for adding global tempo tables to\n Postgres is to provide compatibility with Oracle.\n Although I know that Oracle design decisions were never considered\n as  axioms by Postgres community,\n but ni case of GTT design I think that we should take in account\n Oracle approach.\n And GTT in Oracle behaves exactly as in my implementation:\n\nhttps://www.oracletutorial.com/oracle-basics/oracle-global-temporary-table/\n\n It is not clear from this documentation whether index created for\n GTT in one session can be used in another session which already has\n some data in this GTT.\n But I did experiment with install Oracle server and  can confirm\n that actually works in this way.\n\n So I do not understand why do we need to complicate our GTT\n implementation in order to prohibit useful functionality and\n introduce inconsistency between behavior of normal and global temp\n tables.\n\n\n\n\n\n\nI don't like 2 - when I do index on global temp table, I\n don't would to wait on indexing on all other sessions. 
These\n operations should be maximally independent.\n\n\n\n\n\n\n Nobody suggest to wait building index in all sessions.\n Indexes will be constructed on demand when session access this\n table.\n If session will no access this table at all, then index will never\n be constructed.\n\n Once again: logic of dealing with indexes in GTT is very simple.\n For normal tables, indexes are initialized at the tame when them are\n created.\n For GTT it is not true. We have to initialize index on demand when\n it is accessed first time in session.\n\n So it has to be handled in any way. \n The question is only whether we should allow creation of index for\n table already populated with some data?\n Actually doesn't require some additional efforts. We can use existed\n build_index function which initialize index and populates it with\n data.\n So the solution proposed for me is most natural, convenient and\n simplest solution at the same time. And compatible with Oracle.\n\n\n\n\n\n\n\nRegards\n\n\nPavel\n\n \n\n \n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company \n\n\n\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 24 Jan 2020 16:17:17 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "pá 24. 1. 2020 v 14:17 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 24.01.2020 15:15, Pavel Stehule wrote:\n>\n> You will see a effect of DDL in current session (where you did the\n> change), all other sessions should to live without any any change do\n> reconnect or to RESET connect\n>\n> Why? 
I found this requirement quit unnatural and contradicting to the\n> behavior of normal tables.\n> Actually one of motivation for adding global tempo tables to Postgres is\n> to provide compatibility with Oracle.\n> Although I know that Oracle design decisions were never considered as\n> axioms by Postgres community,\n> but ni case of GTT design I think that we should take in account Oracle\n> approach.\n> And GTT in Oracle behaves exactly as in my implementation:\n>\n> https://www.oracletutorial.com/oracle-basics/oracle-global-temporary-table/\n>\n> It is not clear from this documentation whether index created for GTT in\n> one session can be used in another session which already has some data in\n> this GTT.\n> But I did experiment with install Oracle server and can confirm that\n> actually works in this way.\n>\n> So I do not understand why do we need to complicate our GTT implementation\n> in order to prohibit useful functionality and introduce inconsistency\n> between behavior of normal and global temp tables.\n>\n>\n>\n> I don't like 2 - when I do index on global temp table, I don't would to\n> wait on indexing on all other sessions. These operations should be\n> maximally independent.\n>\n>\n> Nobody suggest to wait building index in all sessions.\n> Indexes will be constructed on demand when session access this table.\n> If session will no access this table at all, then index will never be\n> constructed.\n>\n\n> Once again: logic of dealing with indexes in GTT is very simple.\n> For normal tables, indexes are initialized at the tame when them are\n> created.\n> For GTT it is not true. We have to initialize index on demand when it is\n> accessed first time in session.\n>\n> So it has to be handled in any way.\n> The question is only whether we should allow creation of index for table\n> already populated with some data?\n> Actually doesn't require some additional efforts. 
We can use existed\n> build_index function which initialize index and populates it with data.\n> So the solution proposed for me is most natural, convenient and simplest\n> solution at the same time. And compatible with Oracle.\n>\n\nI cannot to evaluate your proposal, and I am sure, so you know more about\nthis code.\n\nThere is a question if we can allow to build local temp index on global\ntemp table. It is different situation. When I work with global properties\npersonally I prefer total asynchronous implementation of any DDL operations\nfor other than current session. When it is true, then I have not any\nobjection. For me, good enough design of any DDL can be based on catalog\nchange without forcing to living tables.\n\nI see following disadvantage of your proposal. See scenario\n\n1. I have two sessions\n\nA - small GTT with active owner\nB - big GTT with some active application.\n\nsession A will do new index - it is fast, but if creating index is forced\non B on demand (when B was touched), then this operation have to wait after\nindex will be created.\n\nSo I afraid build a index on other sessions on GTT when GTT tables in other\nsessions will not be empty.\n\nRegards\n\nPavel\n\n\n\n>\n>\n>\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>>\n>> --\n>> Konstantin Knizhnik\n>> Postgres Professional: http://www.postgrespro.com\n>> The Russian Postgres Company\n>>\n>>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\npá 24. 1. 2020 v 14:17 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n\n\n\nOn 24.01.2020 15:15, Pavel Stehule\n wrote:\n\n\nYou will see a effect of DDL in current session\n (where you did the change), all other sessions should to live\n without any any change do reconnect or to RESET connect\n\n\n\n\n\n\n Why? 
I found this requirement quit unnatural and contradicting to\n the behavior of normal tables.\n Actually one of motivation for adding global tempo tables to\n Postgres is to provide compatibility with Oracle.\n Although I know that Oracle design decisions were never considered\n as  axioms by Postgres community,\n but ni case of GTT design I think that we should take in account\n Oracle approach.\n And GTT in Oracle behaves exactly as in my implementation:\n\nhttps://www.oracletutorial.com/oracle-basics/oracle-global-temporary-table/\n\n It is not clear from this documentation whether index created for\n GTT in one session can be used in another session which already has\n some data in this GTT.\n But I did experiment with install Oracle server and  can confirm\n that actually works in this way.\n\n So I do not understand why do we need to complicate our GTT\n implementation in order to prohibit useful functionality and\n introduce inconsistency between behavior of normal and global temp\n tables.\n\n\n\n\n\n\nI don't like 2 - when I do index on global temp table, I\n don't would to wait on indexing on all other sessions. These\n operations should be maximally independent.\n\n\n\n\n\n\n Nobody suggest to wait building index in all sessions.\n Indexes will be constructed on demand when session access this\n table.\n If session will no access this table at all, then index will never\n be constructed. \n\n Once again: logic of dealing with indexes in GTT is very simple.\n For normal tables, indexes are initialized at the tame when them are\n created.\n For GTT it is not true. We have to initialize index on demand when\n it is accessed first time in session.\n\n So it has to be handled in any way. \n The question is only whether we should allow creation of index for\n table already populated with some data?\n Actually doesn't require some additional efforts. 
We can use existed\n build_index function which initialize index and populates it with\n data.\n So the solution proposed for me is most natural, convenient and\n simplest solution at the same time. And compatible with Oracle.I cannot to evaluate your proposal, and I am sure, so you know more about this code. There is a question if we can allow to build local temp index on global temp table. It is different situation. When I work with global properties personally I prefer total asynchronous implementation of any DDL operations for other than current session. When it is true, then I have not any objection. For me, good enough design of any DDL can be based on catalog change without forcing to living tables. I see following disadvantage of your proposal. See scenario1. I have two sessions A - small GTT with active ownerB - big GTT with some active application.session A will do new index - it is fast, but if creating index is forced on B on demand (when B was touched), then this operation have to wait after index will be created.So I afraid build a index on other sessions on GTT when GTT tables in other sessions will not be empty.RegardsPavel  \n\n\n\n\n\n\n\nRegards\n\n\nPavel\n\n \n\n \n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company \n\n\n\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 24 Jan 2020 20:39:17 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Thank you for review patch.\n\n> 2020年1月24日 下午4:20,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 23.01.2020 19:28, 曾文旌(义从) wrote:\n>> \n>> I'm trying to improve this part of the implementation in global_temporary_table_v7-pg13.patch\n>> Please check my patch and give me feedback.\n>> \n>> \n>> Thanks\n>> \n>> Wenjing\n>> \n>> \n> \n> 
Below is my short review of the patch:
> 
> + /*
> + * For global temp table only
> + * use AccessExclusiveLock for ensure safety
> + */
> + {
> + {
> + \"on_commit_delete_rows\",
> + \"global temp table on commit options\",
> + RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,
> + ShareUpdateExclusiveLock
> + },
> + true
> + }, 
> 
> 
> The comment seems to be confusing: it says about AccessExclusiveLock but actually uses ShareUpdateExclusiveLock.
There is a problem with the comment description; I will fix it.

> 
> - Assert(TransactionIdIsNormal(onerel->rd_rel->relfrozenxid));
> - Assert(MultiXactIdIsValid(onerel->rd_rel->relminmxid));
> + Assert((RELATION_IS_GLOBAL_TEMP(onerel) && onerel->rd_rel->relfrozenxid == InvalidTransactionId) ||
> + (!RELATION_IS_GLOBAL_TEMP(onerel) && TransactionIdIsNormal(onerel->rd_rel->relfrozenxid)));
> + Assert((RELATION_IS_GLOBAL_TEMP(onerel) && onerel->rd_rel->relminmxid == InvalidMultiXactId) ||
> + (!RELATION_IS_GLOBAL_TEMP(onerel) && MultiXactIdIsValid(onerel->rd_rel->relminmxid)));
> 
> It is actually equivalent to:
> 
> Assert(RELATION_IS_GLOBAL_TEMP(onerel) ^ TransactionIdIsNormal(onerel->rd_rel->relfrozenxid);
> Assert(RELATION_IS_GLOBAL_TEMP(onerel) ^ MultiXactIdIsValid(onerel->rd_rel->relminmxid));
Yes, thank you for pointing that out. It's simpler.

> 
> + /* clean temp relation files */
> + if (max_active_gtt > 0)
> + RemovePgTempFiles();
> +
> /*
> 
> I wonder why do we need some special check for GTT here.
> From my point of view cleanup at startup of local storage of temp tables should be performed in the same way for local and global temp tables.
After an OOM kill, autovacuum cleans up an isolated local temp table like other orphan temporary tables: the definition of the local temp table is deleted together with its storage file.
But a GTT cannot be handled that way. 
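As a rough sketch of the startup-cleanup idea (the directory layout and file names below are invented for illustration; this is not PostgreSQL's actual on-disk format): leftover per-session GTT storage files from a crashed run can simply be unlinked at startup, while the shared catalog entry of the GTT must be kept, since the table definition is shared by all sessions:

```python
import os

# Sketch: at server startup, drop orphaned per-session GTT storage files
# left behind by a previous crashed run.  Only the per-session storage is
# removed; the GTT's shared definition (its catalog entry) stays intact.
def remove_orphan_gtt_files(tmp_dir, prefix="pgsql_tmp_gtt"):
    removed = []
    for name in sorted(os.listdir(tmp_dir)):
        if name.startswith(prefix):
            os.remove(os.path.join(tmp_dir, name))
            removed.append(name)
    return removed
```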
So we have this implementation in my patch.
If you have other solutions, please let me know.

> 
> 
> - new_rel_reltup->relfrozenxid = relfrozenxid;
> - new_rel_reltup->relminmxid = relminmxid;
> + /* global temp table not remember transaction info in catalog */
> + if (relpersistence == RELPERSISTENCE_GLOBAL_TEMP)
> + {
> + new_rel_reltup->relfrozenxid = InvalidTransactionId;
> + new_rel_reltup->relminmxid = InvalidMultiXactId;
> + }
> + else
> + {
> + new_rel_reltup->relfrozenxid = relfrozenxid;
> + new_rel_reltup->relminmxid = relminmxid;
> + }
> +
> 
> 
> Why do we need to do it for GTT?
> Did you check that there will be no problems with GTT in case of XID wraparound?
> Right now if you create temp table and keep session open, then it will block XID wraparound.
In my design:
1 Because different sessions have different transaction information, I chose to store the transaction information of a GTT in MyProc, not in the catalog.
2 As for the XID wraparound problem, the cause is the design of temp table storage (both local and global temp tables), which makes it impossible for autovacuum to vacuum them.
It should be completely solved at the storage level.

> 
> + /* We allow to drop global temp table only this session use it */
> + if (RELATION_IS_GLOBAL_TEMP(rel))
> + {
> + if (is_other_backend_use_gtt(rel->rd_node))
> + elog(ERROR, \"can not drop relation when other backend attached this global temp table\");
> + }
> +
> 
> Here we once again introduce incompatibility with normal (permanent) tables.
> Assume that DBA or programmer need to change format of GTT. 
But there are some active sessions which have used this GTT sometime in the past.
> We will not be able to drop this GTT until all this sessions are terminated.
> I do not think that it is acceptable behaviour.
In fact, the DBA can still complete DDL on the GTT.
I've provided a set of functions for this case.
If the DBA needs to modify a GTT A (or drop the GTT, or create an index on the GTT), he needs to:
1 Use the pg_gtt_attached_pids view to list the pids of the sessions that are using the GTT A.
2 Use pg_terminate_backend(pid) to terminate them, except his own session.
3 Do the ALTER on GTT A.

> 
> + LOCKMODE lockmode = AccessExclusiveLock;
> +
> + /* truncate global temp table only need RowExclusiveLock */
> + if (get_rel_persistence(rid) == RELPERSISTENCE_GLOBAL_TEMP)
> + lockmode = RowExclusiveLock;
> 
> 
> What are the reasons of using RowExclusiveLock for GTT instead of AccessExclusiveLock?
> Yes, GTT data is access only by one backend so no locking here seems to be needed at all.
> But I wonder what are the motivations/benefits of using weaker lock level here?
1 TRUNCATE on a GTT deletes only the data in the current session, so there is no need to take a high-level lock.
2 I think it still needs to be blocked by DDL on the GTT, which is why I use RowExclusiveLock.

> There should be no conflicts in any case...
> 
> + /* We allow to create index on global temp table only this session use it */
> + if (is_other_backend_use_gtt(heapRelation->rd_node))
> + elog(ERROR, \"can not create index when have other backend attached this global temp table\");
> +
> 
> The same argument as in case of dropping GTT: I do not think that prohibiting DLL operations on GTT used by more than one backend is bad idea.
The idea was to give the GTT almost all the features of a regular table with few code changes.
In the current version the DBA can still do all DDL for a GTT, as I've already described.

> 
> + /* global temp table not support foreign key constraint yet */
> + if (RELATION_IS_GLOBAL_TEMP(pkrel))
> + 
ereport(ERROR,\n> + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> + errmsg(\"referenced relation \\\"%s\\\" is not a global temp table\",\n> + RelationGetRelationName(pkrel))));\n> +\n> \n> Why do we need to prohibit foreign key constraint on GTT?\nIt may be possible to support FK on GTT in later versions. Before that, I need to check some code.\n\n> \n> + /*\n> + * Global temp table get frozenxid from MyProc\n> + * to avoid the vacuum truncate clog that gtt need.\n> + */\n> + if (max_active_gtt > 0)\n> + {\n> + TransactionId oldest_gtt_frozenxid =\n> + list_all_session_gtt_frozenxids(0, NULL, NULL, NULL);\n> +\n> + if (TransactionIdIsNormal(oldest_gtt_frozenxid) &&\n> + TransactionIdPrecedes(oldest_gtt_frozenxid, newFrozenXid))\n> + {\n> + ereport(WARNING,\n> + (errmsg(\"global temp table oldest FrozenXid is far in the past\"),\n> + errhint(\"please truncate them or kill those sessions that use them.\")));\n> + newFrozenXid = oldest_gtt_frozenxid;\n> + }\n> + }\n> +\n> \n> As far as I understand, content of GTT will never be processes by autovacuum.\n> So who will update frozenxid of GTT?\n> I see that up_gtt_relstats is invoked when:\n> - index is created on GTT\n> - GTT is truncated\n> - GTT is vacuumed\n> So unless GTT is explicitly vacuumed by user, its GTT is and them will not be taken in account \n> when computing new frozen xid value. Autovacumm will produce this warnings (which will ton be visible by end user and only appended to the log).\n> And at some moment of time wrap around happen and if there still some old active GTT, we will get incorrect results.\nI have already described my point in previous emails.\n\n1. The core problem is that the data contains transaction information (xid), which needs to be vacuum(freeze) regularly to avoid running out of xid.\nThe autovacuum supports vacuum regular table but local temp does not. autovacuum also does not support GTT.\n\n2. 
However, the difference between the local temp table and the global temp table (GTT) is that:
a) For a local temp table: one table has one copy of data; the frozenxid of a local temp table is stored in the catalog (pg_class). 
b) For a global temp table: each session has a separate copy of data, so one GTT may contain up to MaxBackends frozenxids,
and I don't think it's a good idea to keep the frozenxids of a GTT in the catalog (pg_class). 
It becomes a question: how to handle GTT transaction information?

I agree that problem 1 should be completely solved by some feature, such as local transactions. It is definitely not included in the GTT patch.
But I think we need to ensure the durability of GTT data. For example, data in a GTT must not be lost because the clog was truncated. That belongs to problem 2.

For problem 2:
If we ignore the frozenxid of a GTT, then when vacuum truncates clog that the GTT still needs, the GTT data in some sessions is completely lost.
Perhaps we could consider letting autovacuum terminate those sessions that contain \"too old\" data, 
but that's not very friendly, so I didn't choose to implement it in the first version.
Maybe you have a better idea.


Wenjing

> 
> 
> 
> --
> Konstantin Knizhnik
> Postgres Professional: http://www.postgrespro.com <http://www.postgrespro.com/>
> The Russian Postgres Company 


Thank you for review patch.2020年1月24日 下午4:20,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:




On 23.01.2020 19:28, 曾文旌(义从) wrote:




I'm trying to improve this part of the implementation in global_temporary_table_v7-pg13.patch
Please check my patch and give me feedback.


Thanks

Wenjing


 Below is my short review of the patch:

 +    /*
 +     * For global temp table only
 +     * use AccessExclusiveLock for ensure safety
 +     */
 +    {
 +        {
 +            \"on_commit_delete_rows\",
 +            \"global temp table on commit options\",
 +            RELOPT_KIND_HEAP | 
RELOPT_KIND_PARTITIONED,\n +            ShareUpdateExclusiveLock\n +        },\n +        true\n +    },    \n\n\n The comment seems to be confusing: it says about AccessExclusiveLock\n but actually uses ShareUpdateExclusiveLock.There is a problem with the comment description, I will fix it.\n\n -   \n Assert(TransactionIdIsNormal(onerel->rd_rel->relfrozenxid));\n -    Assert(MultiXactIdIsValid(onerel->rd_rel->relminmxid));\n +    Assert((RELATION_IS_GLOBAL_TEMP(onerel) &&\n onerel->rd_rel->relfrozenxid == InvalidTransactionId) ||\n +        (!RELATION_IS_GLOBAL_TEMP(onerel) &&\n TransactionIdIsNormal(onerel->rd_rel->relfrozenxid)));\n +    Assert((RELATION_IS_GLOBAL_TEMP(onerel) &&\n onerel->rd_rel->relminmxid == InvalidMultiXactId) ||\n +        (!RELATION_IS_GLOBAL_TEMP(onerel) &&\n MultiXactIdIsValid(onerel->rd_rel->relminmxid)));\n  \n It is actually equivalent to:\n\n Assert(RELATION_IS_GLOBAL_TEMP(onerel) ^\n TransactionIdIsNormal(onerel->rd_rel->relfrozenxid);\n Assert(RELATION_IS_GLOBAL_TEMP(onerel) ^\n MultiXactIdIsValid(onerel->rd_rel->relminmxid));Yes, Thank you for your points out, It's simpler.\n\n +    /* clean temp relation files */\n +    if (max_active_gtt > 0)\n +        RemovePgTempFiles();\n +\n      /*\n  \n I wonder why do we need some special check for GTT here.\n From my point of view cleanup at startup of local storage of temp\n tables should be performed in the same way for local and global temp\n tables.After oom kill, In autovacuum, the Isolated local temp table will be cleaned like orphan temporary tables. The definition of local temp table is deleted with the storage file. But GTT can not do that. 
So we have the this implementation in my patch.If you have other solutions, please let me know.\n\n\n -    new_rel_reltup->relfrozenxid = relfrozenxid;\n -    new_rel_reltup->relminmxid = relminmxid;\n +    /* global temp table not remember transaction info in catalog\n */\n +    if (relpersistence == RELPERSISTENCE_GLOBAL_TEMP)\n +    {\n +        new_rel_reltup->relfrozenxid = InvalidTransactionId;\n +        new_rel_reltup->relminmxid = InvalidMultiXactId;\n +    }\n +    else\n +    {\n +        new_rel_reltup->relfrozenxid = relfrozenxid;\n +        new_rel_reltup->relminmxid = relminmxid;\n +    }\n +\n\n\n Why do we need to do it for GTT?\n Did you check that there will be no problems with GTT in case of XID\n wraparound?\n Right now if you create temp table and keep session open, then it\n will block XID wraparound.In my design1 Because different sessions have different transaction information, I choose to store the transaction information of GTT in MyProc,not catalog.2 About the XID wraparound problem, the reason is the design of the temp table storage(local temp table and global temp table) that makes it can not to do vacuum by autovacuum. It should be completely solve at the storage level.\n\n +    /* We allow to drop global temp table only this session use it\n */\n +    if (RELATION_IS_GLOBAL_TEMP(rel))\n +    {\n +        if (is_other_backend_use_gtt(rel->rd_node))\n +            elog(ERROR, \"can not drop relation when other backend\n attached this global temp table\");\n +    }\n +\n\n Here we once again introduce incompatibility with normal (permanent)\n tables.\n Assume that DBA or programmer need to change format of GTT. 
But\n there are some active sessions which have used this GTT sometime in\n the past.\n We will not be able to drop this GTT until all this sessions are\n terminated.\n I do not think that it is acceptable behaviour.In fact, The dba can still complete the DDL of the GTT.I've provided a set of functions for this case.If the dba needs to modify a GTT A(or drop GTT or create index on GTT), he needs to do:1 Use the pg_gtt_attached_pids view to list the pids for the session that is using the GTT A.2 Use pg_terminate_backend(pid)terminate they except itself.3 Do alter GTT A.\n\n +        LOCKMODE    lockmode = AccessExclusiveLock;\n +\n +        /* truncate global temp table only need RowExclusiveLock */\n +        if (get_rel_persistence(rid) == RELPERSISTENCE_GLOBAL_TEMP)\n +            lockmode = RowExclusiveLock;\n\n\n What are the reasons of using RowExclusiveLock for GTT instead of\n AccessExclusiveLock?\n Yes, GTT data is access only by one backend so no locking here seems\n to be needed at all.\n But I wonder what are the motivations/benefits of using weaker lock\n level here?1 Truncate GTT deletes only the data in the session, so no need use high-level lock.2 I think it still needs to be block by DDL of GTT, which is why I use RowExclusiveLock.\n There should be no conflicts in any case...\n\n +        /* We allow to create index on global temp table only this\n session use it */\n +        if (is_other_backend_use_gtt(heapRelation->rd_node))\n +            elog(ERROR, \"can not create index when have other\n backend attached this global temp table\");\n +\n\n The same argument as in case of dropping GTT: I do not think that\n prohibiting DLL operations on GTT used by more than one backend is\n bad idea.The idea was to give the GTT almost all the features of a regular table with few code changes.The current version DBA can still do all DDL for GTT, I've already described.\n\n +    /* global temp table not support foreign key constraint yet */\n +    if 
(RELATION_IS_GLOBAL_TEMP(pkrel))\n +        ereport(ERROR,\n +                (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n +                 errmsg(\"referenced relation \\\"%s\\\" is not a global\n temp table\",\n +                        RelationGetRelationName(pkrel))));\n +\n\n Why do we need to prohibit foreign key constraint on GTT?It may be possible to support FK on GTT in later versions. Before that, I need to check some code.\n\n +    /*\n +     * Global temp table get frozenxid from MyProc\n +     * to avoid the vacuum truncate clog that gtt need.\n +     */\n +    if (max_active_gtt > 0)\n +    {\n +        TransactionId oldest_gtt_frozenxid =\n +            list_all_session_gtt_frozenxids(0, NULL, NULL, NULL);\n +\n +        if (TransactionIdIsNormal(oldest_gtt_frozenxid) &&\n +            TransactionIdPrecedes(oldest_gtt_frozenxid,\n newFrozenXid))\n +        {\n +            ereport(WARNING,\n +                (errmsg(\"global temp table oldest FrozenXid is far\n in the past\"),\n +                 errhint(\"please truncate them or kill those\n sessions that use them.\")));\n +            newFrozenXid = oldest_gtt_frozenxid;\n +        }\n +    }\n +\n\n As far as I understand, content of GTT will never be processes by\n autovacuum.\n So who will update frozenxid of GTT?\n I see that up_gtt_relstats is invoked when:\n - index is created on GTT\n - GTT is truncated\n - GTT is vacuumed\n So unless GTT is explicitly vacuumed by user, its GTT is and them\n will not be taken in account \n when computing new frozen xid value. Autovacumm will produce this\n warnings (which will ton be visible by end user and only appended to\n the log).\n And at some moment of time wrap around happen and if there still\n some old active GTT, we will get incorrect results.I have already described my point in previous emails.1. 
The core problem is that the data contains transaction information (xid), which needs to be vacuum(freeze) regularly to avoid running out of xid.The autovacuum supports vacuum regular table but local temp does not. autovacuum also does not support GTT.2. However, the difference between the local temp table and the global temp table(GTT) is thata) For local temp table: one table hava one piece of data. the frozenxid of one local temp table is store in the catalog(pg_class). b) For global temp table: each session has a separate copy of data, one GTT may contain maxbackend frozenxid.and I don't think it's a good idea to keep frozenxid of GTT in the catalog(pg_class). It becomes a question: how to handle GTT transaction information?I agree that problem 1 should be completely solved by a some feature, such as local transactions. It is definitely not included in the GTT patch.But, I think we need to ensure the durability of GTT data. For example, data in GTT cannot be lost due to the clog being cleaned up. It belongs to problem 2.For problem 2If we ignore the frozenxid of GTT, when vacuum truncates the clog that GTT need, the GTT data in some sessions is completely lost.Perhaps we could consider let aotuvacuum terminate those sessions that contain \"too old\" data, But It's not very friendly, so I didn't choose to implement it in the first version.Maybe you have a better idea.Wenjing\n\n\n\n --\n Konstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sat, 25 Jan 2020 23:15:48 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 24.01.2020 22:39, Pavel Stehule wrote:\n> I cannot to evaluate your proposal, and I am sure, so you know more \n> about this code.\n>\n> There is a question if we can allow to build local temp index on \n> global temp table. 
It is different situation. When I work with global 
> properties personally I prefer total asynchronous implementation of 
> any DDL operations for other than current session. When it is true, 
> then I have not any objection. For me, good enough design of any DDL 
> can be based on catalog change without forcing to living tables.
>

From my point of view there are two different use cases for temp tables:
1. A backend needs some private data source which is specific to this session
and has no relation to the activities of other sessions.
2. We need a table containing private session data, but which is used
in the same way by all database users.

In the first case the current Postgres temp tables work well (if we forget
for a moment about all the known issues related to temp tables).
Global temp tables address the second scenario. Assume that we write some
stored procedure, or implement some business logic outside the database, and
want to perform some complex analytic query which requires a temp table
for storing intermediate results. In this case we can create the GTT with
all needed indexes at the moment of database initialization
and avoid any extra DDL during query execution. This will prevent
catalog bloat and make query execution more efficient.

I do not see any reason to allow building local indexes for a global table.
Yes, it can happen that some session has a small amount of
data in a particular GTT and another a large amount of data in the same
table. But if the access pattern is the same (and the nature of GTT assumes it), then an index
is either useful in both cases or useless in both cases.




> I see following disadvantage of your proposal. See scenario
>
> 1. 
I have two sessions
>
> A - small GTT with active owner
> B - big GTT with some active application.
>
> session A will do new index - it is fast, but if creating index is 
> forced on B on demand (when B was touched), then this operation have 
> to wait after index will be created.
>
> So I afraid build a index on other sessions on GTT when GTT tables in 
> other sessions will not be empty.


Yes, it is true. But it is not the most realistic scenario from my point of view.
As I explained above, a GTT should be used when we need temporary storage
accessed in the same way by all clients.
If (as with normal tables) at some moment the DBA realizes that
efficient execution of some queries needs extra indexes,
then he should be able to add them. It is very inconvenient and unnatural
to prohibit the DBA from doing it until all sessions using this GTT are closed
(which may never happen),
or to require all sessions to restart in order to use the index.

So it is possible to imagine different scenarios of working with GTTs.
But from my point of view the only non-contradictory model of their
behavior is to make it compatible with normal tables.
And do not forget about compatibility with Oracle. Simplifying the
porting of existing applications from Oracle to Postgres may be the
main motivation for adding GTT to Postgres. And making them incompatible
with Oracle would be very strange.


-- 
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company










On 24.01.2020 22:39, Pavel Stehule
 wrote:



I cannot to evaluate your proposal, and I am sure,
 so you know more about this code. 



There is a question if we can allow to build local temp
 index on global temp table. It is different situation. When
 I work with global properties personally I prefer total
 asynchronous implementation of any DDL operations for other
 than current session. 
", "msg_date": "Mon, 27 Jan 2020 12:11:29 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 25.01.2020 18:15, 曾文旌(义从) wrote:\n> I wonder why do we need some special check for GTT here.\n>> From my point of view cleanup at startup of local storage of temp \n>> tables should be performed in the same way for local and global temp \n>> tables.\n> After oom kill, In autovacuum, the Isolated local temp table will be \n> cleaned like orphan temporary tables. The definition of local temp \n> table is deleted with the storage file.\n> But GTT can not do that. So we have this implementation in my patch.\n> If you have other solutions, please let me know.\n>\nI wonder if it is possible that autovacuum or some other Postgres \nprocess is killed by OOM and postmaster does not notice it and doesn't \nrestart the Postgres instance?\nAs far as I know, a crash of any process connected to Postgres shared \nmemory (and autovacuum definitely has such a connection) causes a Postgres \nrestart.\n\n\n> In my design\n> 1 Because different sessions have different transaction information, I \n> choose to store the transaction information of GTT in MyProc,not catalog.\n> 2 About the XID wraparound problem, the reason is the design of the \n> temp table storage(local temp table and global temp table) that makes \n> it can not to do vacuum by autovacuum.\n> It should be completely solved at the storage level.\n>\n\nMy point of view is that vacuuming of temp tables is a common problem for \nlocal and global temp tables.\nSo it has to be addressed in a common way, and we should not try to \nfix this problem only for GTT.\n\n\n> In fact, The dba can still complete the DDL of the GTT.\n> I've provided a set of 
functions for this case.\n> If the dba needs to modify a GTT A(or drop GTT or create index on \n> GTT), he needs to do:\n> 1 Use the pg_gtt_attached_pids view to list the pids for the session \n> that is using the GTT A.\n> 2 Use pg_terminate_backend(pid) to terminate them, except itself.\n> 3 Do alter GTT A.\n>\nIMHO forced termination of client sessions is not an acceptable solution.\nAnd it is not an absolutely necessary requirement.\nSo from my point of view we should not add such limitations to GTT design.\n\n\n\n>>\n>> What are the reasons of using RowExclusiveLock for GTT instead of \n>> AccessExclusiveLock?\n>> Yes, GTT data is accessed only by one backend so no locking here seems \n>> to be needed at all.\n>> But I wonder what are the motivations/benefits of using weaker lock \n>> level here?\n> 1 Truncate GTT deletes only the data in the session, so no need use \n> high-level lock.\n> 2 I think it still needs to be block by DDL of GTT, which is why I use \n> RowExclusiveLock.\n\nSorry, I do not understand your arguments: we do not need an exclusive lock \nbecause we drop only local (private) data,\nbut we need some kind of lock. I agree with 1) and not 2).\n\n>\n>> There should be no conflicts in any case...\n>>\n>> +        /* We allow to create index on global temp table only this \n>> session use it */\n>> +        if (is_other_backend_use_gtt(heapRelation->rd_node))\n>> +            elog(ERROR, \"can not create index when have other \n>> backend attached this global temp table\");\n>> +\n>>\n>> The same argument as in case of dropping GTT: I do not think that \n>> prohibiting DDL operations on GTT used by more than one backend is \n>> bad idea.\n> The idea was to give the GTT almost all the features of a regular \n> table with few code changes.\n> The current version DBA can still do all DDL for GTT, I've already \n> described.\n\nI absolutely agree with you that GTT should be given the same features \nas regular tables.\nThe irony is that this most natural and convenient behavior is most easy \nto implement without putting some extra restrictions.\nJust let indexes for GTT be constructed on demand. It can be done \nusing the same function used for regular index creation.\n\n\n>\n>> +    /* global temp table not support foreign key constraint yet */\n>> +    if (RELATION_IS_GLOBAL_TEMP(pkrel))\n>> +        ereport(ERROR,\n>> +                (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n>> +                 errmsg(\"referenced relation \\\"%s\\\" is not a global \n>> temp table\",\n>> + RelationGetRelationName(pkrel))));\n>> +\n>>\n>> Why do we need to prohibit foreign key constraint on GTT?\n> It may be possible to support FK on GTT in later versions. Before \n> that, I need to check some code.\n\nOk, maybe the approach to prohibit everything except minimally required \nfunctionality is safe and reliable.\nBut frankly speaking I prefer a different approach: if I do not see any \ncontradictions of the new feature with existing operations\nand it is passing tests, then we should not prohibit these operations \nfor the new feature.\n\n\n> I have already described my point in previous emails.\n>\n> 1. 
The core problem is that the data contains transaction information \n> (xid), which needs to be vacuum(freeze) regularly to avoid running out \n> of xid.\n> The autovacuum supports vacuum regular table but local temp does not. \n> autovacuum also does not support GTT.\n>\n> 2. However, the difference between the local temp table and the global \n> temp table(GTT) is that\n> a) For local temp table: one table hava one piece of data. the \n> frozenxid of one local temp table is store in the catalog(pg_class).\n> b) For global temp table: each session has a separate copy of data, \n> one GTT may contain maxbackend frozenxid.\n> and I don't think it's a good idea to keep frozenxid of GTT in the \n> catalog(pg_class).\n> It becomes a question: how to handle GTT transaction information?\n>\n> I agree that problem 1 should be completely solved by a some feature, \n> such as local transactions. It is definitely not included in the GTT \n> patch.\n> But, I think we need to ensure the durability of GTT data. For \n> example, data in GTT cannot be lost due to the clog being cleaned up. 
\n> It belongs to problem 2.\n>\n> For problem 2\n> If we ignore the frozenxid of GTT, when vacuum truncates the clog that \n> GTT need, the GTT data in some sessions is completely lost.\n> Perhaps we could consider let autovacuum terminate those sessions that \n> contain \"too old\" data,\n> But It's not very friendly, so I didn't choose to implement it in the \n> first version.\n> Maybe you have a better idea.\n\nSorry, I do not have a better idea.\nI prefer not to address this problem in the first version of the patch at all.\nfrozen_xid of a temp table is never changed unless the user explicitly invokes \nvacuum on it.\nI do not think that anybody is doing it (because it actually contains \ntemporary data which is not expected to live a long time).\nCertainly it is possible to imagine a situation when a session uses GTT to \nstore some local data which is valid during the whole session life time (which \ncan be large enough).\nBut I am not sure that it is a popular scenario.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company
", "msg_date": "Mon, 27 Jan 2020 12:38:13 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "po 27. 1. 2020 v 10:11 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 24.01.2020 22:39, Pavel Stehule wrote:\n>\n> I cannot to evaluate your proposal, and I am sure, so you know more about\n> this code.\n>\n> There is a question if we can allow to build local temp index on global\n> temp table. It is different situation. When I work with global properties\n> personally I prefer total asynchronous implementation of any DDL operations\n> for other than current session. When it is true, then I have not any\n> objection. 
For me, good enough design of any DDL can be based on catalog\n> change without forcing to living tables.\n>\n>\n> From my point of view there are two difference uses cases of temp tables:\n> 1. Backend needs some private data source which is specific to this\n> session and has no relation with activities of other sessions.\n> 2. We need a table containing private session data, but which is used in\n> the same way by all database users.\n>\n> In the first case current Postgres temp tables works well (if we forget\n> for a moment about all known issues related with temp tables).\n> Global temp tables are used to address the second scenario. Assume that\n> we write some stored procedure or implement some business logic outside\n> database and\n> what to perform some complex analtic query which requires tepmp table for\n> storing intermediate results. In this case we can create GTT with all\n> needed index at the moment of database initialization\n> and do not perform any extra DDL during query execution. If will prevent\n> catalog bloating and makes execution of query more efficient.\n>\n> I do not see any reasons to allow build local indexes for global table.\n> Yes,it can happen that some session will have small amount of data in\n> particular GTT and another - small amount of data in this table. But if\n> access pattern is the same (and nature of GTT assumes it), then index in\n> either appreciate, either useless in both cases.\n>\n>\n>\n>\n> I see following disadvantage of your proposal. See scenario\n>\n> 1. I have two sessions\n>\n> A - small GTT with active owner\n> B - big GTT with some active application.\n>\n> session A will do new index - it is fast, but if creating index is forced\n> on B on demand (when B was touched), then this operation have to wait after\n> index will be created.\n>\n> So I afraid build a index on other sessions on GTT when GTT tables in\n> other sessions will not be empty.\n>\n>\n>\n> Yes, it is true. 
But it is not the most realistic scenario from my point\n> of view.\n> As I explained above, GTT should be used when we need temporary storage\n> accessed in the same way by all clients.\n> If (as with normal tables) at some moment of time DBA realizes, that\n> efficient execution of some queries needs extra indexes,\n> then it should be able to do it. It is very inconvenient and unnatural to\n> prohibit DBA to do it until all sessions using this GTT are closed (it may\n> never happen)\n> or require all sessions to restart to be able to use this index.\n>\n> So it is possible to imagine different scenarios of working with GTTs.\n> But from my point of view the only non-contradictory model of their\n> behavior is to make it compatible with normal tables.\n> And do not forget about compatibility with Oracle. Simplifying of porting\n> existed applications from Oracle to Postgres may be the\n> main motivation of adding GTT to Postgres. And making them incompatible\n> with Oracle will be very strange.\n>\n\nI don't think compatibility with Oracle is a valid point in this case. We\nneed GTT, but the mechanism of index building should be designed for\nPostgres, and for users.\n\nMaybe the method proposed by you can be activated by some option like\nCREATE INDEX IMMEDIATELY FOR ALL SESSION. 
When you use GTT without an index,\nthen\nit should work for some time more, and if you use short-lived sessions, then\nthe index build can be the last or almost last operation over the table and can be\nsuboptimal.\n\nAnyway, this behavior can be changed later without bigger complications - and\nnow I have a strong opinion to prefer not to allow any DDL (with index\ncreation) on any active GTT in other sessions.\nProbably your proposal - build indexes in other sessions when GTT is\ntouched - can share code with just modifying metadata and waiting on session reset\nor GTT reset.\n\nUsually it is not a hard problem to refresh sessions, and as far as I know, when\nyou update plpgsql code, it is best practice to refresh sessions early.\n\n\n\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>
", "msg_date": "Mon, 27 Jan 2020 20:44:13 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月24日 上午4:47,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> I proposed just ignoring those new indexes because it seems much simpler\n>> than alternative solutions that I can think of, and it's not like those\n>> other solutions don't have other issues.\n> \n> +1.\nI completed the implementation of this feature.\nWhen a session x creates an index idx_a on GTT A, then:\nFor session x, idx_a is valid after create index.\nFor session y, if GTT A already has some data before session x finishes creating the index, then idx_a is invalid.\nFor session z, if GTT A has no data before session x finishes creating the index, then idx_a is valid.\n\n> \n>> For example, I've looked at the \"on demand\" building as implemented in\n>> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n>> calls into various places in index code seems somewht suspicious.\n> \n> +1. 
I can't imagine that's a safe or sane thing to do.\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\nOpinion by Pavel\n+\trel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\nI renamed rd_islocaltemp\n\nOpinion by Konstantin Knizhnik\n1 Fixed comments\n2 Fixed assertion\n\n\nPlease help me review.\n\n\nWenjing", "msg_date": "Wed, 29 Jan 2020 00:01:17 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "út 28. 1. 2020 v 17:01 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> 2020年1月24日 上午4:47,Robert Haas <robertmhaas@gmail.com> 写道:\n>\n> On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>\n> I proposed just ignoring those new indexes because it seems much simpler\n> than alternative solutions that I can think of, and it's not like those\n> other solutions don't have other issues.\n>\n>\n> +1.\n>\n> I complete the implementation of this feature.\n> When a session x create an index idx_a on GTT A then\n> For session x, idx_a is valid when after create index.\n> For session y, before session x create index done, GTT A has some data,\n> then index_a is invalid.\n> For session z, before session x create index done, GTT A has no data,\n> then index_a is valid.\n>\n>\n> For example, I've looked at the \"on demand\" building as implemented in\n> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n> calls into various places in index code seems somewht suspicious.\n>\n>\n> +1. 
I can't imagine that's a safe or sane thing to do.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n> Opinion by Pavel\n> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of\n> field \"rd_islocaltemp\" is not probably best\n> I renamed rd_islocaltemp\n>\n\nI don't see any change?\n\n\n\n> Opinion by Konstantin Knizhnik\n> 1 Fixed comments\n> 2 Fixed assertion\n>\n>\n> Please help me review.\n>\n>\n> Wenjing\n>\n>
", "msg_date": "Tue, 28 Jan 2020 17:40:14 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月29日 上午12:40,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> út 28. 1. 2020 v 17:01 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n>> 2020年1月24日 上午4:47,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n>> \n>> On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> wrote:\n>>> I proposed just ignoring those new indexes because it seems much simpler\n>>> than alternative solutions that I can think of, and it's not like those\n>>> other solutions don't have other issues.\n>> \n>> +1.\n> I complete the implementation of this feature.\n> When a session x create an index idx_a on GTT A then\n> For session x, idx_a is valid when after create index.\n> For session y, before session x create index done, GTT A has some data, then index_a is invalid.\n> For session z, before session x create index done, GTT A has no data, then index_a is valid.\n> \n>> \n>>> For example, I've looked at the \"on demand\" building as implemented in\n>>> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n>>> calls into various places in index code seems somewht suspicious.\n>> \n>> +1. 
I can't imagine that's a safe or sane thing to do.\n>> \n>> -- \n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> The Enterprise PostgreSQL Company\n> \n> Opinion by Pavel\n> +\trel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\n> I renamed rd_islocaltemp\n> \n> I don't see any change?\nRename rd_islocaltemp to rd_istemp in global_temporary_table_v8-pg13.patch\n\n\nWenjing\n\n\n\n> \n> \n> \n> Opinion by Konstantin Knizhnik\n> 1 Fixed comments\n> 2 Fixed assertion\n> \n> \n> Please help me review.\n> \n> \n> Wenjing\n>", "msg_date": "Wed, 29 Jan 2020 01:12:01 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "út 28. 1. 2020 v 18:12 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> 2020年1月29日 上午12:40,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n>\n>\n> út 28. 1. 
2020 v 17:01 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>>\n>>\n>> 2020年1月24日 上午4:47,Robert Haas <robertmhaas@gmail.com> 写道:\n>>\n>> On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> I proposed just ignoring those new indexes because it seems much simpler\n>> than alternative solutions that I can think of, and it's not like those\n>> other solutions don't have other issues.\n>>\n>>\n>> +1.\n>>\n>> I complete the implementation of this feature.\n>> When a session x create an index idx_a on GTT A then\n>> For session x, idx_a is valid when after create index.\n>> For session y, before session x create index done, GTT A has some data,\n>> then index_a is invalid.\n>> For session z, before session x create index done, GTT A has no data,\n>> then index_a is valid.\n>>\n>>\n>> For example, I've looked at the \"on demand\" building as implemented in\n>> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n>> calls into various places in index code seems somewht suspicious.\n>>\n>>\n>> +1. I can't imagine that's a safe or sane thing to do.\n>>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>>\n>> Opinion by Pavel\n>> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of\n>> field \"rd_islocaltemp\" is not probably best\n>> I renamed rd_islocaltemp\n>>\n>\n> I don't see any change?\n>\n> Rename rd_islocaltemp to rd_istemp\n> in global_temporary_table_v8-pg13.patch\n>\n\nok :)\n\nPavel\n\n>\n>\n> Wenjing\n>\n>\n>\n>\n>\n>\n>> Opinion by Konstantin Knizhnik\n>> 1 Fixed comments\n>> 2 Fixed assertion\n>>\n>>\n>> Please help me review.\n>>\n>>\n>\n", "msg_date": "Tue, 28 Jan 2020 18:13:31 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "út 28. 1. 2020 v 18:13 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> út 28. 1. 2020 v 18:12 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>>\n>>\n>> 2020年1月29日 上午12:40,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>>\n>>\n>>\n>> út 28. 1. 
2020 v 17:01 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n>> napsal:\n>>\n>>>\n>>>\n>>> 2020年1月24日 上午4:47,Robert Haas <robertmhaas@gmail.com> 写道:\n>>>\n>>> On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n>>> <tomas.vondra@2ndquadrant.com> wrote:\n>>>\n>>> I proposed just ignoring those new indexes because it seems much simpler\n>>> than alternative solutions that I can think of, and it's not like those\n>>> other solutions don't have other issues.\n>>>\n>>>\n>>> +1.\n>>>\n>>> I complete the implementation of this feature.\n>>> When a session x create an index idx_a on GTT A then\n>>> For session x, idx_a is valid when after create index.\n>>> For session y, before session x create index done, GTT A has some data,\n>>> then index_a is invalid.\n>>> For session z, before session x create index done, GTT A has no data,\n>>> then index_a is valid.\n>>>\n>>>\n>>> For example, I've looked at the \"on demand\" building as implemented in\n>>> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n>>> calls into various places in index code seems somewht suspicious.\n>>>\n>>>\n>>> +1. 
I can't imagine that's a safe or sane thing to do.\n>>>\n>>> --\n>>> Robert Haas\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>> The Enterprise PostgreSQL Company\n>>>\n>>>\n>>> Opinion by Pavel\n>>> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name\n>>> of field \"rd_islocaltemp\" is not probably best\n>>> I renamed rd_islocaltemp\n>>>\n>>\n>> I don't see any change?\n>>\n>> Rename rd_islocaltemp to rd_istemp\n>> in global_temporary_table_v8-pg13.patch\n>>\n>\n> ok :)\n>\n\nI found a bug\n\npostgres=# create global temp table x(a int);\nCREATE TABLE\npostgres=# insert into x values(1);\nINSERT 0 1\npostgres=# create index on x (a);\nCREATE INDEX\npostgres=# create index on x((a + 1));\nCREATE INDEX\npostgres=# analyze x;\nWARNING: oid 16468 not a relation\nANALYZE\n\nother behave looks well for me.\n\nRegards\n\nPavel\n\n\n> Pavel\n>\n>>\n>>\n>> Wenjing\n>>\n>>\n>>\n>>\n>>\n>>\n>>> Opinion by Konstantin Knizhnik\n>>> 1 Fixed comments\n>>> 2 Fixed assertion\n>>>\n>>>\n>>> Please help me review.\n>>>\n>>>\n>>\n", "msg_date": "Tue, 28 Jan 2020 18:54:50 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 27.01.2020 22:44, Pavel 
Stehule wrote:\n>\n> I don't think so compatibility with Oracle is valid point in this \n> case. We need GTT, but the mechanism of index building should be \n> designed for Postgres, and for users.\n>\n> Maybe the method proposed by you can be activated by some option like \n> CREATE INDEX IMMEDIATELY FOR ALL SESSION. When you use GTT without \n> index, then\n> it should to work some time more, and if you use short life sessions, \n> then index build can be last or almost last operation over table and \n> can be suboptimal.\n>\n> Anyway, this behave can be changed later without bigger complications \n> - and now I am have strong opinion to prefer don't allow to any DDL \n> (with index creation) on any active GTT in other sessions.\n> Probably your proposal - build indexes on other sessions when GTT is \n> touched can share code with just modify metadata and wait on session \n> reset or GTT reset\n>\nWell, compatibility with Oracle was never treated as an important argument \nin this group :)\nBut I hope that you agree that it is a real argument against your proposal.\nA much more important argument is incompatibility with the behavior of \nregular tables.\nIf you propose such an incompatibility, then you should have some very \nstrong arguments for such behavior, which will definitely confuse users.\n\nBut I have heard only two arguments:\n\n1. Concurrent building of indexes by all backends may consume much \nmemory (n_backends * maintenance_work_mem) and consume a lot of disk/CPU \nresources.\n\nFirst of all, it is not completely true. Indexes will be created on \ndemand when the GTT is accessed, and the chance that all sessions will \nstart building indexes simultaneously is very small.\n\nBut what will happen if we prohibit access to this index for existing \nsessions? If we need an index for a GTT, then most likely it is used for joins.\nIf there is no index, then the optimizer has to choose some other plan to \nperform this join, for example a hash join. 
A hash join also requires \nmemory,\nso if all backends perform such a join simultaneously, then they \nconsume (n_backends * work_mem) memory.\nYes, work_mem is usually smaller than maintenance_work_mem. But in \nany case the DBA has a choice to adjust these parameters to avoid this problem.\nAnd in case of your proposal (prohibiting access to this index) you give \nhim no choice to optimize query execution in existing sessions.\n\nAlso, if all sessions simultaneously perform a sequential scan of the GTT \ninstead of building an index for it, then they will read the same amount of \ndata and consume comparable CPU time.\nSo prohibiting access to the indexes will not save us from high \nresource consumption if all existing sessions are really actively \nworking with this GTT.\n\n2. A GTT in one session can contain a large amount of data and we need an \nindex for it, but a small amount of data in another session and we do not \nneed an index for it.\n\nSuch a situation definitely can happen. But it contradicts the main \nassumption of the GTT use case (that it is accessed in the same way by all \nsessions).\nAlso, I might agree with this argument if you proposed to create indexes \nlocally for each session.\nBut your proposal is to prohibit access to the index for the sessions \nwhich have already populated the GTT with data but allow it for sessions \nwhich have not accessed this GTT yet.\nSo if some session stores some data in the GTT after the index was created, \nthen it will build the index for it, no matter whether the size of the table \nis small or large.\nWhy do we make an exception for sessions which already have data in the GTT \nin this case?\n\nSo from my point of view both arguments are doubtful and cannot explain \nwhy the rules of index usability for a GTT should be different from those \nfor regular tables.\n\n> Usually it is not hard problem to refresh sessions, and what I know \n> when you update plpgsql code, it is best practice to refresh session \n> early.\n>\n\nI know many systems where a session is 
established once a client is connected \nto the system and not closed until the client is disconnected.\nAnd any attempt to force termination of the session will cause \napplication errors which are not expected by the client.\n\n\nSorry, I think that this is a principal point in the discussion concerning \nGTT design.\nThe implementation of GTT can be changed in the future, but it is bad if \nthe behavior of GTT will be changed.\nIt is not clear to me why from the very beginning we should provide \ninconsistent behavior which is even more difficult to implement than \nbehavior compatible with regular tables.\nAnd say that in the future it can be changed...\n\nSorry, but I do not consider proposals to create indexes locally for \neach session (i.e. global tables but private indexes) or to use some \nspecial complicated SQL syntax constructions like\nCREATE INDEX IMMEDIATELY FOR ALL SESSION as real alternatives which \nhave to be discussed.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 29 Jan 2020 11:12:59 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月29日 上午1:54,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> út 28. 1. 2020 v 18:13 odesílatel Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> napsal:\n> \n> \n> út 28. 1. 2020 v 18:12 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n>> 2020年1月29日 上午12:40,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> \n>> \n>> út 28. 1. 
2020 v 17:01 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> \n>> \n>>> 2020年1月24日 上午4:47,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n>>> \n>>> On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n>>> <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> wrote:\n>>>> I proposed just ignoring those new indexes because it seems much simpler\n>>>> than alternative solutions that I can think of, and it's not like those\n>>>> other solutions don't have other issues.\n>>> \n>>> +1.\n>> I complete the implementation of this feature.\n>> When a session x create an index idx_a on GTT A then\n>> For session x, idx_a is valid when after create index.\n>> For session y, before session x create index done, GTT A has some data, then index_a is invalid.\n>> For session z, before session x create index done, GTT A has no data, then index_a is valid.\n>> \n>>> \n>>>> For example, I've looked at the \"on demand\" building as implemented in\n>>>> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n>>>> calls into various places in index code seems somewht suspicious.\n>>> \n>>> +1. 
I can't imagine that's a safe or sane thing to do.\n>>> \n>>> -- \n>>> Robert Haas\n>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>>> The Enterprise PostgreSQL Company\n>> \n>> Opinion by Pavel\n>> +\trel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\n>> I renamed rd_islocaltemp\n>> \n>> I don't see any change?\n> Rename rd_islocaltemp to rd_istemp in global_temporary_table_v8-pg13.patch\n> \n> ok :)\n> \n> I found a bug\n> \n> postgres=# create global temp table x(a int);\n> CREATE TABLE\n> postgres=# insert into x values(1);\n> INSERT 0 1\n> postgres=# create index on x (a);\n> CREATE INDEX\n> postgres=# create index on x((a + 1));\n> CREATE INDEX\n> postgres=# analyze x;\n> WARNING: oid 16468 not a relation\n> ANALYZE\nThanks for review.\n\nThe index expression need to store statistics on index, I missed it and I'll fix it later.\n\n\nWenjing\n\n> \n> other behave looks well for me.\n> \n> Regards\n> \n> Pavel\n> \n> \n> Pavel\n> \n> \n> Wenjing\n> \n> \n> \n> \n> \n> \n>> Opinion by Konstantin Knizhnik\n>> 1 Fixed comments\n>> 2 Fixed assertion\n>> \n>> \n>> Please help me review.\n>> \n>> \n> \n", "msg_date": "Wed, 29 Jan 2020 21:06:44 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" 
<wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Jan 27, 2020 at 4:11 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> I do not see any reasons to allow build local indexes for global table. Yes,it can happen that some session will have small amount of data in particular GTT and another - small amount of data in this table. But if access pattern is the same (and nature of GTT assumes it), then index in either appreciate, either useless in both cases.\n\nI agree. I think allowing different backends to have different indexes\nis overly complicated.\n\nRegarding another point that was raised, I think it's not a good idea\nto prohibit DDL on global temporary tables altogether. It should be\nfine to change things when no sessions are using the GTT. Going\nfurther and allowing changes when there are attached sessions would be\nnice, but I think we shouldn't try. Getting this feature committed is\ngoing to be a huge amount of work with even a minimal feature set;\ncomplicating the problem by adding what are essentially new\nDDL-concurrency features on top of the basic feature seems very much\nunwise.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Jan 2020 08:43:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n>> Opinion by Pavel\n>> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\n>> I renamed rd_islocaltemp\n>\n> I don't see any change?\n>\n> Rename rd_islocaltemp to rd_istemp in global_temporary_table_v8-pg13.patch\n\nIn view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\nthat this has approximately a 0% chance of 
being acceptable. If you're\nsetting a field in a way that is inconsistent with the current use of\nthe field, you're probably doing it wrong, because the field has an\nexisting purpose to which new code must conform. And if you're not\ndoing that, then you don't need to rename it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Jan 2020 08:48:26 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Jan 29, 2020 at 3:13 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> But I heard only two arguments:\n>\n> 1. Concurrent building of indexes by all backends may consume much memory (n_backends * maintenance_work_mem) and consume a lot of disk/CPU resources.\n> 2. GTT in one session can contains large amount of data and we need index for it, but small amount of data in another session and we do not need index for it.\n\nYou seem to be ignoring the fact that two committers told you this\nprobably wasn't safe.\n\nPerhaps your view is that those people made no argument, and therefore\nyou don't have to respond to it. But the onus is not on somebody else\nto tell you why a completely novel idea is not safe. The onus is on\nyou to analyze it in detail and prove that it is safe. What you need\nto show is that there is no code anywhere in the system which will be\nconfused by an index springing into existence at whatever time you're\ncreating it.\n\nOne problem is that there are various backend-local data structures in\nthe relcache, the planner, and the executor that remember information\nabout indexes, and that may not respond well to having more indexes\nshow up unexpectedly. On the one hand, they might crash; on the other\nhand, they might ignore the new index when they shouldn't. 
Another\nproblem is that the code which creates indexes might fail or misbehave\nwhen run in an environment different from the one in which it\ncurrently runs. I haven't really studied your code, so I don't know\nexactly what it does, but for example it would be really bad to try to\nbuild an index while holding a buffer lock, both because it might\ncause (low-probability) undetected deadlocks and also because it might\nblock another process that wants that buffer lock in a\nnon-interruptible wait state for a long time.\n\nNow, maybe you can make an argument that you only create indexes at\npoints in the query that are \"safe.\" But I am skeptical, because of\nthis example:\n\nrhaas=# create table foo (a int primary key, b text, c text, d text);\nCREATE TABLE\nrhaas=# create function blump() returns trigger as $$begin create\nindex on foo (b); return new; end$$ language plpgsql;\nCREATE FUNCTION\nrhaas=# create trigger thud before insert on foo execute function blump();\nCREATE TRIGGER\nrhaas=# insert into foo (a) select generate_series(1,10);\nERROR: cannot CREATE INDEX \"foo\" because it is being used by active\nqueries in this session\nCONTEXT: SQL statement \"create index on foo (b)\"\nPL/pgSQL function blump() line 1 at SQL statement\n\nThat prohibition is there for some reason. Someone did not just decide\nto arbitrarily prohibit it. A CREATE INDEX command run in that context\nwon't run afoul of many of the things that might be problems in other\nplaces -- e.g. there won't be a buffer lock held. Yet, despite the\nfact that a trigger context is safe for executing a wide variety of\nuser-defined code, this particular operation is not allowed here. That\nis the sort of thing that should worry you.\n\nAt any rate, even if this somehow were or could be made safe,\non-the-fly index creation is a feature that cannot and should not be\ncombined with a patch to implement global temporary tables. 
Surely, it\nwill require a lot of study and work to get the details right. And so\nwill GTT. As I said in the other email I wrote, this feature is hard\nenough without adding this kind of thing to it. There's a reason why I\nnever got around to implementing this ten years ago when I did\nunlogged tables; I was intending that to be a precursor to the GTT\nwork. I found that it was too hard and I gave up. I'm glad to see\npeople trying again, but the idea that we can afford to add in extra\nfeatures, or frankly that either of the dueling patches on this thread\nare close to committable, is just plain wrong.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Jan 2020 09:47:52 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 29.01.2020 17:47, Robert Haas wrote:\n> On Wed, Jan 29, 2020 at 3:13 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> But I heard only two arguments:\n>>\n>> 1. Concurrent building of indexes by all backends may consume much memory (n_backends * maintenance_work_mem) and consume a lot of disk/CPU resources.\n>> 2. GTT in one session can contains large amount of data and we need index for it, but small amount of data in another session and we do not need index for it.\n> You seem to be ignoring the fact that two committers told you this\n> probably wasn't safe.\n>\n> Perhaps your view is that those people made no argument, and therefore\n> you don't have to respond to it. But the onus is not on somebody else\n> to tell you why a completely novel idea is not safe. The onus is on\n> you to analyze it in detail and prove that it is safe. 
What you need\n> to show is that there is no code anywhere in the system which will be\n> confused by an index springing into existence at whatever time you're\n> creating it.\n>\n> One problem is that there are various backend-local data structures in\n> the relcache, the planner, and the executor that remember information\n> about indexes, and that may not respond well to having more indexes\n> show up unexpectedly. On the one hand, they might crash; on the other\n> hand, they might ignore the new index when they shouldn't. Another\n> problem is that the code which creates indexes might fail or misbehave\n> when run in an environment different from the one in which it\n> currently runs. I haven't really studied your code, so I don't know\n> exactly what it does, but for example it would be really bad to try to\n> build an index while holding a buffer lock, both because it might\n> cause (low-probability) undetected deadlocks and also because it might\n> block another process that wants that buffer lock in a\n> non-interruptible wait state for a long time.\n>\n> Now, maybe you can make an argument that you only create indexes at\n> points in the query that are \"safe.\" But I am skeptical, because of\n> this example:\n>\n> rhaas=# create table foo (a int primary key, b text, c text, d text);\n> CREATE TABLE\n> rhaas=# create function blump() returns trigger as $$begin create\n> index on foo (b); return new; end$$ language plpgsql;\n> CREATE FUNCTION\n> rhaas=# create trigger thud before insert on foo execute function blump();\n> CREATE TRIGGER\n> rhaas=# insert into foo (a) select generate_series(1,10);\n> ERROR: cannot CREATE INDEX \"foo\" because it is being used by active\n> queries in this session\n> CONTEXT: SQL statement \"create index on foo (b)\"\n> PL/pgSQL function blump() line 1 at SQL statement\n>\n> That prohibition is there for some reason. Someone did not just decide\n> to arbitrarily prohibit it. 
A CREATE INDEX command run in that context\n> won't run afoul of many of the things that might be problems in other\n> places -- e.g. there won't be a buffer lock held. Yet, despite the\n> fact that a trigger context is safe for executing a wide variety of\n> user-defined code, this particular operation is not allowed here. That\n> is the sort of thing that should worry you.\n>\n> At any rate, even if this somehow were or could be made safe,\n> on-the-fly index creation is a feature that cannot and should not be\n> combined with a patch to implement global temporary tables. Surely, it\n> will require a lot of study and work to get the details right. And so\n> will GTT. As I said in the other email I wrote, this feature is hard\n> enough without adding this kind of thing to it. There's a reason why I\n> never got around to implementing this ten years ago when I did\n> unlogged tables; I was intending that to be a precursor to the GTT\n> work. I found that it was too hard and I gave up. I'm glad to see\n> people trying again, but the idea that we can afford to add in extra\n> features, or frankly that either of the dueling patches on this thread\n> are close to committable, is just plain wrong.\n>\n\nSorry, I really didn't consider statements containing word \"probably\" as \narguments.\nBut I agree with you: it is task of developer of new feature to prove \nthat proposed approach is safe rather than of reviewers to demonstrate \nthat it is unsafe.\nCan I provide such proof now? I afraid that not.\nBut please consider two arguments:\n\n1. Index for GTT in any case has to be initialized on demand. In case of \nregular tables index is initialized at the moment of its creation. 
In \ncase of GTT it doesn't work.\nSo we should somehow detect that accessed index is not initialized and \nperform lazy initialization of the index.\nThe only difference with the approach proposed by Pavel  (allow index \nfor empty GTT but prohibit it for GTT filled with data) is whether we \nalso need to populate index with data or not.\nI can imagine that implicit initialization of index in read-only query \n(select) can be unsafe and cause some problems. I have not encountered \nsuch problems yet after performing many tests with GTTs, but certainly I \nhave not covered all possible scenarios (not sure that it is possible at \nall).\nBut I do not understand how populating  index with data can add some \nextra unsafety.\n\nSo I can not prove that building index for GTT on demand is safe, but it \nis not more unsafe than initialization of index on demand which is \nrequired in any case.\n\n2. Actually I do not propose some completely new approach. I try to \nprovide behavior with is compatible with regular tables.\nIf you create index for regular table, then it can be used in all \nsessions, right?\nAnd all \"various backend-local data structures in the relcache, the \nplanner, and the executor that remember information about indexes\"\nhave to be properly updated.  It is done using invalidation mechanism. \nThe same mechanism is used in case of DDL operations with GTT, because \nwe change system catalog.\n\nSo my point here is that creation index of GTT is almost the same as \ncreation of index for regular tables and the same mechanism will be used \nto provide correctness of this operation.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Wed, 29 Jan 2020 18:30:44 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "2. 
Actually I do not propose some completely new approach. I try to\n> provide behavior with is compatible with regular tables.\n> If you create index for regular table, then it can be used in all\n> sessions, right?\n>\n\nI don't understand to this point. Regular tables shares data, shares files.\nYou cannot to separate it. More - you have to uses relatively aggressive\nlocks to be this operation safe.\n\nNothing from these points are valid for GTT.\n\nRegards\n\nPavel\n\n\n> And all \"various backend-local data structures in the relcache, the\n> planner, and the executor that remember information about indexes\"\n> have to be properly updated. It is done using invalidation mechanism.\n> The same mechanism is used in case of DDL operations with GTT, because\n> we change system catalog.\n>\n> So my point here is that creation index of GTT is almost the same as\n> creation of index for regular tables and the same mechanism will be used\n> to provide correctness of this operation.\n>\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n", "msg_date": "Wed, 29 Jan 2020 18:08:35 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 29.01.2020 20:08, Pavel Stehule wrote:\n>\n>\n>\n> 2. Actually I do not propose some completely new approach. I\n> try to\n> provide behavior with is compatible with regular tables.\n> If you create index for regular table, then it can be used in\n> all\n> sessions, right?\n>\n>\n> I don't understand to this point. Regular tables shares data,\n> shares files. You cannot to separate it. More - you have to uses\n> relatively aggressive locks to be this operation safe.\n>\n> Nothing from these points are valid for GTT.\n\nGTT shares metadata.\nAs far as them are not sharing data, then GTT are safer than regular \ntable, aren't them?\n\"Safer\" means that we need less \"aggressive\" locks for them: we need to \nprotect only metadata, not data itself.\n\nMy point is that if we allow other sessions to access created indexes \nfor regular tables, then it will be not more complex to support it for GTT.\nActually \"not more complex\" in this case means \"no extra efforts are \nneeded\".\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 29 Jan 2020 20:21:20 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "st 29. 1. 2020 v 18:21 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 29.01.2020 20:08, Pavel Stehule wrote:\n>\n>\n>\n>\n> 2. Actually I do not propose some completely new approach. I try to\n>> provide behavior with is compatible with regular tables.\n>> If you create index for regular table, then it can be used in all\n>> sessions, right?\n>>\n>\n> I don't understand to this point. Regular tables shares data, shares\n> files. You cannot to separate it. 
More - you have to uses relatively\n> aggressive locks to be this operation safe.\n>\n> Nothing from these points are valid for GTT.\n>\n>\n> GTT shares metadata.\n> As far as them are not sharing data, then GTT are safer than regular\n> table, aren't them?\n> \"Safer\" means that we need less \"aggressive\" locks for them: we need to\n> protect only metadata, not data itself.\n>\n> My point is that if we allow other sessions to access created indexes for\n> regular tables, then it will be not more complex to support it for GTT.\n> Actually \"not more complex\" in this case means \"no extra efforts are\n> needed\".\n>\n\nIt is hard to say. I see a significant difference. When I do index on\nregular table, then I don't change a context of other processes. I have to\nwait for lock, and after I got a lock then other processes waiting.\n\nWith GTT, I don't want to wait for others - and other processes should\nbuild indexes inside - without expected sequence of operations. Maybe it\ncan have positive effect, but it can have negative effect too. In this case\nI prefer (in this moment) zero effect on other sessions. So I would to\nbuild index in my session and I don't would to wait for other sessions, and\nif it is possible other sessions doesn't need to interact or react on my\naction too. It should be independent what is possible. The most simple\nsolution is request on unique usage. I understand so it can be not too\npractical. Better is allow to usage GTT by other tables, but the changes\nare invisible in other sessions to session reset. It is minimalistic\nstrategy. It has not benefits for other sessions, but it has not negative\nimpacts too.\n\n\n\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n", "msg_date": "Wed, 29 Jan 2020 18:37:01 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Jan 29, 2020 at 10:30 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> But please consider two arguments:\n>\n> 1. Index for GTT in any case has to be initialized on demand. In case of\n> regular tables index is initialized at the moment of its creation. In\n> case of GTT it doesn't work.\n> So we should somehow detect that accessed index is not initialized and\n> perform lazy initialization of the index.\n> The only difference with the approach proposed by Pavel (allow index\n> for empty GTT but prohibit it for GTT filled with data) is whether we\n> also need to populate index with data or not.\n> I can imagine that implicit initialization of index in read-only query\n> (select) can be unsafe and cause some problems. I have not encountered\n> such problems yet after performing many tests with GTTs, but certainly I\n> have not covered all possible scenarios (not sure that it is possible at\n> all).\n> But I do not understand how populating index with data can add some\n> extra unsafety.\n>\n> So I can not prove that building index for GTT on demand is safe, but it\n> is not more unsafe than initialization of index on demand which is\n> required in any case.\n\nI think that the idea of calling ambuild() on the fly is not going to\nwork, because, again, I don't think that calling that from random\nplaces in the code is safe. What I expect we're going to need to do\nhere is model this on the approach used for unlogged tables. 
For an\nunlogged table, each table and index has an init fork which contains\nthe correct initial contents for that relation - which is nothing at\nall for a heap table, and a couple of boilerplate pages for an index.\nIn the case of an unlogged table, the init forks get copied over the\nmain forks after crash recovery, and then we have a brand-new, empty\nrelation with brand-new empty indexes which everyone can use. In the\ncase of global temporary tables, I think that we should do the same\nkind of copying, but at the time when the session first tries to\naccess the table. There is some fuzziness in my mind about what\nexactly constitutes accessing the table - it probably shouldn't be\nwhen the relcache entry is built, because that seems too early, but\nI'm not sure what is exactly right. In any event, it's easier to find\na context where copying some files on disk (that are certain not to be\nchanging) is safe than it is to find a context where index builds are\nsafe.\n\n> 2. Actually I do not propose some completely new approach. I try to\n> provide behavior with is compatible with regular tables.\n> If you create index for regular table, then it can be used in all\n> sessions, right?\n\nYes. :-)\n\n> And all \"various backend-local data structures in the relcache, the\n> planner, and the executor that remember information about indexes\"\n> have to be properly updated. It is done using invalidation mechanism.\n> The same mechanism is used in case of DDL operations with GTT, because\n> we change system catalog.\n\nI mean, that's not really a valid argument. 
Invalidations can only\ntake effect at certain points in the code, and the whole argument here\nis about which places in the code are safe for which operations, so\nthe fact that some things (like accepting invalidations) are safe at\nsome points in the code (like the places where we accept them) does\nnot prove that other things (like calling ambuild) are safe at other\npoints in the code (like wherever you are proposing to call it). In\nparticular, if you've got a relation open, there's currently no way\nfor another index to show up while you've still got that relation\nopen. That means that the planner and executor (which keep the\nrelevant relations open) don't ever have to worry about updating their\ndata structures, because it can never be necessary. It also means that\nany code anywhere in the system that keeps a lock on a relation can\ncount on the list of indexes for that relation staying the same until\nit releases the lock. In fact, it can hold on to pointers to data\nallocated by the relcache and count on those pointers being stable for\nas long as it holds the lock, and RelationClearRelation contain\nspecific code that aims to make sure that certain objects don't get\ndeallocated and reallocated at a different address precisely for that\nreason. That code, however, only works as long as nothing actually\nchanges. The upshot is that it's entirely possible for changing\ncatalog entries in one backend with an inadequate lock level -- or at\nunexpected point in the code -- to cause a core dump either in that\nbackend or in some other backend. This stuff is really subtle, and\nsuper-easy to screw up.\n\nI am speaking a bit generally here, because I haven't really studied\n*exactly* what might go wrong in the relcache, or elsewhere, as a\nresult of creating an index on the fly. However, I'm very sure that a\ngeneral appeal to invalidation messages is not sufficient to make\nsomething like what you want to do safe. 
Invalidation messages are a\ncomplex, ancient, under-documented, fragile system for solving a very\nspecific problem that is not the one you are hoping they'll solve\nhere. They could justifiably be called magic, but it's not the sort of\nmagic where the fairy godmother waves her wand and solves all of your\nproblems; it's more like the kind where you go explore the forbidden\nforest and are never seen or heard from again.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 29 Jan 2020 13:16:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 29.01.2020 20:37, Pavel Stehule wrote:\n>\n>\n> st 29. 1. 2020 v 18:21 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n>\n>\n> On 29.01.2020 20:08, Pavel Stehule wrote:\n>>\n>>\n>>\n>> 2. Actually I do not propose some completely new approach. I\n>> try to\n>> provide behavior with is compatible with regular tables.\n>> If you create index for regular table, then it can be used in\n>> all\n>> sessions, right?\n>>\n>>\n>> I don't understand to this point. Regular tables shares data,\n>> shares files. You cannot to separate it. More - you have to uses\n>> relatively aggressive locks to be this operation safe.\n>>\n>> Nothing from these points are valid for GTT.\n>\n> GTT shares metadata.\n> As far as them are not sharing data, then GTT are safer than\n> regular table, aren't them?\n> \"Safer\" means that we need less \"aggressive\" locks for them: we\n> need to protect only metadata, not data itself.\n>\n> My point is that if we allow other sessions to access created\n> indexes for regular tables, then it will be not more complex to\n> support it for GTT.\n> Actually \"not more complex\" in this case means \"no extra efforts\n> are needed\".\n>\n>\n> It is hard to say. 
I see a significant difference. When I do index on \n> regular table, then I don't change a context of other processes. I \n> have to wait for lock, and after I got a lock then other processes \n> waiting.\n>\n> With GTT, I don't want to wait for others - and other processes should \n> build indexes inside - without expected sequence of operations. Maybe \n> it can have positive effect, but it can have negative effect too. In \n> this case I prefer (in this moment) zero effect on other sessions. So \n> I would to build index in my session and I don't would to wait for \n> other sessions, and if it is possible other sessions doesn't need to \n> interact or react on my action too. It should be independent what is \n> possible. The most simple solution is request on unique usage. I \n> understand so it can be not too practical. Better is allow to usage \n> GTT by other tables, but the changes are invisible in other sessions \n> to session reset. It is minimalistic strategy. It has not benefits for \n> other sessions, but it has not negative impacts too.\n>\n\nBuilding regular index requires two kinds of lock:\n1. You have to lock pg_class to make changes in system catalog.\n2. You need to lock heap relation  to pervent concurrent updates while \nbuilding index.\n\nGTT requires 1)  but not 2).\nOnce backend inserts information about new index in system catalog, all \nother sessions may use it. pg_class lock prevents any race condition here.\nAnd building index itself doesn't affect any other backends.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 30 Jan 2020 11:45:35 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "čt 30. 1. 2020 v 9:45 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 29.01.2020 20:37, Pavel Stehule wrote:\n>\n>\n>\n>\n> st 29. 1. 2020 v 18:21 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>>\n>>\n>> On 29.01.2020 20:08, Pavel Stehule wrote:\n>>\n>>\n>>\n>>\n>> 2. Actually I do not propose some completely new approach. I try to\n>>> provide behavior with is compatible with regular tables.\n>>> If you create index for regular table, then it can be used in all\n>>> sessions, right?\n>>>\n>>\n>> I don't understand to this point. Regular tables shares data, shares\n>> files. You cannot to separate it. 
More - you have to uses relatively\n>> aggressive locks to be this operation safe.\n>>\n>> Nothing from these points are valid for GTT.\n>>\n>>\n>> GTT shares metadata.\n>> As far as them are not sharing data, then GTT are safer than regular\n>> table, aren't them?\n>> \"Safer\" means that we need less \"aggressive\" locks for them: we need to\n>> protect only metadata, not data itself.\n>>\n>> My point is that if we allow other sessions to access created indexes for\n>> regular tables, then it will be not more complex to support it for GTT.\n>> Actually \"not more complex\" in this case means \"no extra efforts are\n>> needed\".\n>>\n>\n> It is hard to say. I see a significant difference. When I do index on\n> regular table, then I don't change a context of other processes. I have to\n> wait for lock, and after I got a lock then other processes waiting.\n>\n> With GTT, I don't want to wait for others - and other processes should\n> build indexes inside - without expected sequence of operations. Maybe it\n> can have positive effect, but it can have negative effect too. In this case\n> I prefer (in this moment) zero effect on other sessions. So I would to\n> build index in my session and I don't would to wait for other sessions, and\n> if it is possible other sessions doesn't need to interact or react on my\n> action too. It should be independent what is possible. The most simple\n> solution is request on unique usage. I understand so it can be not too\n> practical. Better is allow to usage GTT by other tables, but the changes\n> are invisible in other sessions to session reset. It is minimalistic\n> strategy. It has not benefits for other sessions, but it has not negative\n> impacts too.\n>\n>\n> Building regular index requires two kinds of lock:\n> 1. You have to lock pg_class to make changes in system catalog.\n> 2. You need to lock heap relation to pervent concurrent updates while\n> building index.\n>\n> GTT requires 1) but not 2).\n> Once backend inserts information about new index in system catalog, all\n> other sessions may use it. pg_class lock prevents any race condition here.\n> And building index itself doesn't affect any other backends.\n>\n\nIt is true. The difference for GTT, so any other sessions have to build\nindex (in your proposal) as extra operation against original plan.\n\nPavel\n\n\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n", "msg_date": "Thu, 30 Jan 2020 10:23:52 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n
In any event, it's easier to find\n> a context where copying some files on disk (that are certain not to be\n> changing) is safe than it is to find a context where index builds are\n> safe.\n\nI do not think that approach used for unlogged tables is good for GTT.\nUnlogged tables have to be reinitialized only after server restart.\nGTT should be initialized by each backend on demand.\nIt seems to me that init fork is used for unlogged table because \nthe recovery process does not have enough context to be able to reinitialize \ntable and indexes.\nIt is much safer and simpler for recovery process just to copy files. \nBut GTT case is different. Heap and indexes can be easily initialized by \nbackend  using existed functions.\n\nApproach with just calling btbuild is much simpler than you propose with \ncreating extra forks and copying data from it.\nYou say that it not safe. But you have not explained why it is unsafe. \nYes, I agree that it is my responsibility to prove that it is safe.\nAnd as I already wrote, I can not provide such proof now. I will be \npleased if you or anybody else can help to convince that this approach \nis safe or demonstrate problems with this approach.\n\nCopying data from fork doesn't help to provide the same behavior of GTT \nindexes as regular indexes.\nAnd from my point of view compatibility with regular tables is most \nimportant point in GTT design.\nIf for some reasons it is not possible, then we should think about other \nsolutions.\nBut right now I do not know such problems. We have two working \nprototypes of GTT. Certainly it is not mean lack of problems with the \ncurrent implementations.\nBut I really like to receive more constructive critics rather than \"this \napproach is wrong because it is unsafe\".\n\n>> planner, and the executor that remember information about indexes\"\n>> have to be properly updated. 
It is done using invalidation mechanism.\n>> The same mechanism is used in case of DDL operations with GTT, because\n>> we change system catalog.\n> I mean, that's not really a valid argument. Invalidations can only\n> take effect at certain points in the code, and the whole argument here\n> is about which places in the code are safe for which operations, so\n> the fact that some things (like accepting invalidations) are safe at\n> some points in the code (like the places where we accept them) does\n> not prove that other things (like calling ambuild) are safe at other\n> points in the code (like wherever you are proposing to call it). In\n> particular, if you've got a relation open, there's currently no way\n> for another index to show up while you've still got that relation\n> open.\nThe same is true for GTT. Right now building GTT index also locks the \nrelation.\nIt may be not absolutely needed, because data of relation is local and \ncan not be changed by some other backend.\nBut I have not added some special handling of GTT here.\nMostly because I want to follow the same way as with regular indexes and \nprevent possible problems which as you mention can happen\nif we somehow changing locking policy.\n\n\n> That means that the planner and executor (which keep the\n> relevant relations open) don't ever have to worry about updating their\n> data structures, because it can never be necessary. It also means that\n> any code anywhere in the system that keeps a lock on a relation can\n> count on the list of indexes for that relation staying the same until\n> it releases the lock. In fact, it can hold on to pointers to data\n> allocated by the relcache and count on those pointers being stable for\n> as long as it holds the lock, and RelationClearRelation contain\n> specific code that aims to make sure that certain objects don't get\n> deallocated and reallocated at a different address precisely for that\n> reason. 
That code, however, only works as long as nothing actually\n> changes. The upshot is that it's entirely possible for changing\n> catalog entries in one backend with an inadequate lock level -- or at\n> unexpected point in the code -- to cause a core dump either in that\n> backend or in some other backend. This stuff is really subtle, and\n> super-easy to screw up.\n>\n> I am speaking a bit generally here, because I haven't really studied\n> *exactly* what might go wrong in the relcache, or elsewhere, as a\n> result of creating an index on the fly. However, I'm very sure that a\n> general appeal to invalidation messages is not sufficient to make\n> something like what you want to do safe. Invalidation messages are a\n> complex, ancient, under-documented, fragile system for solving a very\n> specific problem that is not the one you are hoping they'll solve\n> here. They could justifiably be called magic, but it's not the sort of\n> magic where the fairy godmother waves her wand and solves all of your\n> problems; it's more like the kind where you go explore the forbidden\n> forest and are never seen or heard from again.\n\nActually index is not created on the fly.\nIndex is created is usual way, by executing \"create index\" command.\nSo all  components of the Postgres (planner, executor,...) treat GTT \nindexes in the same way as regular indexes.\nLocking and invalidations policies are exactly the same for them.\nThe only difference is that content of GTT index is constructed  on \ndemand using private backend data.\nIs it safe or not? 
We are just reading data from local buffers/files and \nwriting them here.\nMay be I missed something but I do not see any unsafety here.\nThere are issues with updating statistic but them can be solved.\n\n-- \n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 30 Jan 2020 12:33:22 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 30.01.2020 12:23, Pavel Stehule wrote:\n>\n> Building regular index requires two kinds of lock:\n> 1. You have to lock pg_class to make changes in system catalog.\n> 2. You need to lock heap relation  to pervent concurrent updates\n> while building index.\n>\n> GTT requires 1)  but not 2).\n> Once backend inserts information about new index in system\n> catalog, all other sessions may use it. pg_class lock prevents any\n> race condition here.\n> And building index itself doesn't affect any other backends.\n>\n>\n> It is true. 
The difference for GTT, so any other sessions have to \n> build index (in your proposal) as extra operation against original plan.\n>\nWhat is \"index\"?\nFor most parts of Postgres it is just an entry in system catalog.\nAnd only executor deals with its particular implementation and content.\n\nMy point is that if we process GTT index metadata in the same way as \nregular index metadata,\nthen there will be no differences for the postgres between GTT and \nregular indexes.\nAnd we can provide the same behavior.\n\nConcerning actual content of the index - it is local to the backend and \nit is safe to construct it a t any point of time (on demand).\nIt depends only on private data and can not be somehow affected by other \nbackends (taken in account that we preserve locking policy of regular \ntables).\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company
", "msg_date": "Thu, 30 Jan 2020 12:44:39 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "čt 30. 1. 2020 v 10:44 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 30.01.2020 12:23, Pavel Stehule wrote:\n>\n>\n> Building regular index requires two kinds of lock:\n>> 1. You have to lock pg_class to make changes in system catalog.\n>> 2. You need to lock heap relation  to pervent concurrent updates while\n>> building index.\n>>\n>> GTT requires 1)  but not 2).\n>> Once backend inserts information about new index in system catalog, all\n>> other sessions may use it. pg_class lock prevents any race condition here.\n>> And building index itself doesn't affect any other backends.\n>>\n>\n> It is true. 
The difference for GTT, so any other sessions have to build\n> index (in your proposal) as extra operation against original plan.\n>\n> What is \"index\"?\n> For most parts of Postgres it is just an entry in system catalog.\n> And only executor deals with its particular implementation and content.\n>\n> My point is that if we process GTT index metadata in the same way as\n> regular index metadata,\n> then there will be no differences for the postgres between GTT and\n> regular indexes.\n> And we can provide the same behavior.\n>\n\nThere should be a difference - index on regular table is created by one\nprocess. Same thing is not possible on GTT. So there should be a difference\nevery time.\n\nYou can reduce some differences, but minimally me and Robert don't feel it\nwell. Starting a building index from routine, that is used for reading from\nbuffer doesn't look well. I can accept some stranges, but I need to have\nfeeling so it is necessary. I don't think so it is necessary in this case.\n\nRegards\n\nPavel\n\n\n> Concerning actual content of the index - it is local to the backend and\n> it is safe to construct it a t any point of time (on demand).\n> It depends only on private data and can not be somehow affected by other\n> backends (taken in account that we preserve locking policy of regular\n> tables).\n>\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>
", "msg_date": "Thu, 30 Jan 2020 10:52:56 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 30.01.2020 12:52, Pavel Stehule wrote:\n>\n>\n> čt 30. 1. 
2020 v 10:44 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n>\n>\n> On 30.01.2020 12:23, Pavel Stehule wrote:\n>>\n>> Building regular index requires two kinds of lock:\n>> 1. You have to lock pg_class to make changes in system catalog.\n>> 2. You need to lock heap relation  to pervent concurrent\n>> updates while building index.\n>>\n>> GTT requires 1)  but not 2).\n>> Once backend inserts information about new index in system\n>> catalog, all other sessions may use it. pg_class lock\n>> prevents any race condition here.\n>> And building index itself doesn't affect any other backends.\n>>\n>>\n>> It is true. The difference for GTT, so any other sessions have to\n>> build index (in your proposal) as extra operation against\n>> original plan.\n>>\n> What is \"index\"?\n> For most parts of Postgres it is just an entry in system catalog.\n> And only executor deals with its particular implementation and\n> content.\n>\n> My point is that if we process GTT index metadata in the same way\n> as regular index metadata,\n> then there will be no differences for the postgres between GTT and\n> regular indexes.\n> And we can provide the same behavior.\n>\n>\n> There should be a difference - index on regular table is created by \n> one process. Same thing is not possible on GTT. So there should be a \n> difference every time.\n\nMetadata of GTT index is also created by one process. And actual content \nof the index is not interesting for most parts of Postgres.\n\n>\n> You can reduce some differences, but minimally me and Robert don't \n> feel it well. Starting a building index from routine, that is used for \n> reading from buffer doesn't look well. I can accept some stranges, but \n> I need to have feeling so it is necessary. 
I don't think so it is \n> necessary in this case.\n\nSorry, but \"don't feel it well\", \"doesn't look well\" looks more \nlike literary criticism rather than code review;)\nYes, I agree that it is unnatural to call btindex from _bt_getbuf. But \nwhat can go wrong here?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company
", "msg_date": "Thu, 30 Jan 2020 13:02:18 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "čt 30. 1. 2020 v 11:02 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 30.01.2020 12:52, Pavel Stehule wrote:\n>\n>\n>\n> čt 30. 1. 2020 v 10:44 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>>\n>>\n>> On 30.01.2020 12:23, Pavel Stehule wrote:\n>>\n>>\n>> Building regular index requires two kinds of lock:\n>>> 1. You have to lock pg_class to make changes in system catalog.\n>>> 2. You need to lock heap relation  to pervent concurrent updates while\n>>> building index.\n>>>\n>>> GTT requires 1)  but not 2).\n>>> Once backend inserts information about new index in system catalog, all\n>>> other sessions may use it. pg_class lock prevents any race condition here.\n>>> And building index itself doesn't affect any other backends.\n>>>\n>>\n>> It is true. 
The difference for GTT, so any other sessions have to build\n>> index (in your proposal) as extra operation against original plan.\n>>\n>> What is \"index\"?\n>> For most parts of Postgres it is just an entry in system catalog.\n>> And only executor deals with its particular implementation and content.\n>>\n>> My point is that if we process GTT index metadata in the same way as\n>> regular index metadata,\n>> then there will be no differences for the postgres between GTT and\n>> regular indexes.\n>> And we can provide the same behavior.\n>>\n>\n> There should be a difference - index on regular table is created by one\n> process. Same thing is not possible on GTT. So there should be a difference\n> every time.\n>\n>\n> Metadata of GTT index is also created by one process. And actual content\n> of the index is not interesting for most parts of Postgres.\n>\n>\n> You can reduce some differences, but minimally me and Robert don't feel it\n> well. Starting a building index from routine, that is used for reading from\n> buffer doesn't look well. I can accept some stranges, but I need to have\n> feeling so it is necessary. I don't think so it is necessary in this case.\n>\n>\n> Sorry, but \"don't feel it well\", \"doesn't look well\" looks more like\n> literary criticism rather than code review;)\n>\n\nThe design is subjective. I am sure, so your solution can work, like mine,\nor any other. But I am not sure, so your solution is good for practical\nusage.\n\n\n> Yes, I agree that it is unnatural to call btindex from _bt_getbuf. But\n> what can go wrong here?\n>\n\ncreating index as side effect of table reading. Just the side effect too\nmuch big.\n\n\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>
", "msg_date": "Thu, 30 Jan 2020 11:10:35 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月29日 上午1:54,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> út 28. 1. 
2020 v 17:01 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> \n>> \n>>> 2020年1月24日 上午4:47,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n>>> \n>>> On Sat, Jan 11, 2020 at 8:51 PM Tomas Vondra\n>>> <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> wrote:\n>>>> I proposed just ignoring those new indexes because it seems much simpler\n>>>> than alternative solutions that I can think of, and it's not like those\n>>>> other solutions don't have other issues.\n>>> \n>>> +1.\n>> I complete the implementation of this feature.\n>> When a session x create an index idx_a on GTT A then\n>> For session x, idx_a is valid when after create index.\n>> For session y, before session x create index done, GTT A has some data, then index_a is invalid.\n>> For session z, before session x create index done, GTT A has no data, then index_a is valid.\n>> \n>>> \n>>>> For example, I've looked at the \"on demand\" building as implemented in\n>>>> global_private_temp-8.patch, I kinda doubt adding a bunch of index build\n>>>> calls into various places in index code seems somewht suspicious.\n>>> \n>>> +1. 
I can't imagine that's a safe or sane thing to do.\n>>> \n>>> -- \n>>> Robert Haas\n>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>>> The Enterprise PostgreSQL Company\n>> \n>> Opinion by Pavel\n>> +\trel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\n>> I renamed rd_islocaltemp\n>> \n>> I don't see any change?\n> Rename rd_islocaltemp to rd_istemp in global_temporary_table_v8-pg13.patch\n> \n> ok :)\n> \n> I found a bug\n> \n> postgres=# create global temp table x(a int);\n> CREATE TABLE\n> postgres=# insert into x values(1);\n> INSERT 0 1\n> postgres=# create index on x (a);\n> CREATE INDEX\n> postgres=# create index on x((a + 1));\n> CREATE INDEX\n> postgres=# analyze x;\n> WARNING: oid 16468 not a relation\n> ANALYZE\nThe bug has been fixed in the global_temporary_table_v9-pg13.patch\n\n\n\nWenjing\n\n\n\n\n> \n> other behave looks well for me.\n> \n> Regards\n> \n> Pavel\n> \n> \n> Pavel\n> \n> \n> Wenjing\n> \n> \n> \n>> \n>> \n>> \n>> Opinion by Konstantin Knizhnik\n>> 1 Fixed comments\n>> 2 Fixed assertion\n>> \n>> \n>> Please help me review.\n>> \n>> \n>> Wenjing\n>> \n>", "msg_date": "Thu, 30 Jan 2020 22:06:41 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月29日 下午9:48,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n>>> Opinion by Pavel\n>>> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\n>>> I renamed rd_islocaltemp\n>> \n>> I don't see any change?\n>> \n>> Rename rd_islocaltemp to rd_istemp in global_temporary_table_v8-pg13.patch\n> \n> In view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\n> that this has approximately a 0% chance of being 
acceptable. If you're\n> setting a field in a way that is inconsistent with the current use of\n> the field, you're probably doing it wrong, because the field has an\n> existing purpose to which new code must conform. And if you're not\n> doing that, then you don't need to rename it.\nThank you for pointing it out.\nI've rolled back the rename.\nBut I still need rd_localtemp to be true, The reason is that\n1 GTT The GTT needs to support DML in read-only transactions ,like local temp table.\n2 GTT does not need to hold the lock before modifying the index buffer ,also like local temp table.\n\nPlease give me feedback.\n\n\nWenjing\n\n\n\n\n\n\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company", "msg_date": "Thu, 30 Jan 2020 22:17:29 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "čt 30. 1. 2020 v 15:17 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> > 2020年1月29日 下午9:48,Robert Haas <robertmhaas@gmail.com> 写道:\n> >\n> > On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> wrote:\n> >>> Opinion by Pavel\n> >>> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name\n> of field \"rd_islocaltemp\" is not probably best\n> >>> I renamed rd_islocaltemp\n> >>\n> >> I don't see any change?\n> >>\n> >> Rename rd_islocaltemp to rd_istemp in\n> global_temporary_table_v8-pg13.patch\n> >\n> > In view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\n> > that this has approximately a 0% chance of being acceptable. If you're\n> > setting a field in a way that is inconsistent with the current use of\n> > the field, you're probably doing it wrong, because the field has an\n> > existing purpose to which new code must conform. 
And if you're not\n> > doing that, then you don't need to rename it.\n> Thank you for pointing it out.\n> I've rolled back the rename.\n> But I still need rd_localtemp to be true, The reason is that\n> 1 GTT The GTT needs to support DML in read-only transactions ,like local\n> temp table.\n> 2 GTT does not need to hold the lock before modifying the index buffer\n> ,also like local temp table.\n>\n> Please give me feedback.\n>\n\nmaybe some like\n\nrel->rd_globaltemp = true;\n\nand somewhere else\n\nif (rel->rd_localtemp || rel->rd_globaltemp)\n{\n ...\n}\n\n\n>\n> Wenjing\n>\n>\n>\n>\n> >\n> > --\n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com\n> > The Enterprise PostgreSQL Company\n>\n>\n\nčt 30. 1. 2020 v 15:17 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> napsal:\n\n> 2020年1月29日 下午9:48,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n>>> Opinion by Pavel\n>>> + rel->rd_islocaltemp = true;  <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\n>>> I renamed rd_islocaltemp\n>> \n>> I don't see any change?\n>> \n>> Rename rd_islocaltemp to rd_istemp  in global_temporary_table_v8-pg13.patch\n> \n> In view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\n> that this has approximately a 0% chance of being acceptable. If you're\n> setting a field in a way that is inconsistent with the current use of\n> the field, you're probably doing it wrong, because the field has an\n> existing purpose to which new code must conform. 
", "msg_date": "Thu, 30 Jan 2020 15:21:15 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Thu, Jan 30, 2020 at 4:33 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> On 29.01.2020 21:16, Robert Haas wrote:\n> > On Wed, Jan 29, 2020 at 10:30 AM Konstantin Knizhnik\n> > <k.knizhnik@postgrespro.ru> wrote:\n> >\n> > I think that the idea of calling ambuild() on the fly is not going to\n> > work, because, again, I don't think that calling that from random\n> > places in the code is safe.\n>\n> It is not a random place in the code.\n> Actually it is just one place - _bt_getbuf\n> Why it can be unsafe if it affects only private backends data?\n\nBecause, as I already said, not every operation is safe at every point\nin the code. This is true even when there's no concurrency involved.\nFor example, executing user-defined code is not safe while holding a\nbuffer lock, because the user-defined code might try to do something\nthat locks the same buffer, which would cause an undetected,\nuninterruptible deadlock.\n\n> But GTT case is different. Heap and indexes can be easily initialized by\n> backend using existed functions.\n\nThat would be nice if we could make it work.
Someone would need to\nshow, however, that it's safe.\n\n> You say that it not safe. But you have not explained why it is unsafe.\n> Yes, I agree that it is my responsibility to prove that it is safe.\n> And as I already wrote, I can not provide such proof now. I will be\n> pleased if you or anybody else can help to convince that this approach\n> is safe or demonstrate problems with this approach.\n\nThat's fair, but nobody's obliged to spend time on that.\n\n> But I really like to receive more constructive critics rather than \"this\n> approach is wrong because it is unsafe\".\n\nI'm sure, and that's probably valid. Equally, however, I'd like to\nreceive more analysis of why it is safe than \"I don't see anything\nwrong with it so it's probably fine.\" And I think that's pretty valid,\ntoo.\n\n> Actually index is not created on the fly.\n> Index is created is usual way, by executing \"create index\" command.\n> So all components of the Postgres (planner, executor,...) treat GTT\n> indexes in the same way as regular indexes.\n> Locking and invalidations policies are exactly the same for them.\n> The only difference is that content of GTT index is constructed on\n> demand using private backend data.\n> Is it safe or not? We are just reading data from local buffers/files and\n> writing them here.\n> May be I missed something but I do not see any unsafety here.\n> There are issues with updating statistic but them can be solved.\n\nBut that's not all you are doing. To build the index, you'll have to\nsort the data. To sort the data, you'll have to call btree support\nfunctions. 
Those functions can be user-defined, and can do complex\noperations like catalog access that depend on a good transaction\nstate, no buffer locks being held, and nothing already in progress\nwithin this backend that can get confused as a result of this\noperation.\n\nJust as a quick test, I tried doing this in _bt_getbuf:\n\n+ if (InterruptHoldoffCount != 0)\n+ elog(WARNING, \"INTERRUPTS ARE HELD OFF\");\n\nThat causes 103 out of 196 regression tests to fail, which means that\nit's pretty common to arrive in _bt_getbuf() with interrupts held off.\nAt the very least, that means that the index build would be\nuninterruptible, which already seems unacceptable. Probably, it means\nthat the calling code is already holding an LWLock, because that's\nnormally what causes HOLD_INTERRUPTS() to happen. And as I have\nalready explained, that is super-problematic, because of deadlock\nrisks, and because it risks putting other backends into\nnon-interruptible waits if they should happen to need the LWLock we're\nholding here.\n\nI really don't understand why the basic point here remains obscure. In\ngeneral, it tends to be unsafe to call high-level code from low-level\ncode, not just in PostgreSQL but in pretty much any program. Do you\nthink that we can safely add a GUC that executes a user-defined SQL\nquery every time an LWLock is acquired? If you do, why don't you try\nadding code to do that to LWLockAcquire and testing it out a little\nbit? Try making the SQL query do something like query pg_class, find a\ntable name that's not in use, and create a table by that name. Then\nrun the regression tests with the GUC set to run that query and see\nhow it goes. I always hate to say that things are \"obvious,\" because\nwhat's obvious to me may not be obvious to somebody else, but it is\nclear to me, at least, that this has no chance of working. 
Even though\nI can't say exactly what will break, or what will break first, I'm\nvery sure that a lot of things will break and that most of them are\nunfixable.\n\nNow, your idea is not quite as crazy as that, but it has the same\nbasic problem: you can't insert code into a low-level facility that\nuses a high level facility which may in turn use and depend on that\nvery same low-level facility to not be in the middle of an operation.\nIf you do, it's going to break somehow.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 31 Jan 2020 14:38:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月30日 下午10:21,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> čt 30. 1. 2020 v 15:17 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n> > 2020年1月29日 下午9:48,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n> > \n> > On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> >>> Opinion by Pavel\n> >>> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\n> >>> I renamed rd_islocaltemp\n> >> \n> >> I don't see any change?\n> >> \n> >> Rename rd_islocaltemp to rd_istemp in global_temporary_table_v8-pg13.patch\n> > \n> > In view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\n> > that this has approximately a 0% chance of being acceptable. If you're\n> > setting a field in a way that is inconsistent with the current use of\n> > the field, you're probably doing it wrong, because the field has an\n> > existing purpose to which new code must conform. 
And if you're not\n> > doing that, then you don't need to rename it.\n> Thank you for pointing it out.\n> I've rolled back the rename.\n> But I still need rd_localtemp to be true, The reason is that\n> 1 GTT The GTT needs to support DML in read-only transactions ,like local temp table.\n> 2 GTT does not need to hold the lock before modifying the index buffer ,also like local temp table.\n> \n> Please give me feedback.\n> \n> maybe some like\n> \n> rel->rd_globaltemp = true;\n> \n> and somewhere else\n> \n> if (rel->rd_localtemp || rel->rd_globaltemp)\n> {\n> ...\n> }\nI tried to optimize code in global_temporary_table_v10-pg13.patch\n\n\nPlease give me feedback.\n\nWenjing\n\n\n\n> \n> \n> \n> Wenjing\n> \n> \n> \n> \n> > \n> > -- \n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> > The Enterprise PostgreSQL Company\n>", "msg_date": "Sat, 01 Feb 2020 21:39:03 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年1月27日 下午5:38,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 25.01.2020 18:15, 曾文旌(义从) wrote:\n>> I wonder why do we need some special check for GTT here.\n>>> From my point of view cleanup at startup of local storage of temp tables should be performed in the same way for local and global temp tables.\n>> After oom kill, In autovacuum, the Isolated local temp table will be cleaned like orphan temporary tables. The definition of local temp table is deleted with the storage file. \n>> But GTT can not do that. 
So we have the this implementation in my patch.\n>> If you have other solutions, please let me know.\n>> \n> I wonder if it is possible that autovacuum or some other Postgres process is killed by OOM and postmaster is not noticing it can doens't restart Postgres instance?\n> as far as I know, crash of any process connected to Postgres shared memory (and autovacuum definitely has such connection) cause Postgres restart.\nPostmaster will not restart after oom happen, but the startup process will. GTT data files are cleaned up in the startup process.\n> \n> \n>> In my design\n>> 1 Because different sessions have different transaction information, I choose to store the transaction information of GTT in MyProc,not catalog.\n>> 2 About the XID wraparound problem, the reason is the design of the temp table storage(local temp table and global temp table) that makes it can not to do vacuum by autovacuum. \n>> It should be completely solve at the storage level.\n>> \n> \n> My point of view is that vacuuming of temp tables is common problem for local and global temp tables. 
\n> So it has to be addressed in the common way and so we should not try to fix this problem only for GTT.\nI think I agree with you this point.\nHowever, this does not mean that GTT transaction information stored in pg_class is correct.\nIf you keep it that way, like in global_private_temp-8.patch, It may cause data loss in GTT after aotuvauum.\n\n> \n> \n>> In fact, The dba can still complete the DDL of the GTT.\n>> I've provided a set of functions for this case.\n>> If the dba needs to modify a GTT A(or drop GTT or create index on GTT), he needs to do:\n>> 1 Use the pg_gtt_attached_pids view to list the pids for the session that is using the GTT A.\n>> 2 Use pg_terminate_backend(pid)terminate they except itself.\n>> 3 Do alter GTT A.\n>> \n> IMHO forced terminated of client sessions is not acceptable solution.\n> And it is not an absolutely necessary requirement.\n> So from my point of view we should not add such limitations to GTT design.\nThis limitation makes it possible for the GTT to do all the DDL.\nIMHO even oracle's GTT has similar limitations.\n\n> \n> \n> \n>>> \n>>> What are the reasons of using RowExclusiveLock for GTT instead of AccessExclusiveLock?\n>>> Yes, GTT data is access only by one backend so no locking here seems to be needed at all.\n>>> But I wonder what are the motivations/benefits of using weaker lock level here?\n>> 1 Truncate GTT deletes only the data in the session, so no need use high-level lock.\n>> 2 I think it still needs to be block by DDL of GTT, which is why I use RowExclusiveLock.\n> \n> Sorry, I do not understand your arguments: we do not need exclusive lock because we drop only local (private) data\n> but we need some kind of lock. 
I agree with 1) and not 2).\nYes, we don't need lock for private data, but metadata need.\n> \n>> \n>>> There should be no conflicts in any case...\n>>> \n>>> + /* We allow to create index on global temp table only this session use it */\n>>> + if (is_other_backend_use_gtt(heapRelation->rd_node))\n>>> + elog(ERROR, \"can not create index when have other backend attached this global temp table\");\n>>> +\n>>> \n>>> The same argument as in case of dropping GTT: I do not think that prohibiting DLL operations on GTT used by more than one backend is bad idea.\n>> The idea was to give the GTT almost all the features of a regular table with few code changes.\n>> The current version DBA can still do all DDL for GTT, I've already described.\n> \n> I absolutely agree with you that GTT should be given the same features as regular tables.\n> The irony is that this most natural and convenient behavior is most easy to implement without putting some extra restrictions.\n> Just let indexes for GTT be constructed on demand. It it can be done using the same function used for regular index creation.\nThe limitation on index creation have been improved in global_temporary_table_v10-pg13.patch.\n\n> \n> \n>> \n>>> \n>>> + /* global temp table not support foreign key constraint yet */\n>>> + if (RELATION_IS_GLOBAL_TEMP(pkrel))\n>>> + ereport(ERROR,\n>>> + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n>>> + errmsg(\"referenced relation \\\"%s\\\" is not a global temp table\",\n>>> + RelationGetRelationName(pkrel))));\n>>> +\n>>> \n>>> Why do we need to prohibit foreign key constraint on GTT?\n>> It may be possible to support FK on GTT in later versions. 
Before that, I need to check some code.\n> \n> Ok, may be approach to prohibit everything except minimally required functionality is safe and reliable.\n> But frankly speaking I prefer different approach: if I do not see any contradictions of new feature with existed operations \n> and it is passing tests, then we should not prohibit this operations for new feature.\n> \n> \n>> I have already described my point in previous emails.\n>> \n>> 1. The core problem is that the data contains transaction information (xid), which needs to be vacuum(freeze) regularly to avoid running out of xid.\n>> The autovacuum supports vacuum regular table but local temp does not. autovacuum also does not support GTT.\n>> \n>> 2. However, the difference between the local temp table and the global temp table(GTT) is that\n>> a) For local temp table: one table hava one piece of data. the frozenxid of one local temp table is store in the catalog(pg_class). \n>> b) For global temp table: each session has a separate copy of data, one GTT may contain maxbackend frozenxid.\n>> and I don't think it's a good idea to keep frozenxid of GTT in the catalog(pg_class). \n>> It becomes a question: how to handle GTT transaction information?\n>> \n>> I agree that problem 1 should be completely solved by a some feature, such as local transactions. It is definitely not included in the GTT patch.\n>> But, I think we need to ensure the durability of GTT data. For example, data in GTT cannot be lost due to the clog being cleaned up. 
It belongs to problem 2.\n>> \n>> For problem 2\n>> If we ignore the frozenxid of GTT, when vacuum truncates the clog that GTT need, the GTT data in some sessions is completely lost.\n>> Perhaps we could consider let aotuvacuum terminate those sessions that contain \"too old\" data, \n>> But It's not very friendly, so I didn't choose to implement it in the first version.\n>> Maybe you have a better idea.\n> \n> Sorry, I do not have better idea.\n> I prefer not to address this problem in first version of the patch at all. \n> fozen_xid of temp table is never changed unless user explicitly invoke vacuum on it.\n> I do not think that anybody is doing it (because it accentually contains temporary data which is not expected to live long time.\n> Certainly it is possible to imagine situation when session use GTT to store some local data which is valid during all session life time (which can be large enough).\n> But I am not sure that it is popular scenario.\nAs global_private_temp-8.patch, think about:\n1 session X tale several hours doing some statistical work with the GTT A, which generated some data using transaction 100, The work is not over.\n2 Then session Y vacuumed A, and the GTT's relfrozenxid (in pg_class) was updated to 1000 0000.\n3 Then the aotuvacuum happened, the clog before 1000 0000 was cleaned up.\n4 The data in session A could be lost due to missing clog, The analysis task failed.\n\nHowever This is likely to happen because you allowed the GTT do vacuum. \nAnd this is not a common problem, that not happen with local temp tables.\nI feel uneasy about leaving such a question. 
We can improve it.\n\n> \n> \n> \n> -- \n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com <http://www.postgrespro.com/>\n> The Russian Postgres Company 
It belongs to\n problem 2.\n\n\nFor problem 2\nIf we ignore the frozenxid of\n GTT, when vacuum truncates the clog that GTT need, the\n GTT data in some sessions is completely lost.\nPerhaps we could consider\n let aotuvacuum terminate those sessions that contain \"too\n old\" data, \nBut It's not very\n friendly, so I didn't choose to implement it in the first\n version.\nMaybe you have a better idea.\n\n\n\n\n Sorry, I do not have better idea.\n I prefer not to address this problem in first version of the patch\n at all. \n fozen_xid of temp table is never changed unless user explicitly\n invoke vacuum on it.\n I do not think that anybody is doing it (because it accentually\n contains temporary data which is not expected to live long time.\n Certainly it is possible to imagine situation when session use GTT\n to store some local data which is valid during all session life time\n (which can be large enough).\n But I am not sure that it is popular scenario.As global_private_temp-8.patch, think about:1 session X tale several hours doing some statistical work with the GTT A, which generated some data using transaction 100, The work is not over.2 Then session Y vacuumed A, and the GTT's relfrozenxid (in pg_class) was updated to 1000 0000.3 Then the aotuvacuum happened, the clog  before 1000 0000 was cleaned up.4 The data in session A could be lost due to missing clog, The analysis task failed.However This is likely to happen because you allowed the GTT do vacuum. And this is not a common problem, that not happen with local temp tables.I feel uneasy about leaving such a question. We can improve it.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 02 Feb 2020 00:14:44 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "so 1. 2. 
2020 v 14:39 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> 2020年1月30日 下午10:21,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n>\n>\n> čt 30. 1. 2020 v 15:17 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>>\n>>\n>> > 2020年1月29日 下午9:48,Robert Haas <robertmhaas@gmail.com> 写道:\n>> >\n>> > On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n>> wrote:\n>> >>> Opinion by Pavel\n>> >>> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the\n>> name of field \"rd_islocaltemp\" is not probably best\n>> >>> I renamed rd_islocaltemp\n>> >>\n>> >> I don't see any change?\n>> >>\n>> >> Rename rd_islocaltemp to rd_istemp in\n>> global_temporary_table_v8-pg13.patch\n>> >\n>> > In view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\n>> > that this has approximately a 0% chance of being acceptable. If you're\n>> > setting a field in a way that is inconsistent with the current use of\n>> > the field, you're probably doing it wrong, because the field has an\n>> > existing purpose to which new code must conform. 
And if you're not\n>> > doing that, then you don't need to rename it.\n>> Thank you for pointing it out.\n>> I've rolled back the rename.\n>> But I still need rd_localtemp to be true, The reason is that\n>> 1 GTT The GTT needs to support DML in read-only transactions ,like local\n>> temp table.\n>> 2 GTT does not need to hold the lock before modifying the index buffer\n>> ,also like local temp table.\n>>\n>> Please give me feedback.\n>>\n>\n> maybe some like\n>\n> rel->rd_globaltemp = true;\n>\n> and somewhere else\n>\n> if (rel->rd_localtemp || rel->rd_globaltemp)\n> {\n> ...\n> }\n>\n> I tried to optimize code in global_temporary_table_v10-pg13.patch\n>\n>\n> Please give me feedback.\n>\n\nI tested this patch and I have not any objections - from my user\nperspective it is work as I expect\n\n+#define RELATION_IS_TEMP(relation) \\\n+ ((relation)->rd_islocaltemp || \\\n+ (relation)->rd_rel->relpersistence == RELPERSISTENCE_GLOBAL_TEMP)\n\nIt looks little bit unbalanced\n\nmaybe is better to inject rd_isglobaltemp to relation structure\n\nand then\n\nit should to like\n\n+#define RELATION_IS_TEMP(relation) \\\n+ ((relation)->rd_islocaltemp || \\\n+ (relation)->rd_isglobaltemp))\n\nBut I have not idea if it helps in complex\n\n\n\n\n\n\n\n> Wenjing\n>\n>\n>\n>\n>\n>>\n>> Wenjing\n>>\n>>\n>>\n>>\n>> >\n>> > --\n>> > Robert Haas\n>> > EnterpriseDB: http://www.enterprisedb.com\n>> > The Enterprise PostgreSQL Company\n>>\n>>\n>
", "msg_date": "Sat, 1 Feb 2020 19:00:34 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 31.01.2020 22:38, Robert Haas wrote:\n> Now, your idea is not quite as crazy as that, but it has the same\n> basic problem: you can't insert code into a low-level facility that\n> uses a high level facility which may in turn use and depend on that\n> very same low-level facility to not be in the middle of an operation.\n> If you do, it's going to break somehow.\n>\n\nThank you for explanation.\nYou convinced me that building indexes from _bt_getbuf is not good idea.\nWhat do you think about idea to check and build indexes for GTT prior to \nquery execution?\n\nIn this case we do not need to patch code of all indexes - it can be \ndone just in one place.\nWe can use build function of access method to initialize index and \npopulate it with data.\n\nSo right now when building query execution plan, optimizer checks if \nindex is valid.\nIf index belongs to GTT, it an check that first page of the index is \ninitialized and if not - call build method for this index.\n\nIf building index during building query plan is not desirable, we can \njust construct list of indexes which should be checked and\nperform check itself and building indexes somewhere after building plan \nbut for execution of the query.\n\nDo you seem some problems with such approach?\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Mon, 3 Feb 2020 11:08:10 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 01.02.2020 19:14, 曾文旌(义从) wrote:\n>\n>\n>> 2020年1月27日
下午5:38,Konstantin Knizhnik <k.knizhnik@postgrespro.ru \n>> <mailto:k.knizhnik@postgrespro.ru>> 写道:\n>>\n>>\n>>\n>> On 25.01.2020 18:15, 曾文旌(义从) wrote:\n>>> I wonder why do we need some special check for GTT here.\n>>>> From my point of view cleanup at startup of local storage of temp \n>>>> tables should be performed in the same way for local and global \n>>>> temp tables.\n>>> After oom kill, In autovacuum, the Isolated local temp table will be \n>>> cleaned like orphan temporary tables. The definition of local temp \n>>> table is deleted with the storage file.\n>>> But GTT can not do that. So we have the this implementation in my patch.\n>>> If you have other solutions, please let me know.\n>>>\n>> I wonder if it is possible that autovacuum or some other Postgres \n>> process is killed by OOM and postmaster is not noticing it can \n>> doens't restart Postgres instance?\n>> as far as I know, crash of any process connected to Postgres shared \n>> memory (and autovacuum definitely has such connection) cause Postgres \n>> restart.\n> Postmaster will not restart after oom happen, but the startup process \n> will. 
GTT data files are cleaned up in the startup process.\n\nYes, exactly.\nBut it is still not clear to me why do we need some special handling for \nGTT?\nShared memory is reinitialized and storage of temporary tables is removed.\nIt is true for both local and global temp tables.\n\n>>\n>>\n>>> In my design\n>>> 1 Because different sessions have different transaction information, \n>>> I choose to store the transaction information of GTT in MyProc,not \n>>> catalog.\n>>> 2 About the XID wraparound problem, the reason is the design of the \n>>> temp table storage(local temp table and global temp table) that \n>>> makes it can not to do vacuum by autovacuum.\n>>> It should be completely solve at the storage level.\n>>>\n>>\n>> My point of view is that vacuuming of temp tables is common problem \n>> for local and global temp tables.\n>> So it has to be addressed in the common way and so we should not try \n>> to fix this problem only for GTT.\n> I think I agree with you this point.\n> However, this does not mean that GTT transaction information stored in \n> pg_class is correct.\n> If you keep it that way, like in global_private_temp-8.patch, It may \n> cause data loss in GTT after aotuvauum.\n\nIn my patch autovacuum is prohibited for GTT.\n\n> IMHO forced terminated of client sessions is not acceptable solution.\n>> And it is not an absolutely necessary requirement.\n>> So from my point of view we should not add such limitations to GTT \n>> design.\n> This limitation makes it possible for the GTT to do all the DDL.\n> IMHO even oracle's GTT has similar limitations.\n\nI have checked that Oracle is not preventing creation of index for GTT \nif there are some active sessions working with this table. 
And this \nindex becomes visible for all this sessions.\n\n\n> As global_private_temp-8.patch, think about:\n> 1 session X tale several hours doing some statistical work with the \n> GTT A, which generated some data using transaction 100, The work is \n> not over.\n> 2 Then session Y vacuumed A, and the GTT's relfrozenxid (in pg_class) \n> was updated to 1000 0000.\n> 3 Then the aotuvacuum happened, the clog  before 1000 0000 was cleaned up.\n> 4 The data in session A could be lost due to missing clog, The \n> analysis task failed.\n>\n> However This is likely to happen because you allowed the GTT do vacuum.\n> And this is not a common problem, that not happen with local temp tables.\n> I feel uneasy about leaving such a question. We can improve it.\n>\n\nMay be the easies solution is to prohibit explicit vacuum of GTT?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 3 Feb 2020 11:16:11 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月2日 上午2:00,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> so 1. 2. 2020 v 14:39 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n>> 2020年1月30日 下午10:21,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> \n>> \n>> čt 30. 1. 2020 v 15:17 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> \n>> \n>> > 2020年1月29日 下午9:48,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n>> > \n>> > On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>> >>> Opinion by Pavel\n>> >>> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\n>> >>> I renamed rd_islocaltemp\n>> >> \n>> >> I don't see any change?\n>> >> \n>> >> Rename rd_islocaltemp to rd_istemp in global_temporary_table_v8-pg13.patch\n>> > \n>> > In view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\n>> > that this has approximately a 0% chance of being acceptable. If you're\n>> > setting a field in a way that is inconsistent with the current use of\n>> > the field, you're probably doing it wrong, because the field has an\n>> > existing purpose to which new code must conform. 
And if you're not\n>> > doing that, then you don't need to rename it.\n>> Thank you for pointing it out.\n>> I've rolled back the rename.\n>> But I still need rd_localtemp to be true, The reason is that\n>> 1 GTT The GTT needs to support DML in read-only transactions ,like local temp table.\n>> 2 GTT does not need to hold the lock before modifying the index buffer ,also like local temp table.\n>> \n>> Please give me feedback.\n>> \n>> maybe some like\n>> \n>> rel->rd_globaltemp = true;\n>> \n>> and somewhere else\n>> \n>> if (rel->rd_localtemp || rel->rd_globaltemp)\n>> {\n>> ...\n>> }\n> I tried to optimize code in global_temporary_table_v10-pg13.patch\n> \n> \n> Please give me feedback.\n> \n> I tested this patch and I have not any objections - from my user perspective it is work as I expect\n> \n> +#define RELATION_IS_TEMP(relation) \\\n> +\t((relation)->rd_islocaltemp || \\\n> +\t(relation)->rd_rel->relpersistence == RELPERSISTENCE_GLOBAL_TEMP)\n> \n> It looks little bit unbalanced\n> \n> maybe is better to inject rd_isglobaltemp to relation structure\n> \n> and then\n> \n> it should to like \n> \n> +#define RELATION_IS_TEMP(relation) \\\n> +\t((relation)->rd_islocaltemp || \\\n> +\t(relation)->rd_isglobaltemp))\n> \n> But I have not idea if it helps in complex\nIn my opinion\nFor local temp table we need (relation)->rd_rel->relpersistence == RELPERSISTENCE_TEMP \nand because one local temp table belongs to only one session, need to mark one sessions rd_islocaltemp = true ,and other to rd_islocaltemp = false.\n\nBut For GTT, just need (relation)->rd_rel->relpersistence == RELPERSISTENCE_GLOBAL_GLOBAL_TEMP\nOne GTT can be used for every session, so no need rd_isglobaltemp anymore. 
This seems duplicated and redundant.\n\n> \n> \n> \n> \n> \n> \n> Wenjing\n> \n> \n> \n>> \n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> > \n>> > -- \n>> > Robert Haas\n>> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> > The Enterprise PostgreSQL Company\n>> \n> ", "msg_date": "Mon, 03 Feb 2020 21:03:11 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "po 3. 2.
2020 v 14:03 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\nnapsal:\n\n>\n>\n> 2020年2月2日 上午2:00,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n>\n>\n> so 1. 2. 2020 v 14:39 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>>\n>>\n>> 2020年1月30日 下午10:21,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>>\n>>\n>>\n>> čt 30. 1. 2020 v 15:17 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n>> napsal:\n>>\n>>>\n>>>\n>>> > 2020年1月29日 下午9:48,Robert Haas <robertmhaas@gmail.com> 写道:\n>>> >\n>>> > On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n>>> wrote:\n>>> >>> Opinion by Pavel\n>>> >>> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the\n>>> name of field \"rd_islocaltemp\" is not probably best\n>>> >>> I renamed rd_islocaltemp\n>>> >>\n>>> >> I don't see any change?\n>>> >>\n>>> >> Rename rd_islocaltemp to rd_istemp in\n>>> global_temporary_table_v8-pg13.patch\n>>> >\n>>> > In view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\n>>> > that this has approximately a 0% chance of being acceptable. If you're\n>>> > setting a field in a way that is inconsistent with the current use of\n>>> > the field, you're probably doing it wrong, because the field has an\n>>> > existing purpose to which new code must conform. 
And if you're not\n>>> > doing that, then you don't need to rename it.\n>>> Thank you for pointing it out.\n>>> I've rolled back the rename.\n>>> But I still need rd_localtemp to be true, The reason is that\n>>> 1 GTT The GTT needs to support DML in read-only transactions ,like local\n>>> temp table.\n>>> 2 GTT does not need to hold the lock before modifying the index buffer\n>>> ,also like local temp table.\n>>>\n>>> Please give me feedback.\n>>>\n>>\n>> maybe some like\n>>\n>> rel->rd_globaltemp = true;\n>>\n>> and somewhere else\n>>\n>> if (rel->rd_localtemp || rel->rd_globaltemp)\n>> {\n>> ...\n>> }\n>>\n>> I tried to optimize code in global_temporary_table_v10-pg13.patch\n>>\n>>\n>> Please give me feedback.\n>>\n>\n> I tested this patch and I have not any objections - from my user\n> perspective it is work as I expect\n>\n> +#define RELATION_IS_TEMP(relation) \\\n> + ((relation)->rd_islocaltemp || \\\n> + (relation)->rd_rel->relpersistence == RELPERSISTENCE_GLOBAL_TEMP)\n>\n> It looks little bit unbalanced\n>\n> maybe is better to inject rd_isglobaltemp to relation structure\n>\n> and then\n>\n> it should to like\n>\n> +#define RELATION_IS_TEMP(relation) \\\n> + ((relation)->rd_islocaltemp || \\\n> + (relation)->rd_isglobaltemp))\n>\n> But I have not idea if it helps in complex\n>\n> In my opinion\n> For local temp table we need (relation)->rd_rel->relpersistence ==\n> RELPERSISTENCE_TEMP\n> and because one local temp table belongs to only one session, need to mark\n> one sessions rd_islocaltemp = true ,and other to rd_islocaltemp = false.\n>\n\nso it means so table is assigned to current session or not. 
In this moment\nI think so name \"islocaltemp\" is not the best - because there can be \"local\ntemp table\" that has this value false.\n\nThe name should be better describe so this table as attached to only\ncurrent session, and not for other, or is accessed by all session.\n\nIn this case I can understand why for GTT is possible to write\n..rd_islocaltemp = true. But is signal so rd_islocaltemp is not good name\nand rd_istemp is not good name too.\n\n\n\n> But For GTT, just need (relation)->rd_rel->relpersistence ==\n> RELPERSISTENCE_GLOBAL_GLOBAL_TEMP\n> One GTT can be used for every session, so no need rd_isglobaltemp\n> anymore. This seems duplicated and redundant.\n>\n\nI didn't understand well the sematic of rd_islocaltemp so my ideas in this\ntopics was not good. Now I think so rd_islocalname is not good name and can\nbe renamed if some body find better name. \"istemptable\" is not good too,\nbecause there is important if relation is attached to the session or not.\n\n\n\n>\n>\n>\n>\n>\n>\n>\n>> Wenjing\n>>\n>>\n>>\n>>\n>>\n>>>\n>>> Wenjing\n>>>\n>>>\n>>>\n>>>\n>>> >\n>>> > --\n>>> > Robert Haas\n>>> > EnterpriseDB: http://www.enterprisedb.com\n>>> > The Enterprise PostgreSQL Company\n>>>\n>>>\n>>\n>\n", "msg_date": "Mon, 3 Feb 2020 20:46:00 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月3日 下午4:16,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 01.02.2020 19:14, 曾文旌(义从) wrote:\n>> \n>> \n>>> 2020年1月27日 下午5:38,Konstantin Knizhnik <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> 写道:\n>>> \n>>> \n>>> \n>>> On 25.01.2020 18:15, 曾文旌(义从) wrote:\n>>>> I wonder why do we need some special check for GTT here.\n>>>>> From my point of view cleanup at startup of local storage of temp tables should be performed in the same way for local and global temp tables.\n>>>> After oom kill, In autovacuum, the Isolated local temp table will be cleaned like orphan temporary tables. The definition of local temp table is deleted with the storage file. \n>>>> But GTT can not do that. So we have the this implementation in my patch.\n>>>> If you have other solutions, please let me know.\n>>>> \n>>> I wonder if it is possible that autovacuum or some other Postgres process is killed by OOM and postmaster is not noticing it can doens't restart Postgres instance?\n>>> as far as I know, crash of any process connected to Postgres shared memory (and autovacuum definitely has such connection) cause Postgres restart.\n>> Postmaster will not restart after oom happen, but the startup process will. GTT data files are cleaned up in the startup process.\n> \n> Yes, exactly.\n> But it is still not clear to me why do we need some special handling for GTT?\n> Shared memory is reinitialized and storage of temporary tables is removed.\n> It is true for both local and global temp tables.\nOf course not. The local temp table cleans up the entire table (including catalog buffer and datafile). 
GTT is not.\n\n> \n>>> \n>>> \n>>>> In my design\n>>>> 1 Because different sessions have different transaction information, I choose to store the transaction information of GTT in MyProc,not catalog.\n>>>> 2 About the XID wraparound problem, the reason is the design of the temp table storage(local temp table and global temp table) that makes it can not to do vacuum by autovacuum. \n>>>> It should be completely solve at the storage level.\n>>>> \n>>> \n>>> My point of view is that vacuuming of temp tables is common problem for local and global temp tables. \n>>> So it has to be addressed in the common way and so we should not try to fix this problem only for GTT.\n>> I think I agree with you this point.\n>> However, this does not mean that GTT transaction information stored in pg_class is correct.\n>> If you keep it that way, like in global_private_temp-8.patch, It may cause data loss in GTT after aotuvauum.\n> \n> In my patch autovacuum is prohibited for GTT.\nBut vacuum GTT is not prohibited. \n\n> \n>> IMHO forced terminated of client sessions is not acceptable solution.\n>>> And it is not an absolutely necessary requirement.\n>>> So from my point of view we should not add such limitations to GTT design.\n>> This limitation makes it possible for the GTT to do all the DDL.\n>> IMHO even oracle's GTT has similar limitations.\n> \n> I have checked that Oracle is not preventing creation of index for GTT if there are some active sessions working with this table. 
And this index becomes visible for all this sessions.\n1 Yes The creation of inde gtt has been improved in global_temporary_table_v10-pg13.patch\n2 But alter GTT ; drop GTT ; drop index on GTT is blocked by other sessions\n\nSQL> drop table gtt;\ndrop table gtt\n *\nERROR at line 1:\nORA-14452: attempt to create, alter or drop an index on temporary table already\nin use\n\n\nSQL> ALTER TABLE gtt add b int ; \nALTER TABLE gtt add b int\n*\nERROR at line 1:\nORA-14450: attempt to access a transactional temp table already in use\n\nSQL> drop index idx_gtt;\ndrop index idx_gtt\n *\nERROR at line 1:\nORA-14452: attempt to create, alter or drop an index on temporary table already\nin use\n\nI'm not saying we should do this, but from an implementation perspective we face similar issues.\nIf a dba changes a GTT, he can do it. Therefore, I think it is acceptable to do so.\n\n> \n> \n>> As global_private_temp-8.patch, think about:\n>> 1 session X tale several hours doing some statistical work with the GTT A, which generated some data using transaction 100, The work is not over.\n>> 2 Then session Y vacuumed A, and the GTT's relfrozenxid (in pg_class) was updated to 1000 0000.\n>> 3 Then the aotuvacuum happened, the clog before 1000 0000 was cleaned up.\n>> 4 The data in session A could be lost due to missing clog, The analysis task failed.\n>> \n>> However This is likely to happen because you allowed the GTT do vacuum. \n>> And this is not a common problem, that not happen with local temp tables.\n>> I feel uneasy about leaving such a question. 
We can\n improve it.\n>> \n> \n> May be the easies solution is to prohibit explicit vacuum of GTT?\nI think vacuum is an important part of GTT.\n\nLooking back at previous emails, robert once said that vacuum GTT is pretty important.\nhttps://www.postgresql.org/message-id/CA%2BTgmob%3DL1k0cpXRcipdsaE07ok%2BOn%3DtTjRiw7FtD_D2T%3DJwhg%40mail.gmail.com <https://www.postgresql.org/message-id/CA+Tgmob=L1k0cpXRcipdsaE07ok+On=tTjRiw7FtD_D2T=Jwhg@mail.gmail.com>\n\n> \n> -- \n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com <http://www.postgrespro.com/>\n> The Russian Postgres Company ", "msg_date": "Tue, 04 Feb 2020 23:01:37 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 04.02.2020 18:01, 曾文旌(义从) wrote:\n>\n>\n>>\n>> Yes, exactly.\n>> But it is still not clear to me why do we need some special handling \n>> for GTT?\n>> Shared memory is reinitialized and storage of temporary tables is \n>> removed.\n>> It is true for both local and global temp tables.\n> Of course not. The local temp table cleans up the entire table \n> (including catalog buffer and datafile). GTT is not.\n>\n\nWhat do you mean by \"catalog buffer\"?\nYes, cleanup of local temp table requires deletion of correspondent \nentry from catalog and GTT should not do it.\nBut  I am speaking only about cleanup of data files of temp relations.
\nIt is done in the same way for local and global temp tables.\n\n\n>> In my patch autovacuum is prohibited for GTT.\n> But vacuum GTT is not prohibited.\n>\nYes, but the simplest solution is to prohibit also explicit vacuum of \nGTT, isn't it?\n\n>>\n>>> IMHO forced terminated of client sessions is not acceptable solution.\n>>>> And it is not an absolutely necessary requirement.\n>>>> So from my point of view we should not add such limitations to GTT \n>>>> design.\n>>> This limitation makes it possible for the GTT to do all the DDL.\n>>> IMHO even oracle's GTT has similar limitations.\n>>\n>> I have checked that Oracle is not preventing creation of index for \n>> GTT if there are some active sessions working with this table. And \n>> this index becomes visible for all this sessions.\n> 1 Yes The creation of inde gtt has been improved \n> in global_temporary_table_v10-pg13.patch\n> 2 But alter GTT ; drop GTT ; drop index on GTT is blocked by other \n> sessions\n>\nYes, you are right.\nOrale documetation says:\n >  1) DDL operation on global temporary tables\n\n > It is not possible to perform a DDL operation (except |TRUNCATE \n<https://www.oracletutorial.com/oracle-basics/oracle-truncate-table/>|) \non an existing global temporary table if one or more sessions are \ncurrently bound to that table.\n\nBut looks like create index is not considered as DDL operation on GTT \nand is also supported by Oracle.\n\nYour approach with prohibiting such accessed using shared cache is \ncertainly better then my attempt to prohibit such DDLs for GTT at all.\nI just what to eliminate maintenance of such shared cache to simplify \nthe patch.\n\nBut I still think that we should allow truncation of GTT and \ncreating/dropping indexes on it without any limitations.\n>>\n>> May be the easies solution is to prohibit explicit vacuum of GTT?\n> I think vacuum is an important part of GTT.\n>\n> Looking back at previous emails, robert once said that vacuum GTT is \n> pretty important.\n> 
https://www.postgresql.org/message-id/CA%2BTgmob%3DL1k0cpXRcipdsaE07ok%2BOn%3DtTjRiw7FtD_D2T%3DJwhg%40mail.gmail.com \n> <https://www.postgresql.org/message-id/CA+Tgmob=L1k0cpXRcipdsaE07ok+On=tTjRiw7FtD_D2T=Jwhg@mail.gmail.com>\n>\n\nWell, may be I am not right.\nI never saw use cases where temp table are used not like append-only \nstorage (when temp table tuples are updated multiple times).\nBut I think that if such problem actually exists then solution is to \nsupport autovacuum for temp tables, rather than allow manual vacuum.\nCertainly it can not be done by another worker because it has no access \nto private backend's data. But it can done incrementally by backend itself.", "msg_date": "Tue, 4 Feb 2020 19:47:47 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Sat, Feb 1, 2020 at 11:14 AM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n> As global_private_temp-8.patch, think about:\n> 1 session X tale several hours doing some statistical work with the GTT A, which generated some data using transaction 100, The work is not over.\n> 2 Then session Y vacuumed A, and the GTT's relfrozenxid (in pg_class) was updated to 1000 0000.\n> 3 Then the aotuvacuum happened, the clog before 1000 0000 was cleaned up.\n> 4 The data in session A could be lost due to missing clog, The analysis task failed.\n>\n> However This is likely to happen because you allowed the GTT do vacuum.\n> And this is not a common problem, that not happen with local temp tables.\n> I feel uneasy about leaving such a question. 
We can improve it.\n\nEach session is going to need to maintain its own notion of the\nrelfrozenxid and relminmxid of each GTT to which it is attached.\nStoring the values in pg_class makes no sense and is completely\nunacceptable.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 4 Feb 2020 15:57:55 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Feb 3, 2020 at 3:08 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> Thank you for explanation.\n> You convinced me that building indexes from _bt_getbuf is not good idea.\n> What do you think about idea to check and build indexes for GTT prior to\n> query execution?\n>\n> In this case we do not need to patch code of all indexes - it can be\n> done just in one place.\n> We can use build function of access method to initialize index and\n> populate it with data.\n>\n> So right now when building query execution plan, optimizer checks if\n> index is valid.\n> If index belongs to GTT, it an check that first page of the index is\n> initialized and if not - call build method for this index.\n>\n> If building index during building query plan is not desirable, we can\n> just construct list of indexes which should be checked and\n> perform check itself and building indexes somewhere after building plan\n> but for execution of the query.\n>\n> Do you seem some problems with such approach?\n\nMy guess is that the right time to do this work is just after we\nacquire locks, at the end of parse analysis. I think trying to do it\nduring execution is too late, since the planner looks at indexes, and\ntrying to do it in the planner instead of before we start planning\nseems more likely to cause bugs and has no real advantages. 
It's just\nbetter to do complicated things (like creating indexes) separately\nrather than in the middle of some other complicated thing (like\nplanning). I could tie my shoelaces the first time they get tangled up\nwith my brake pedal but it's better to do it before I get in the car.\n\nAnd I'm still inclined to do it by flat-copying files rather than\ncalling ambuild. It will be slightly faster, but also importantly, it\nwill guarantee that (1) every backend gets exactly the same initial\nstate and (2) it has fewer ways to fail because it doesn't involve\ncalling any user-defined code. Those seem like fairly compelling\nadvantages, and I don't see what the disadvantages are. I think\ncalling ambuild() at the point in time proposed in the preceding\nparagraph would be fairly safe and would probably work OK most of the\ntime, but I can't think of any reason it would be better.\n\nIncidentally, what I'd be inclined to do is - if the session is\nrunning a query that does only read-only operations, let it continue\nto point to the \"master\" copy of the GTT and its indexes, which is\nstored in the relfilenodes indicated for those relations in pg_class.\nIf it's going to acquire a lock heavier than AccessShareLock, then\ngive it its own copies of the table and indexes, stored in a temporary\nrelfilenode (tXXX_YYY) and redirect all future access to that GTT by\nthis backend to there. 
Maybe there's some reason this won't work, but\nit seems nice to avoid saying that we've \"attached\" to the GTT if all\nwe did is read the empty table.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 4 Feb 2020 16:38:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 05.02.2020 00:38, Robert Haas wrote:\n>\n> My guess it that the right time to do this work is just after we\n> acquire locks, at the end of parse analysis. I think trying to do it\n> during execution is too late, since the planner looks at indexes, and\n> trying to do it in the planner instead of before we start planning\n> seems more likely to cause bugs and has no real advantages. It's just\n> better to do complicated things (like creating indexes) separately\n> rather than in the middle of some other complicated thing (like\n> planning). I could tie my shoelaces the first time they get tangled up\n> with my break pedal but it's better to do it before I get in the car.\nI have implemented this approach in my new patch\n\nhttps://www.postgresql.org/message-id/3e88b59f-73e8-685e-4983-9026f94c57c5%40postgrespro.ru\n\nI have added check whether index is initialized or not to plancat.c \nwhere optimizer checks if index is valid.\nNow it should work for all kinds of indexes (B-Tree, hash, user defined \naccess methods...).\n>\n> And I'm still inclined to do it by flat-copying files rather than\n> calling ambuild. It will be slightly faster, but also importantly, it\n> will guarantee that (1) every backend gets exactly the same initial\n> state and (2) it has fewer ways to fail because it doesn't involve\n> calling any user-defined code. Those seem like fairly compelling\n> advantages, and I don't see what the disadvantages are. 
I think\n> calling ambuild() at the point in time proposed in the preceding\n> paragraph would be fairly safe and would probably work OK most of the\n> time, but I can't think of any reason it would be better.\n\nThere is very important reason (from my point of view): allow other \nsessions to use created index and\nso provide compatible behavior with regular tables (and with Oracle).\nSo we should be able to populate index with existed GTT data.\nAnd ambuild will do it.\n\n>\n> Incidentally, what I'd be inclined to do is - if the session is\n> running a query that does only read-only operations, let it continue\n> to point to the \"master\" copy of the GTT and its indexes, which is\n> stored in the relfilenodes indicated for those relations in pg_class.\n> If it's going to acquire a lock heavier than AccessShareLock, then\n> give it is own copies of the table and indexes, stored in a temporary\n> relfilenode (tXXX_YYY) and redirect all future access to that GTT by\n> this backend to there. Maybe there's some reason this won't work, but\n> it seems nice to avoid saying that we've \"attached\" to the GTT if all\n> we did is read the empty table.\n>\nSorry, I do not understand the benefits of such optimization. It seems \nto be very rare situation when session will try to access temp table \nwhich was not previously filled with data. But even if it happen, \nkeeping \"master\" copy will not safe much: we in any case have shared \nmetadata and no data. Yes, with current approach, first access to GTT \nwill cause creation of empty indexes. But It is just initialization of \n1-3 pages. 
I do not think that delaying index initialization can be \nreally useful.\n\nIn any case, calling ambuild is the simplest and most universal \napproach, providing desired and compatible behavior.\nI really do not understand why we should try to invent some alternative \nsolution.\n\n", "msg_date": "Wed, 5 Feb 2020 10:28:31 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n> 2020年2月5日 上午4:57,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Sat, Feb 1, 2020 at 11:14 AM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n>> As global_private_temp-8.patch, think about:\n>> 1 session X tale several hours doing some statistical work with the GTT A, which generated some data using transaction 100, The work is not over.\n>> 2 Then session Y vacuumed A, and the GTT's relfrozenxid (in pg_class) was updated to 1000 0000.\n>> 3 Then the aotuvacuum happened, the clog before 1000 0000 was cleaned up.\n>> 4 The data in session A could be lost due to missing clog, The analysis task failed.\n>> \n>> However This is likely to happen because you allowed the GTT do vacuum.\n>> And this is not a common problem, that not happen with local temp tables.\n>> I feel uneasy about leaving such a question. 
We can improve it.\n> \n> Each session is going to need to maintain its own notion of the\n> relfrozenxid and relminmxid of each GTT to which it is attached.\n> Storing the values in pg_class makes no sense and is completely\n> unacceptable.\nYes, I've implemented it in global_temporary_table_v10-pg13.patch\n\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 05 Feb 2020 17:51:48 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月5日 上午12:47,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 04.02.2020 18:01, 曾文旌(义从) wrote:\n>> \n>> \n>>> \n>>> Yes, exactly.\n>>> But it is still not clear to me why do we need some special handling for GTT?\n>>> Shared memory is reinitialized and storage of temporary tables is removed.\n>>> It is true for both local and global temp tables.\n>> Of course not. The local temp table cleans up the entire table (including catalog buffer and datafile). GTT is not.\n>> \n> \n> What do you mean by \"catalog buffer\"?\n> Yes, cleanup of local temp table requires deletion of correspondent entry from catalog and GTT should not do it.\n> But I am speaking only about cleanup of data files of temp relations. It is done in the same way for local and global temp tables.\nFor native pg, the data file of temp table will not be cleaned up direct after oom happen.\nBecause the orphan local temp table(include catalog, local buffer, datafile) will be cleaned up by deleting the orphan temp schame in autovacuum.\nSo for GTT ,we cannot do the same with just deleting data files. This is why I dealt with it specifically.\n\n> \n> \n>>> In my patch autovacuum is prohibited for GTT.\n>> But vacuum GTT is not prohibited. 
\n>> \n> Yes, but the simplest solution is to prohibit also explicit vacuum of GTT, isn't it?\n> \n>>> \n>>>> IMHO forced terminated of client sessions is not acceptable solution.\n>>>>> And it is not an absolutely necessary requirement.\n>>>>> So from my point of view we should not add such limitations to GTT design.\n>>>> This limitation makes it possible for the GTT to do all the DDL.\n>>>> IMHO even oracle's GTT has similar limitations.\n>>> \n>>> I have checked that Oracle is not preventing creation of index for GTT if there are some active sessions working with this table. And this index becomes visible for all this sessions.\n>> 1 Yes The creation of inde gtt has been improved in global_temporary_table_v10-pg13.patch\n>> 2 But alter GTT ; drop GTT ; drop index on GTT is blocked by other sessions\n>> \n> Yes, you are right.\n> Orale documetation says:\n> > 1) DDL operation on global temporary tables\n> > It is not possible to perform a DDL operation (except TRUNCATE <https://www.oracletutorial.com/oracle-basics/oracle-truncate-table/>) on an existing global temporary table if one or more sessions are currently bound to that table.\n> \n> But looks like create index is not considered as DDL operation on GTT and is also supported by Oracle.\n\n> \n> Your approach with prohibiting such accessed using shared cache is certainly better then my attempt to prohibit such DDLs for GTT at all.\n> I just what to eliminate maintenance of such shared cache to simplify the patch.\n> \n> But I still think that we should allow truncation of GTT and creating/dropping indexes on it without any limitations. 
\nI think the goal of this work is this.\nBut, the first step is let GTT get as many features as possible on regular tables, even with some limitations.\n\n>>> \n>>> May be the easies solution is to prohibit explicit vacuum of GTT?\n>> I think vacuum is an important part of GTT.\n>> \n>> Looking back at previous emails, robert once said that vacuum GTT is pretty important.\n>> https://www.postgresql.org/message-id/CA%2BTgmob%3DL1k0cpXRcipdsaE07ok%2BOn%3DtTjRiw7FtD_D2T%3DJwhg%40mail.gmail.com <https://www.postgresql.org/message-id/CA+Tgmob=L1k0cpXRcipdsaE07ok+On=tTjRiw7FtD_D2T=Jwhg@mail.gmail.com>\n>> \n> \n> Well, may be I am not right.\n> I never saw use cases where temp table are used not like append-only storage (when temp table tuples are updated multiple times).\n> But I think that if such problem actually exists then solution is to support autovacuum for temp tables, rather than allow manual vacuum.\n> Certainly it can not be done by another worker because it has no access to private backend's data. But it can done incrementally by backend itself.\n> \n> \n", "msg_date": "Wed, 05 Feb 2020 21:20:11 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Feb 5, 2020 at 2:28 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> There is very important reason (from my point of view): allow other\n> sessions to use created index and\n> so provide compatible behavior with regular tables (and with Oracle).\n> So we should be able to populate index with existed GTT data.\n> And ambuild will do it.\n\nI don't understand. A global temporary table, as I understand it, is a\ntable for which each session sees separate contents. So you would\nnever need to populate it with existing data.\n\nBesides, even if you did, how are you going to get the data for the\ntable? If you get the table data by flat-copying the table, then you\ncould copy the index files too. And you would want to, because if the\ntable contains a large amount of data, building indexes will be\nexpensive. If the index is *empty*, a file copy will not be much\ncheaper than calling ambuild(), but if it's got a lot of data in it,\nit will.\n\n> Sorry, I do not understand the benefits of such optimization. It seems\n> to be very rare situation when session will try to access temp table\n> which was not previously filled with data. But even if it happen,\n> keeping \"master\" copy will not safe much: we in any case have shared\n> metadata and no data. Yes, with current approach, first access to GTT\n> will cause creation of empty indexes. But It is just initialization of\n> 1-3 pages. I do not think that delaying index initialization can be\n> really useful.\n\nYou might be right, but you're misunderstanding the nature of my\nconcern. We probably can't allow DDL on a GTT unless no sessions are\nattached. 
Having sessions that just read the empty GTT be considered\nas \"not attached\" might make it easier for some users to find a time\nwhen no backend is attached and thus DDL is possible.\n\n> In any case, calling ambuild is the simplest and most universal\n> approach, providing desired and compatible behavior.\n\nCalling ambuild is definitely not simpler than a plain file copy. I\ndon't know how you can contend otherwise.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Feb 2020 09:10:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Feb 5, 2020 at 8:21 AM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n> What do you mean by \"catalog buffer\"?\n> Yes, cleanup of local temp table requires deletion of correspondent entry from catalog and GTT should not do it.\n> But I am speaking only about cleanup of data files of temp relations. It is done in the same way for local and global temp tables.\n>\n> For native pg, the data file of temp table will not be cleaned up direct after oom happen.\n> Because the orphan local temp table(include catalog, local buffer, datafile) will be cleaned up by deleting the orphan temp schame in autovacuum.\n> So for GTT ,we cannot do the same with just deleting data files. This is why I dealt with it specifically.\n\nAfter a crash restart, all temporary relfilenodes (e.g t12345_67890)\nare removed. I think GTTs should use relfilenodes of this general\nform, and then they'll be cleaned up by the existing code. 
For a\nregular temporary table, there is also the problem of removing the\ncatalog entries, but GTTs shouldn't have this problem, because a GTT\ndoesn't have any catalog entries for individual sessions, just for the\nmain object, which isn't going away just because the system restarted.\nRight?\n\n> In my patch autovacuum is prohibited for GTT.\n>\n> But vacuum GTT is not prohibited.\n\nThat sounds right to me.\n\nThis thread is getting very hard to follow because neither Konstantin\nnor Wenjing seem to be using the standard method of quoting. When I\nreply, I get the whole thing quoted with \"> \" but can't easily tell\nthe difference between what Wenjing wrote and what Konstantin wrote,\nbecause both of your mailers are quoting using indentation rather than\n\"> \" and it gets wiped out by my mailer. Please see if you can get\nyour mailer to do what is normally done on this mailing list.\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Feb 2020 09:15:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 05.02.2020 17:10, Robert Haas wrote:\n> On Wed, Feb 5, 2020 at 2:28 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> There is very important reason (from my point of view): allow other\n>> sessions to use created index and\n>> so provide compatible behavior with regular tables (and with Oracle).\n>> So we should be able to populate index with existed GTT data.\n>> And ambuild will do it.\n> I don't understand. A global temporary table, as I understand it, is a\n> table for which each session sees separate contents. 
So you would\n> never need to populate it with existing data.\nSession 1:\ncreate global temp table gtt(x integer);\ninsert into gtt values (generate_series(1,100000));\n\nSession 2:\ninsert into gtt values (generate_series(1,200000));\n\nSession1:\ncreate index on gtt(x);\nexplain select * from gtt where x = 1;\n\nSession2:\nexplain select * from gtt where x = 1;\n??? Should we use index here?\n\nMy answer is - yes.\nJust because:\n- Such behavior is compatible with regular tables. So it will not \nconfuse users and doesn't require some complex explanations.\n- It is compatible with Oracle.\n- It is what DBA usually want when creating index.\n-\nThere are several arguments against such behavior:\n- Concurrent building of index in multiple sessions can consume a lot of \nmemory\n- Building index can increase query execution time (which can be not \nexpected by clients)\n\nI have discussion about it with Pavel here in Pgcon Moscow but we can \nnot convince each other.\nMay be we should provide a choice to the user, by means of GUC or index \ncreating parameter.\n\n\n>\n> Besides, even if you did, how are you going to get the data for the\n> table? If you get the table data by flat-copying the table, then you\n> could copy the index files too. And you would want to, because if the\n> table contains a large amount of data, building indexes will be\n> expensive. If the index is *empty*, a file copy will not be much\n> cheaper than calling ambuild(), but if it's got a lot of data in it,\n> it will.\n\nSorry, I do not understand you.\nambuild is called locally by each backend on first access to the GTT index.\nIt is done at the moment of building query execution plan when we check \nwhether index is valid.\nMay be it will be sensible to postpone this check and do it for indexes \nwhich are actually used in query execution plan.\n\n>\n>> Sorry, I do not understand the benefits of such optimization. 
It seems\n>> to be very rare situation when session will try to access temp table\n>> which was not previously filled with data. But even if it happen,\n>> keeping \"master\" copy will not safe much: we in any case have shared\n>> metadata and no data. Yes, with current approach, first access to GTT\n>> will cause creation of empty indexes. But It is just initialization of\n>> 1-3 pages. I do not think that delaying index initialization can be\n>> really useful.\n> You might be right, but you're misunderstanding the nature of my\n> concern. We probably can't allow DDL on a GTT unless no sessions are\n> attached. Having sessions that just read the empty GTT be considered\n> as \"not attached\" might make it easier for some users to find a time\n> when no backend is attached and thus DDL is possible.\n\nOk, now I understand the problem you are going to address.\nBut still I never saw use cases when empty temp tables are accessed.\nUsually we save in temp table some intermediate results of complex query.\nCertainly it can happen that query returns empty result.\nBut usually temp table are used when we expect huge result (otherwise \nmaterializing result in temp table is not needed).\nSo I do not think that such optimization can help much in performing DDL \nfor GTT.\n\n\n\n>\n>> In any case, calling ambuild is the simplest and most universal\n>> approach, providing desired and compatible behavior.\n> Calling ambuild is definitely not simpler than a plain file copy. 
I\n> don't know how you can contend otherwise.\n>\n\nThis is code fragment whichbuild GTT index on demand:\n\n     if (index->rd_rel->relpersistence == RELPERSISTENCE_SESSION)\n     {\n         Buffer metapage = ReadBuffer(index, 0);\n         bool isNew = PageIsNew(BufferGetPage(metapage));\n         ReleaseBuffer(metapage);\n         if (isNew)\n         {\n             Relation heap;\nDropRelFileNodeAllLocalBuffers(index->rd_smgr->smgr_rnode.node);\n             heap = RelationIdGetRelation(index->rd_index->indrelid);\n             index->rd_indam->ambuild(heap, index, BuildIndexInfo(index));\n             RelationClose(heap);\n         }\n     }\n\nThat is all - just 10 line of code.\nI can make a bet that maintaining separate fork for indexes and copying \ndata from it will require much more coding.\n\n\n\n\n\n\n", "msg_date": "Wed, 5 Feb 2020 18:48:38 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "st 5. 2. 2020 v 16:48 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 05.02.2020 17:10, Robert Haas wrote:\n> > On Wed, Feb 5, 2020 at 2:28 AM Konstantin Knizhnik\n> > <k.knizhnik@postgrespro.ru> wrote:\n> >> There is very important reason (from my point of view): allow other\n> >> sessions to use created index and\n> >> so provide compatible behavior with regular tables (and with Oracle).\n> >> So we should be able to populate index with existed GTT data.\n> >> And ambuild will do it.\n> > I don't understand. A global temporary table, as I understand it, is a\n> > table for which each session sees separate contents. 
So you would\n> > never need to populate it with existing data.\n> Session 1:\n> create global temp table gtt(x integer);\n> insert into gtt values (generate_series(1,100000));\n>\n> Session 2:\n> insert into gtt values (generate_series(1,200000));\n>\n> Session1:\n> create index on gtt(x);\n> explain select * from gtt where x = 1;\n>\n> Session2:\n> explain select * from gtt where x = 1;\n> ??? Should we use index here?\n>\n> My answer is - yes.\n> Just because:\n> - Such behavior is compatible with regular tables. So it will not\n> confuse users and doesn't require some complex explanations.\n> - It is compatible with Oracle.\n> - It is what DBA usually want when creating index.\n> -\n> There are several arguments against such behavior:\n> - Concurrent building of index in multiple sessions can consume a lot of\n> memory\n> - Building index can increase query execution time (which can be not\n> expected by clients)\n>\n> I have discussion about it with Pavel here in Pgcon Moscow but we can\n> not convince each other.\n> May be we should provide a choice to the user, by means of GUC or index\n> creating parameter.\n>\n\nI prefer some creating index parameter for enforcing creating indexes to\nliving other session.\n\nIn this case I think so too much strongly the best design depends on\ncontext so there cannot to exists one design (both proposed behaves has\nsense and has contrary advantages and disadvantages). Unfortunately only\none behave can be default.\n\nRegards\n\nPavel\n\n>\n>\n> >\n> > Besides, even if you did, how are you going to get the data for the\n> > table? If you get the table data by flat-copying the table, then you\n> > could copy the index files too. And you would want to, because if the\n> > table contains a large amount of data, building indexes will be\n> > expensive. 
If the index is *empty*, a file copy will not be much\n> > cheaper than calling ambuild(), but if it's got a lot of data in it,\n> > it will.\n>\n> Sorry, I do not understand you.\n> ambuild is called locally by each backend on first access to the GTT index.\n> It is done at the moment of building query execution plan when we check\n> whether index is valid.\n> May be it will be sensible to postpone this check and do it for indexes\n> which are actually used in query execution plan.\n>\n> >\n> >> Sorry, I do not understand the benefits of such optimization. It seems\n> >> to be very rare situation when session will try to access temp table\n> >> which was not previously filled with data. But even if it happen,\n> >> keeping \"master\" copy will not safe much: we in any case have shared\n> >> metadata and no data. Yes, with current approach, first access to GTT\n> >> will cause creation of empty indexes. But It is just initialization of\n> >> 1-3 pages. I do not think that delaying index initialization can be\n> >> really useful.\n> > You might be right, but you're misunderstanding the nature of my\n> > concern. We probably can't allow DDL on a GTT unless no sessions are\n> > attached. 
Having sessions that just read the empty GTT be considered\n> > as \"not attached\" might make it easier for some users to find a time\n> > when no backend is attached and thus DDL is possible.\n>\n> Ok, now I understand the problem your are going to address.\n> But still I never saw use cases when empty temp tables are accessed.\n> Usually we save in temp table some intermediate results of complex query.\n> Certainly it can happen that query returns empty result.\n> But usually temp table are used when we expect huge result (otherwise\n> materializing result in temp table is not needed).\n> So I do not think that such optimization can help much in performing DDL\n> for GTT.\n>\n>\n>\n> >\n> >> In any case, calling ambuild is the simplest and most universal\n> >> approach, providing desired and compatible behavior.\n> > Calling ambuild is definitely not simpler than a plain file copy. I\n> > don't know how you can contend otherwise.\n> >\n>\n> This is code fragment whichbuild GTT index on demand:\n>\n> if (index->rd_rel->relpersistence == RELPERSISTENCE_SESSION)\n> {\n> Buffer metapage = ReadBuffer(index, 0);\n> bool isNew = PageIsNew(BufferGetPage(metapage));\n> ReleaseBuffer(metapage);\n> if (isNew)\n> {\n> Relation heap;\n> DropRelFileNodeAllLocalBuffers(index->rd_smgr->smgr_rnode.node);\n> heap = RelationIdGetRelation(index->rd_index->indrelid);\n> index->rd_indam->ambuild(heap, index, BuildIndexInfo(index));\n> RelationClose(heap);\n> }\n> }\n>\n> That is all - just 10 line of code.\n> I can make a bet that maintaining separate fork for indexes and copying\n> data from it will require much more coding.\n>\n>\n>\n>\n>\n\nst 5. 2. 
", "msg_date": "Wed, 5 Feb 2020 20:18:08 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n> 2020年2月5日 下午10:15,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Wed, Feb 5, 2020 at 8:21 AM 曾文旌(义从) 
<wenjing.zwj@alibaba-inc.com> wrote:\n>> What do you mean by \"catalog buffer\"?\n>> Yes, cleanup of local temp table requires deletion of correspondent entry from catalog and GTT should not do it.\n>> But I am speaking only about cleanup of data files of temp relations. It is done in the same way for local and global temp tables.\n>> \n>> For native pg, the data file of temp table will not be cleaned up direct after oom happen.\n>> Because the orphan local temp table(include catalog, local buffer, datafile) will be cleaned up by deleting the orphan temp schame in autovacuum.\n>> So for GTT ,we cannot do the same with just deleting data files. This is why I dealt with it specifically.\n> \n> After a crash restart, all temporary relfilenodes (e.g t12345_67890)\n> are removed. I think GTTs should use relfilenodes of this general\n> form, and then they'll be cleaned up by the existing code. For a\n> regular temporary table, there is also the problem of removing the\n> catalog entries, but GTTs shouldn't have this problem, because a GTT\n> doesn't have any catalog entries for individual sessions, just for the\n> main object, which isn't going away just because the system restarted.\n> Right?\nWenjing wrote:\nI have implemented its processing in global_temporary_table_v10-pg13.patch\nWhen oom happen, all backend will be killed.\nThen, I choose to clean up these files(all like t12345_67890) in startup process.\n\nWenjing\n\n> \n>> In my patch autovacuum is prohibited for GTT.\n>> \n>> But vacuum GTT is not prohibited.\n> \n> That sounds right to me.\nWenjing wrote:\nAlso implemented in global_temporary_table_v10-pg13.patch\n\nWenjing\n\n> \n> This thread is getting very hard to follow because neither Konstantin\n> nor Wenjing seem to be using the standard method of quoting. 
When I\n> reply, I get the whole thing quoted with \"> \" but can't easily tell\n> the difference between what Wenjing wrote and what Konstantin wrote,\n> because both of your mailers are quoting using indentation rather than\n> \"> \" and it gets wiped out by my mailer. Please see if you can get\n> your mailer to do what is normally done on this mailing list.\n> \n> Thanks,\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Thu, 06 Feb 2020 11:39:05 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Feb 5, 2020 at 10:48 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> > I don't understand. A global temporary table, as I understand it, is a\n> > table for which each session sees separate contents. So you would\n> > never need to populate it with existing data.\n> Session 1:\n> create global temp table gtt(x integer);\n> insert into gtt values (generate_series(1,100000));\n>\n> Session 2:\n> insert into gtt values (generate_series(1,200000));\n>\n> Session1:\n> create index on gtt(x);\n> explain select * from gtt where x = 1;\n>\n> Session2:\n> explain select * from gtt where x = 1;\n> ??? Should we use index here?\n\nOK, I see where you're coming from now.\n\n> My answer is - yes.\n> Just because:\n> - Such behavior is compatible with regular tables. So it will not\n> confuse users and doesn't require some complex explanations.\n> - It is compatible with Oracle.\n> - It is what DBA usually want when creating index.\n> -\n> There are several arguments against such behavior:\n> - Concurrent building of index in multiple sessions can consume a lot of\n> memory\n> - Building index can increase query execution time (which can be not\n> expected by clients)\n\nI think those are good arguments, especially the second one. 
There's\nno limit on how long building a new index might take, and it could be\nseveral minutes. A user who was running a query that could have\ncompleted in a few seconds or even milliseconds will be unhappy to\nsuddenly wait a long time for a new index to be built. And that is an\nentirely realistic scenario, because the new index might be better,\nbut only marginally.\n\nAlso, an important point to which I've already alluded a few times is\nthat creating an index can fail. Now, one way it can fail is that\nthere could be some problem writing to disk, or you could run out of\nmemory, or whatever. However, it can also fail because the new index\nis UNIQUE and the data this backend has in the table doesn't conform\nto the associated constraint. It will be confusing if all access to a\ntable suddenly starts complaining about uniqueness violations.\n\n> That is all - just 10 line of code.\n\nI don't believe that the feature you are proposing can be correctly\nimplemented in 10 lines of code. I would be pleasantly surprised if it\ncan be done in 1000.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Feb 2020 10:15:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\nOn 07.02.2020 18:15, Robert Haas wrote:\n> On Wed, Feb 5, 2020 at 10:48 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n> My answer is - yes.\n>> Just because:\n>> - Such behavior is compatible with regular tables. 
So it will not\n>> confuse users and doesn't require some complex explanations.\n>> - It is compatible with Oracle.\n>> - It is what DBA usually want when creating index.\n>> -\n>> There are several arguments against such behavior:\n>> - Concurrent building of index in multiple sessions can consume a lot of\n>> memory\n>> - Building index can increase query execution time (which can be not\n>> expected by clients)\n> I think those are good arguments, especially the second one. There's\n> no limit on how long building a new index might take, and it could be\n> several minutes. A user who was running a query that could have\n> completed in a few seconds or even milliseconds will be unhappy to\n> suddenly wait a long time for a new index to be built. And that is an\n> entirely realistic scenario, because the new index might be better,\n> but only marginally.\nYes, I agree that this arguments are important.\nBut IMHO less important than incompatible behavior (Pavel doesn't agree \nwith word \"incompatible\" in this context\nsince semantic of temp tables is in any case different with semantic of \nregular tables).\n\nJust want to notice that if we have huge GTT (so that creation of index \ntakes significant amount of time)\nsequential scan of this table also will not be fast.\n\nBut in any case, if we agree that we can control thus behavior using GUC \nor index property,\nthen it is ok for me.\n\n\n\n>\n> Also, an important point to which I've already alluded a few times is\n> that creating an index can fail. Now, one way it can fail is that\n> there could be some problem writing to disk, or you could run out of\n> memory, or whatever. However, it can also fail because the new index\n> is UNIQUE and the data this backend has in the table doesn't conform\n> to the associated constraint. 
It will be confusing if all access to a\n> table suddenly starts complaining about uniqueness violations.\n\nYes, building index can fail (as any other operation with database).\nWhat's wring with it?\nIf it is fatal error, then backend is terminated and content of its temp \ntable is disappeared.\nIf it is non-fatal error, then current transaction is aborted:\n\n\nSession1:\npostgres=# create global temp table gtt(x integer);\nCREATE TABLE\npostgres=# insert into gtt values (generate_series(1,100000));\nINSERT 0 100000\n\nSession2:\npostgres=# insert into gtt values (generate_series(1,100000));\nINSERT 0 100000\npostgres=# insert into gtt values (1);\nINSERT 0 1\n\nSession1:\npostgres=# create unique index on gtt(x);\nCREATE INDEX\n\nSessin2:\npostgres=# explain select * from gtt where x=1;\nERROR:  could not create unique index \"gtt_x_idx\"\nDETAIL:  Key (x)=(1) is duplicated.\n\n> I don't believe that the feature you are proposing can be correctly\n> implemented in 10 lines of code. 
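The failure shown in the session transcript above (a SELECT reporting "could not create unique index") can be modeled the same way. This is a toy sketch in plain C with invented names (UniqSession, access_with_unique_index), not server code: the deferred unique build scans the session's own rows and fails the triggering query when it finds a duplicate, leaving the index unbuilt for that session.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Toy model, NOT server code: invented names throughout.  A UNIQUE
 * index declared by one session is built lazily by every other
 * session on its first access, and that deferred build fails when
 * the session's own rows violate uniqueness -- aborting the query
 * that happened to trigger it. */

#define UNIQ_MAX_ROWS 16

typedef struct UniqSession
{
    int  rows[UNIQ_MAX_ROWS];
    int  nrows;
    bool built;
} UniqSession;

static int
uniq_cmp_int(const void *a, const void *b)
{
    return *(const int *) a - *(const int *) b;
}

/* Returns 0 when the access can proceed, -1 when the deferred build
 * finds a duplicate key (the analogue of the "could not create
 * unique index" error raised from inside a SELECT). */
static int
access_with_unique_index(UniqSession *s)
{
    if (!s->built)
    {
        int tmp[UNIQ_MAX_ROWS];
        int i;

        memcpy(tmp, s->rows, sizeof(int) * s->nrows);
        qsort(tmp, s->nrows, sizeof(int), uniq_cmp_int);
        for (i = 1; i < s->nrows; i++)
        {
            if (tmp[i] == tmp[i - 1])
                return -1;      /* duplicate key: build fails */
        }
        s->built = true;
    }
    return 0;                   /* query can proceed */
}
```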
I would be pleasantly surprised if it\n> can be done in 1000.\n>\nRight now I do not see any sources of extra complexity.\nWill be pleased if you can point them to me.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 7 Feb 2020 20:28:41 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Fri, Feb 7, 2020 at 12:28 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> But in any case, if we agree that we can control thus behavior using GUC\n> or index property,\n> then it is ok for me.\n\nNope, I am not going to agree to that, and I don't believe that any\nother committer will, either.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Feb 2020 12:31:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "pá 7. 2. 2020 v 18:28 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 07.02.2020 18:15, Robert Haas wrote:\n> > On Wed, Feb 5, 2020 at 10:48 AM Konstantin Knizhnik\n> > <k.knizhnik@postgrespro.ru> wrote:\n> > My answer is - yes.\n> >> Just because:\n> >> - Such behavior is compatible with regular tables. So it will not\n> >> confuse users and doesn't require some complex explanations.\n> >> - It is compatible with Oracle.\n> >> - It is what DBA usually want when creating index.\n> >> -\n> >> There are several arguments against such behavior:\n> >> - Concurrent building of index in multiple sessions can consume a lot of\n> >> memory\n> >> - Building index can increase query execution time (which can be not\n> >> expected by clients)\n> > I think those are good arguments, especially the second one. 
There's\n> > no limit on how long building a new index might take, and it could be\n> > several minutes. A user who was running a query that could have\n> > completed in a few seconds or even milliseconds will be unhappy to\n> > suddenly wait a long time for a new index to be built. And that is an\n> > entirely realistic scenario, because the new index might be better,\n> > but only marginally.\n> Yes, I agree that this arguments are important.\n> But IMHO less important than incompatible behavior (Pavel doesn't agree\n> with word \"incompatible\" in this context\n> since semantic of temp tables is in any case different with semantic of\n> regular tables).\n>\n> Just want to notice that if we have huge GTT (so that creation of index\n> takes significant amount of time)\n> sequential scan of this table also will not be fast.\n>\n> But in any case, if we agree that we can control thus behavior using GUC\n> or index property,\n> then it is ok for me.\n>\n>\n>\n> >\n> > Also, an important point to which I've already alluded a few times is\n> > that creating an index can fail. Now, one way it can fail is that\n> > there could be some problem writing to disk, or you could run out of\n> > memory, or whatever. However, it can also fail because the new index\n> > is UNIQUE and the data this backend has in the table doesn't conform\n> > to the associated constraint. 
It will be confusing if all access to a\n> > table suddenly starts complaining about uniqueness violations.\n>\n> Yes, building index can fail (as any other operation with database).\n> What's wring with it?\n> If it is fatal error, then backend is terminated and content of its temp\n> table is disappeared.\n> If it is non-fatal error, then current transaction is aborted:\n>\n>\n> Session1:\n> postgres=# create global temp table gtt(x integer);\n> CREATE TABLE\n> postgres=# insert into gtt values (generate_series(1,100000));\n> INSERT 0 100000\n>\n> Session2:\n> postgres=# insert into gtt values (generate_series(1,100000));\n> INSERT 0 100000\n> postgres=# insert into gtt values (1);\n> INSERT 0 1\n>\n\nWhat when session 2 has active transaction? Then to be correct, you should\nto wait with index creation to end of transaction.\n\n\n> Session1:\n> postgres=# create unique index on gtt(x);\n> CREATE INDEX\n>\n> Sessin2:\n> postgres=# explain select * from gtt where x=1;\n> ERROR: could not create unique index \"gtt_x_idx\"\n> DETAIL: Key (x)=(1) is duplicated.\n>\n\nThis is little bit unexpected behave (probably nobody expect so any SELECT\nfail with error \"could not create index\" - I understand exactly to reason\nand context, but this side effect is something what I afraid.\n\n\n>\n> > I don't believe that the feature you are proposing can be correctly\n> > implemented in 10 lines of code. I would be pleasantly surprised if it\n> > can be done in 1000.\n> >\n> Right now I do not see any sources of extra complexity.\n> Will be pleased if you can point them to me.\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\npá 7. 2. 
", "msg_date": "Fri, 7 Feb 2020 19:37:35 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 07.02.2020 21:37, Pavel Stehule wrote:\n>\n> What when session 2 has active transaction? Then to be correct, you \n> should to wait with index creation to end of transaction.\n>\n>\n> Session1:\n> postgres=# create unique index on gtt(x);\n> CREATE INDEX\n>\n> Sessin2:\n> postgres=# explain select * from gtt where x=1;\n> ERROR:  could not create unique index \"gtt_x_idx\"\n> DETAIL:  Key (x)=(1) is duplicated.\n>\n>\n> This is little bit unexpected behave (probably nobody expect so any \n> SELECT fail with error \"could not create index\" - I understand exactly \n> to reason and context, but this side effect is something what I afraid.\n>\nThe more I thinking creation of indexes for GTT on-demand, the more \ncontractions I see.\nSo looks like there are only two safe alternatives:\n1. Allow DDL for GTT (including index creation) only if there are no \nother sessions using this GTT (\"using\" means that no data was inserted \nin GTT by this session). Things can be even more complicated if we take \nin account inter-table dependencies (like foreign key constraint).\n2. Create indexes for GTT locally.\n\n2) seems to be very contradictory (global table metadata, but private \nindexes) and hard to implement because in this case we have to maintain \nsome private copy of index catalog to keep information about private \nindexes.\n\n1) is currently implemented by Wenjing. Frankly speaking I still find \nsuch limitation too restrictive and inconvenient for users. 
From my \npoint of view Oracle developers have implemented better compromise. But \nif I am the only person voting for such solution, then let's stop this \ndiscussion.\nBut in any case I think that calling ambuild to construct index for \nempty table is better solution than implementation of all indexes (and \nstill not solving the problem with custom indexes).\n\n\n\n", "msg_date": "Sun, 9 Feb 2020 15:05:04 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "ne 9. 2. 2020 v 13:05 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 07.02.2020 21:37, Pavel Stehule wrote:\n>\n>\n> What when session 2 has active transaction? Then to be correct, you should\n> to wait with index creation to end of transaction.\n>\n>\n>> Session1:\n>> postgres=# create unique index on gtt(x);\n>> CREATE INDEX\n>>\n>> Sessin2:\n>> postgres=# explain select * from gtt where x=1;\n>> ERROR: could not create unique index \"gtt_x_idx\"\n>> DETAIL: Key (x)=(1) is duplicated.\n>>\n>\n> This is little bit unexpected behave (probably nobody expect so any SELECT\n> fail with error \"could not create index\" - I understand exactly to reason\n> and context, but this side effect is something what I afraid.\n>\n>\n> The more I thinking creation of indexes for GTT on-demand, the more\n> contractions I see.\n> So looks like there are only two safe alternatives:\n> 1. Allow DDL for GTT (including index creation) only if there are no other\n> sessions using this GTT (\"using\" means that no data was inserted in GTT by\n> this session). Things can be even more complicated if we take in account\n> inter-table dependencies (like foreign key constraint).\n> 2. 
Create indexes for GTT locally.\n>\n> 2) seems to be very contradictory (global table metadata, but private\n> indexes) and hard to implement because in this case we have to maintain\n> some private copy of index catalog to keep information about private\n> indexes.\n>\n> 1) is currently implemented by Wenjing. Frankly speaking I still find such\n> limitation too restrictive and inconvenient for users. From my point of\n> view Oracle developers have implemented better compromise. But if I am the\n> only person voting for such solution, then let's stop this discussion.\n>\n\nThank you. I respect your opinion.\n\n\n> But in any case I think that calling ambuild to construct index for empty\n> table is better solution than implementation of all indexes (and still not\n> solving the problem with custom indexes).\n>\n\nI know nothing about this area - I expect so you and  Wenjing will find\ngood solution.\n\nWe have to start with something what is simple, usable, and if it possible\nit is well placed to Postgres's architecture.\n\nat @1 .. when all tables are empty for other sessions, then I don't see any\nproblem. From practical reason, I think so requirement to don't use table\nin other sessions is too hard, and I can be nice (maybe it is) if creating\nindex should not be blocked, but if I create index too late, then index is\nfor other session (when the table is used) invalid (again it can be done in\nfuture).\n\nI am sure, so there are not end of all days -  and there is a space for\nfuture enhancing and testing other variants. I can imagine more different\nvariations with different advantages/disadvantages. Just for begin I prefer\ndesign that has concept closer to current Postgres.\n\nRegards\n\nPavel\n", "msg_date": "Sun, 9 Feb 2020 13:53:35 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "čt 30. 1. 2020 v 15:21 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\r\nnapsal:\r\n\r\n>\r\n>\r\n> čt 30. 1. 2020 v 15:17 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\r\n> napsal:\r\n>\r\n>>\r\n>>\r\n>> > 2020年1月29日 下午9:48,Robert Haas <robertmhaas@gmail.com> 写道:\r\n>> >\r\n>> > On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\r\n>> wrote:\r\n>> >>> Opinion by Pavel\r\n>> >>> + rel->rd_islocaltemp = true;  <<<<<<< if this is valid, then the\r\n>> name of field \"rd_islocaltemp\" is not probably best\r\n>> >>> I renamed rd_islocaltemp\r\n>> >>\r\n>> >> I don't see any change?\r\n>> >>\r\n>> >> Rename rd_islocaltemp to rd_istemp  in\r\n>> global_temporary_table_v8-pg13.patch\r\n>> >\r\n>> > In view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\r\n>> > that this has approximately a 0% chance of being acceptable. 
If you're\r\n>> > setting a field in a way that is inconsistent with the current use of\r\n>> > the field, you're probably doing it wrong, because the field has an\r\n>> > existing purpose to which new code must conform. And if you're not\r\n>> > doing that, then you don't need to rename it.\r\n>> Thank you for pointing it out.\r\n>> I've rolled back the rename.\r\n>> But I still need rd_localtemp to be true, The reason is that\r\n>> 1 GTT The GTT needs to support DML in read-only transactions ,like local\r\n>> temp table.\r\n>> 2 GTT does not need to hold the lock before modifying the index buffer\r\n>> ,also like local temp table.\r\n>>\r\n>> Please give me feedback.\r\n>>\r\n>\r\n> maybe some like\r\n>\r\n> rel->rd_globaltemp = true;\r\n>\r\n> and somewhere else\r\n>\r\n> if (rel->rd_localtemp || rel->rd_globaltemp)\r\n> {\r\n> ...\r\n> }\r\n>\r\n>\r\nI tested this patch again and I am very well satisfied with behave.\r\n\r\nwhat doesn't work still - TRUNCATE statement\r\n\r\npostgres=# insert into foo select generate_series(1,10000);\r\nINSERT 0 10000\r\npostgres=# \\dt+ foo\r\n List of relations\r\n┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐\r\n│ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\r\n╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡\r\n│ public │ foo │ table │ pavel │ session │ 384 kB │ │\r\n└────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘\r\n(1 row)\r\n\r\npostgres=# truncate foo;\r\nTRUNCATE TABLE\r\npostgres=# \\dt+ foo\r\n List of relations\r\n┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐\r\n│ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\r\n╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡\r\n│ public │ foo │ table │ pavel │ session │ 16 kB │ │\r\n└────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘\r\n(1 row)\r\n\r\nI expect zero size after 
truncate.\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\n>>\r\n>> Wenjing\r\n>>\r\n>>\r\n>>\r\n>>\r\n>> >\r\n>> > --\r\n>> > Robert Haas\r\n>> > EnterpriseDB: http://www.enterprisedb.com\r\n>> > The Enterprise PostgreSQL Company\r\n>>\r\n>>", "msg_date": "Fri, 14 Feb 2020 10:19:17 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月14日 下午5:19,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> čt 30. 1. 2020 v 15:21 odesílatel Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> napsal:\n> \n> \n> čt 30. 1. 
2020 v 15:17 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n> > 2020年1月29日 下午9:48,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n> > \n> > On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> >>> Opinion by Pavel\n> >>> + rel->rd_islocaltemp = true; <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\n> >>> I renamed rd_islocaltemp\n> >> \n> >> I don't see any change?\n> >> \n> >> Rename rd_islocaltemp to rd_istemp in global_temporary_table_v8-pg13.patch\n> > \n> > In view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\n> > that this has approximately a 0% chance of being acceptable. If you're\n> > setting a field in a way that is inconsistent with the current use of\n> > the field, you're probably doing it wrong, because the field has an\n> > existing purpose to which new code must conform. And if you're not\n> > doing that, then you don't need to rename it.\n> Thank you for pointing it out.\n> I've rolled back the rename.\n> But I still need rd_localtemp to be true, The reason is that\n> 1 GTT The GTT needs to support DML in read-only transactions ,like local temp table.\n> 2 GTT does not need to hold the lock before modifying the index buffer ,also like local temp table.\n> \n> Please give me feedback.\n> \n> maybe some like\n> \n> rel->rd_globaltemp = true;\n> \n> and somewhere else\n> \n> if (rel->rd_localtemp || rel->rd_globaltemp)\n> {\n> ...\n> }\n> \n> \n> I tested this patch again and I am very well satisfied with behave. 
\n> \n> what doesn't work still - TRUNCATE statement\n> \n> postgres=# insert into foo select generate_series(1,10000);\n> INSERT 0 10000\n> postgres=# \\dt+ foo\n> List of relations\n> ┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐\n> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\n> ╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡\n> │ public │ foo │ table │ pavel │ session │ 384 kB │ │\n> └────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘\n> (1 row)\n> \n> postgres=# truncate foo;\n> TRUNCATE TABLE\n> postgres=# \\dt+ foo\n> List of relations\n> ┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐\n> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\n> ╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡\n> │ public │ foo │ table │ pavel │ session │ 16 kB │ │\n> └────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘\n> (1 row)\n> \n> I expect zero size after truncate.\nThanks for review.\n\nI can explain, I don't think it's a bug.\nThe current implementation of the truncated GTT retains two blocks of FSM pages.\nThe same is true for truncating regular tables in subtransactions.\nThis is an implementation that truncates the table without changing the relfilenode of the table.\n\n\nWenjing\n\n> \n> Regards\n> \n> Pavel\n> \n> \n> \n> Wenjing\n> \n> \n> \n> \n> > \n> > -- \n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> > The Enterprise PostgreSQL Company\n> \n\n\n2020年2月14日 下午5:19,Pavel Stehule <pavel.stehule@gmail.com> 写道:čt 30. 1. 2020 v 15:21 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:čt 30. 1. 
2020 v 15:17 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> napsal:\n\n> 2020年1月29日 下午9:48,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Tue, Jan 28, 2020 at 12:12 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n>>> Opinion by Pavel\n>>> + rel->rd_islocaltemp = true;  <<<<<<< if this is valid, then the name of field \"rd_islocaltemp\" is not probably best\n>>> I renamed rd_islocaltemp\n>> \n>> I don't see any change?\n>> \n>> Rename rd_islocaltemp to rd_istemp  in global_temporary_table_v8-pg13.patch\n> \n> In view of commit 6919b7e3294702adc39effd16634b2715d04f012, I think\n> that this has approximately a 0% chance of being acceptable. If you're\n> setting a field in a way that is inconsistent with the current use of\n> the field, you're probably doing it wrong, because the field has an\n> existing purpose to which new code must conform. And if you're not\n> doing that, then you don't need to rename it.\nThank you for pointing it out.\nI've rolled back the rename.\nBut I still need rd_localtemp to be true, The reason is that\n1 GTT The GTT needs to support DML in read-only transactions ,like local temp table.\n2 GTT does not need to hold the lock before modifying the index buffer ,also like local temp table.\n\nPlease give me feedback.maybe some likerel->rd_globaltemp = true;and somewhere elseif (rel->rd_localtemp || rel->rd_globaltemp){  ...}I tested this patch again and I am very well satisfied with behave. 
what doesn't work still - TRUNCATE statementpostgres=# insert into foo select generate_series(1,10000);INSERT 0 10000postgres=# \\dt+ foo                          List of relations┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐│ Schema │ Name │ Type  │ Owner │ Persistence │  Size  │ Description │╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡│ public │ foo  │ table │ pavel │ session     │ 384 kB │             │└────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘(1 row)postgres=# truncate foo;TRUNCATE TABLEpostgres=# \\dt+ foo                          List of relations┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐│ Schema │ Name │ Type  │ Owner │ Persistence │ Size  │ Description │╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡│ public │ foo  │ table │ pavel │ session     │ 16 kB │             │└────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘(1 row)I expect zero size after truncate.Thanks for review.I can explain, I don't think it's a bug.The current implementation of the truncated GTT retains two blocks of FSM pages.The same is true for truncating regular tables in subtransactions.This is an implementation that truncates the table without changing the relfilenode of the table.WenjingRegardsPavel \n\n\nWenjing\n\n\n\n\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company", "msg_date": "Sat, 15 Feb 2020 17:56:42 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> postgres=# insert into foo select generate_series(1,10000);\r\n> INSERT 0 10000\r\n> postgres=# \\dt+ foo\r\n> List of relations\r\n> ┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐\r\n> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\r\n> 
╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡\r\n> │ public │ foo │ table │ pavel │ session │ 384 kB │ │\r\n> └────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘\r\n> (1 row)\r\n>\r\n> postgres=# truncate foo;\r\n> TRUNCATE TABLE\r\n> postgres=# \\dt+ foo\r\n> List of relations\r\n> ┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐\r\n> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\r\n> ╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡\r\n> │ public │ foo │ table │ pavel │ session │ 16 kB │ │\r\n> └────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘\r\n> (1 row)\r\n>\r\n> I expect zero size after truncate.\r\n>\r\n> Thanks for review.\r\n>\r\n> I can explain, I don't think it's a bug.\r\n> The current implementation of the truncated GTT retains two blocks of FSM\r\n> pages.\r\n> The same is true for truncating regular tables in subtransactions.\r\n> This is an implementation that truncates the table without changing the\r\n> relfilenode of the table.\r\n>\r\n>\r\nThis is not extra important feature - now this is little bit a surprise,\r\nbecause I was not under transaction.\r\n\r\nChanging relfilenode, I think, is necessary, minimally for future VACUUM\r\nFULL support.\r\n\r\nRegards\r\n\r\nPavel Stehule\r\n\r\n\r\n>\r\n> Wenjing\r\n>\r\n>\r\n> Regards\r\n>\r\n> Pavel\r\n>\r\n>\r\n>>>\r\n>>> Wenjing\r\n>>>\r\n>>>\r\n>>>\r\n>>>\r\n>>> >\r\n>>> > --\r\n>>> > Robert Haas\r\n>>> > EnterpriseDB: http://www.enterprisedb.com\r\n>>> > The Enterprise PostgreSQL Company\r\n>>>\r\n>>>\r\n>\r\n\npostgres=# insert into foo select generate_series(1,10000);INSERT 0 10000postgres=# \\dt+ foo                          List of relations┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐│ Schema │ Name │ Type  │ Owner │ Persistence │  Size  │ Description │╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡│ public │ foo  │ 
table │ pavel │ session     │ 384 kB │             │└────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘(1 row)postgres=# truncate foo;TRUNCATE TABLEpostgres=# \\dt+ foo                          List of relations┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐│ Schema │ Name │ Type  │ Owner │ Persistence │ Size  │ Description │╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡│ public │ foo  │ table │ pavel │ session     │ 16 kB │             │└────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘(1 row)I expect zero size after truncate.Thanks for review.I can explain, I don't think it's a bug.The current implementation of the truncated GTT retains two blocks of FSM pages.The same is true for truncating regular tables in subtransactions.This is an implementation that truncates the table without changing the relfilenode of the table.This is not extra important feature - now this is little bit a surprise, because I was not under transaction.Changing relfilenode, I think, is necessary, minimally for future VACUUM FULL support.RegardsPavel Stehule WenjingRegardsPavel \n\n\r\nWenjing\n\n\n\n\r\n> \r\n> -- \r\n> Robert Haas\r\n> EnterpriseDB: http://www.enterprisedb.com\r\n> The Enterprise PostgreSQL Company", "msg_date": "Sat, 15 Feb 2020 11:06:18 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月9日 下午8:05,Konstantin Knizhnik <k.knizhnik@postgrespro.ru> 写道:\n> \n> \n> \n> On 07.02.2020 21:37, Pavel Stehule wrote:\n>> \n>> What when session 2 has active transaction? 
Then to be correct, you should to wait with index creation to end of transaction.\n>> \n>> \n>> Session1:\n>> postgres=# create unique index on gtt(x);\n>> CREATE INDEX\n>> \n>> Sessin2:\n>> postgres=# explain select * from gtt where x=1;\n>> ERROR: could not create unique index \"gtt_x_idx\"\n>> DETAIL: Key (x)=(1) is duplicated.\n>> \n>> This is little bit unexpected behave (probably nobody expect so any SELECT fail with error \"could not create index\" - I understand exactly to reason and context, but this side effect is something what I afraid. \n>> \n> The more I thinking creation of indexes for GTT on-demand, the more contractions I see.\n> So looks like there are only two safe alternatives:\n> 1. Allow DDL for GTT (including index creation) only if there are no other sessions using this GTT (\"using\" means that no data was inserted in GTT by this session). Things can be even more complicated if we take in account inter-table dependencies (like foreign key constraint).\n> 2. Create indexes for GTT locally.\n> \n> 2) seems to be very contradictory (global table metadata, but private indexes) and hard to implement because in this case we have to maintain some private copy of index catalog to keep information about private indexes.\n> \n> 1) is currently implemented by Wenjing. Frankly speaking I still find such limitation too restrictive and inconvenient for users. From my point of view Oracle developers have implemented better compromise. 
But if I am the only person voting for such solution, then let's stop this discussion.\n> But in any case I think that calling ambuild to construct index for empty table is better solution than implementation of all indexes (and still not solving the problem with custom indexes).\nI made some improvements\n1 Support for all indexes on GTT (using index_build build empty index).\n2 Remove some ugly code in md.c bufmgr.c\n\nPlease give me feedback.\n\n\nWenjing", "msg_date": "Sun, 16 Feb 2020 23:07:19 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月15日 下午6:06,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n>> postgres=# insert into foo select generate_series(1,10000);\n>> INSERT 0 10000\n>> postgres=# \\dt+ foo\n>> List of relations\n>> ┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐\n>> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\n>> ╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡\n>> │ public │ foo │ table │ pavel │ session │ 384 kB │ │\n>> └────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘\n>> (1 row)\n>> \n>> postgres=# truncate foo;\n>> TRUNCATE TABLE\n>> postgres=# \\dt+ foo\n>> List of relations\n>> ┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐\n>> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\n>> ╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡\n>> │ public │ foo │ table │ pavel │ session │ 16 kB │ │\n>> └────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘\n>> (1 row)\n>> \n>> I expect zero size after truncate.\n> Thanks for review.\n> \n> I can explain, I don't think it's a bug.\n> The current implementation of the truncated GTT retains two blocks of FSM pages.\n> The same is true for truncating regular tables in subtransactions.\n> 
This is an implementation that truncates the table without changing the relfilenode of the table.\n> \n> \n> This is not extra important feature - now this is little bit a surprise, because I was not under transaction.\n> \n> Changing relfilenode, I think, is necessary, minimally for future VACUUM FULL support.\nNot allowing relfilenode changes is the current limit.\nI think can improve on it. But ,This is a bit complicated.\nso I'd like to know the necessity of this improvement.\nCould you give me more details?\n\n> \n> Regards\n> \n> Pavel Stehule\n> \n> \n> Wenjing\n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> > \n>> > -- \n>> > Robert Haas\n>> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> > The Enterprise PostgreSQL Company\n>> \n> \n\n\n2020年2月15日 下午6:06,Pavel Stehule <pavel.stehule@gmail.com> 写道:postgres=# insert into foo select generate_series(1,10000);INSERT 0 10000postgres=# \\dt+ foo                          List of relations┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐│ Schema │ Name │ Type  │ Owner │ Persistence │  Size  │ Description │╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡│ public │ foo  │ table │ pavel │ session     │ 384 kB │             │└────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘(1 row)postgres=# truncate foo;TRUNCATE TABLEpostgres=# \\dt+ foo                          List of relations┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐│ Schema │ Name │ Type  │ Owner │ Persistence │ Size  │ Description │╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡│ public │ foo  │ table │ pavel │ session     │ 16 kB │             │└────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘(1 row)I expect zero size after truncate.Thanks for review.I can explain, I don't think it's a bug.The current implementation of the truncated GTT retains two blocks of FSM 
pages.The same is true for truncating regular tables in subtransactions.This is an implementation that truncates the table without changing the relfilenode of the table.This is not extra important feature - now this is little bit a surprise, because I was not under transaction.Changing relfilenode, I think, is necessary, minimally for future VACUUM FULL support.Not allowing relfilenode changes is the current limit.I think can improve on it. But ,This is a bit complicated.so I'd like to know the necessity of this improvement.Could you give me more details?RegardsPavel Stehule WenjingRegardsPavel \n\n\nWenjing\n\n\n\n\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company", "msg_date": "Sun, 16 Feb 2020 23:16:19 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "ne 16. 2. 2020 v 16:15 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\r\nnapsal:\r\n\r\n>\r\n>\r\n> 2020年2月15日 下午6:06,Pavel Stehule <pavel.stehule@gmail.com> 写道:\r\n>\r\n>\r\n> postgres=# insert into foo select generate_series(1,10000);\r\n>> INSERT 0 10000\r\n>> postgres=# \\dt+ foo\r\n>> List of relations\r\n>> ┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐\r\n>> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\r\n>> ╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡\r\n>> │ public │ foo │ table │ pavel │ session │ 384 kB │ │\r\n>> └────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘\r\n>> (1 row)\r\n>>\r\n>> postgres=# truncate foo;\r\n>> TRUNCATE TABLE\r\n>> postgres=# \\dt+ foo\r\n>> List of relations\r\n>> ┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐\r\n>> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\r\n>> ╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡\r\n>> │ 
public │ foo │ table │ pavel │ session │ 16 kB │ │\r\n>> └────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘\r\n>> (1 row)\r\n>>\r\n>> I expect zero size after truncate.\r\n>>\r\n>> Thanks for review.\r\n>>\r\n>> I can explain, I don't think it's a bug.\r\n>> The current implementation of the truncated GTT retains two blocks of FSM\r\n>> pages.\r\n>> The same is true for truncating regular tables in subtransactions.\r\n>> This is an implementation that truncates the table without changing the\r\n>> relfilenode of the table.\r\n>>\r\n>>\r\n> This is not extra important feature - now this is little bit a surprise,\r\n> because I was not under transaction.\r\n>\r\n> Changing relfilenode, I think, is necessary, minimally for future VACUUM\r\n> FULL support.\r\n>\r\n> Not allowing relfilenode changes is the current limit.\r\n> I think can improve on it. But ,This is a bit complicated.\r\n> so I'd like to know the necessity of this improvement.\r\n> Could you give me more details?\r\n>\r\n\r\nI don't think so GTT without support of VACUUM FULL can be accepted. Just\r\ndue consistency.\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\n>\r\n> Regards\r\n>\r\n> Pavel Stehule\r\n>\r\n>\r\n>>\r\n>> Wenjing\r\n>>\r\n>>\r\n>> Regards\r\n>>\r\n>> Pavel\r\n>>\r\n>>\r\n>>>>\r\n>>>> Wenjing\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>> >\r\n>>>> > --\r\n>>>> > Robert Haas\r\n>>>> > EnterpriseDB: http://www.enterprisedb.com\r\n>>>> > The Enterprise PostgreSQL Company\r\n>>>>\r\n>>>>\r\n>>\r\n>\r\n\nne 16. 2. 
2020 v 16:15 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> napsal:2020年2月15日 下午6:06,Pavel Stehule <pavel.stehule@gmail.com> 写道:postgres=# insert into foo select generate_series(1,10000);INSERT 0 10000postgres=# \\dt+ foo                          List of relations┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐│ Schema │ Name │ Type  │ Owner │ Persistence │  Size  │ Description │╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡│ public │ foo  │ table │ pavel │ session     │ 384 kB │             │└────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘(1 row)postgres=# truncate foo;TRUNCATE TABLEpostgres=# \\dt+ foo                          List of relations┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐│ Schema │ Name │ Type  │ Owner │ Persistence │ Size  │ Description │╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡│ public │ foo  │ table │ pavel │ session     │ 16 kB │             │└────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘(1 row)I expect zero size after truncate.Thanks for review.I can explain, I don't think it's a bug.The current implementation of the truncated GTT retains two blocks of FSM pages.The same is true for truncating regular tables in subtransactions.This is an implementation that truncates the table without changing the relfilenode of the table.This is not extra important feature - now this is little bit a surprise, because I was not under transaction.Changing relfilenode, I think, is necessary, minimally for future VACUUM FULL support.Not allowing relfilenode changes is the current limit.I think can improve on it. But ,This is a bit complicated.so I'd like to know the necessity of this improvement.Could you give me more details?I don't think so GTT without support of VACUUM FULL can be accepted. Just due consistency. 
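To make the relfilenode point above concrete, here is a minimal psql sketch (the table name foo follows the thread; pg_relation_filenode() and pg_relation_size() are stock PostgreSQL functions) of what to compare before and after TRUNCATE — on a regular table the filenode changes, while the GTT patch as described reuses it, which is why the two retained FSM pages (~16 kB) remain:

```sql
-- Record the current storage file and size (foo as created earlier in the thread)
SELECT pg_relation_filenode('foo') AS node_before,
       pg_relation_size('foo')     AS size_before;

TRUNCATE foo;

-- On an ordinary table, node_after differs from node_before (TRUNCATE assigns
-- a new relfilenode). With the GTT patch discussed above the node is reused,
-- so ~16 kB of FSM pages survive instead of the size dropping to zero.
SELECT pg_relation_filenode('foo') AS node_after,
       pg_relation_size('foo')     AS size_after;
```

This also illustrates why VACUUM FULL support hinges on the same limitation: VACUUM FULL rewrites the table into a fresh relfilenode, which the patch does not yet allow for GTTs.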
RegardsPavel RegardsPavel Stehule WenjingRegardsPavel \n\n\r\nWenjing\n\n\n\n\r\n> \r\n> -- \r\n> Robert Haas\r\n> EnterpriseDB: http://www.enterprisedb.com\r\n> The Enterprise PostgreSQL Company", "msg_date": "Sun, 16 Feb 2020 16:22:46 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi,\r\nI have started testing the \"Global temporary table\" feature,\r\nfrom \"gtt_v11-pg13.patch\". Below is my findings:\r\n\r\n-- session 1:\r\npostgres=# create global temporary table gtt1(a int);\r\nCREATE TABLE\r\n\r\n-- seeeion 2:\r\npostgres=# truncate gtt1 ;\r\nERROR: could not open file \"base/13585/t3_16384\": No such file or directory\r\n\r\nis it expected?\r\n\r\nOn Sun, Feb 16, 2020 at 8:53 PM Pavel Stehule <pavel.stehule@gmail.com>\r\nwrote:\r\n\r\n>\r\n>\r\n> ne 16. 2. 2020 v 16:15 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\r\n> napsal:\r\n>\r\n>>\r\n>>\r\n>> 2020年2月15日 下午6:06,Pavel Stehule <pavel.stehule@gmail.com> 写道:\r\n>>\r\n>>\r\n>> postgres=# insert into foo select generate_series(1,10000);\r\n>>> INSERT 0 10000\r\n>>> postgres=# \\dt+ foo\r\n>>> List of relations\r\n>>> ┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐\r\n>>> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\r\n>>> ╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡\r\n>>> │ public │ foo │ table │ pavel │ session │ 384 kB │ │\r\n>>> └────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘\r\n>>> (1 row)\r\n>>>\r\n>>> postgres=# truncate foo;\r\n>>> TRUNCATE TABLE\r\n>>> postgres=# \\dt+ foo\r\n>>> List of relations\r\n>>> ┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐\r\n>>> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\r\n>>> ╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡\r\n>>> │ public │ foo │ table │ pavel │ session │ 16 kB │ │\r\n>>> 
└────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘\r\n>>> (1 row)\r\n>>>\r\n>>> I expect zero size after truncate.\r\n>>>\r\n>>> Thanks for review.\r\n>>>\r\n>>> I can explain, I don't think it's a bug.\r\n>>> The current implementation of the truncated GTT retains two blocks of\r\n>>> FSM pages.\r\n>>> The same is true for truncating regular tables in subtransactions.\r\n>>> This is an implementation that truncates the table without changing the\r\n>>> relfilenode of the table.\r\n>>>\r\n>>>\r\n>> This is not extra important feature - now this is little bit a surprise,\r\n>> because I was not under transaction.\r\n>>\r\n>> Changing relfilenode, I think, is necessary, minimally for future VACUUM\r\n>> FULL support.\r\n>>\r\n>> Not allowing relfilenode changes is the current limit.\r\n>> I think can improve on it. But ,This is a bit complicated.\r\n>> so I'd like to know the necessity of this improvement.\r\n>> Could you give me more details?\r\n>>\r\n>\r\n> I don't think so GTT without support of VACUUM FULL can be accepted. Just\r\n> due consistency.\r\n>\r\n> Regards\r\n>\r\n> Pavel\r\n>\r\n>\r\n>>\r\n>> Regards\r\n>>\r\n>> Pavel Stehule\r\n>>\r\n>>\r\n>>>\r\n>>> Wenjing\r\n>>>\r\n>>>\r\n>>> Regards\r\n>>>\r\n>>> Pavel\r\n>>>\r\n>>>\r\n>>>>>\r\n>>>>> Wenjing\r\n>>>>>\r\n>>>>>\r\n>>>>>\r\n>>>>>\r\n>>>>> >\r\n>>>>> > --\r\n>>>>> > Robert Haas\r\n>>>>> > EnterpriseDB: http://www.enterprisedb.com\r\n>>>>> > The Enterprise PostgreSQL Company\r\n>>>>>\r\n>>>>>\r\n>>>\r\n>>\r\n\r\n-- \r\n\r\nWith Regards,\r\nPrabhat Kumar Sahu\r\nEnterpriseDB: http://www.enterprisedb.com\r\n\nHi,I have started testing the \"Global temporary table\" feature,from \"gtt_v11-pg13.patch\". 
on it. But ,This is a bit complicated.so I'd like to know the necessity of this improvement.Could you give me more details?I don't think so GTT without support of VACUUM FULL can be accepted. Just due consistency. RegardsPavel RegardsPavel Stehule WenjingRegardsPavel \n\n\r\nWenjing\n\n\n\n\r\n> \r\n> -- \r\n> Robert Haas\r\n> EnterpriseDB: http://www.enterprisedb.com\r\n> The Enterprise PostgreSQL Company\n\n\n\n\n\n-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 21 Feb 2020 13:15:44 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi,\nI have started testing the \"Global temporary table\" feature,\nThat's great, I see hope.\nfrom \"gtt_v11-pg13.patch\". Below is my findings:\n\n-- session 1:\npostgres=# create global temporary table gtt1(a int);\nCREATE TABLE\n\n-- seeeion 2:\npostgres=# truncate gtt1 ;\nERROR: could not open file \"base/13585/t3_16384\": No such file or directory\n\nis it expected?\n\nOh ,this is a bug, I fixed it.\n\nWenjing\n\n\nOn Sun, Feb 16, 2020 at 8:53 PM Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> wrote:\n\n\nne 16. 2. 
2020 v 16:15 odesílatel 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n\n\n> 2020年2月15日 下午6:06,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n> \n> \n>> postgres=# insert into foo select generate_series(1,10000);\n>> INSERT 0 10000\n>> postgres=# \\dt+ foo\n>> List of relations\n>> ┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐\n>> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\n>> ╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡\n>> │ public │ foo │ table │ pavel │ session │ 384 kB │ │\n>> └────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘\n>> (1 row)\n>> \n>> postgres=# truncate foo;\n>> TRUNCATE TABLE\n>> postgres=# \\dt+ foo\n>> List of relations\n>> ┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐\n>> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\n>> ╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡\n>> │ public │ foo │ table │ pavel │ session │ 16 kB │ │\n>> └────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘\n>> (1 row)\n>> \n>> I expect zero size after truncate.\n> Thanks for review.\n> \n> I can explain, I don't think it's a bug.\n> The current implementation of the truncated GTT retains two blocks of FSM pages.\n> The same is true for truncating regular tables in subtransactions.\n> This is an implementation that truncates the table without changing the relfilenode of the table.\n> \n> \n> This is not extra important feature - now this is little bit a surprise, because I was not under transaction.\n> \n> Changing relfilenode, I think, is necessary, minimally for future VACUUM FULL support.\nNot allowing relfilenode changes is the current limit.\nI think can improve on it. 
But ,This is a bit complicated.\nso I'd like to know the necessity of this improvement.\nCould you give me more details?\n\nI don't think so GTT without support of VACUUM FULL can be accepted. Just due consistency. \n\nRegards\n\nPavel\n\n\n> \n> Regards\n> \n> Pavel Stehule\n> \n> \n> Wenjing\n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> > \n>> > -- \n>> > Robert Haas\n>> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> > The Enterprise PostgreSQL Company\n>> \n> \n\n\n\n-- \nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Fri, 21 Feb 2020 23:40:14 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n\n> Hi,\n> I have started testing the \"Global temporary table\" feature,\n> That's great, I see hope.\n> from \"gtt_v11-pg13.patch\". 
Below is my findings:\n>\n> -- session 1:\n> postgres=# create global temporary table gtt1(a int);\n> CREATE TABLE\n>\n> -- seeeion 2:\n> postgres=# truncate gtt1 ;\n> ERROR: could not open file \"base/13585/t3_16384\": No such file or\n> directory\n>\n> is it expected?\n>\n> Oh ,this is a bug, I fixed it.\n>\nThanks for the patch.\nI have verified the same, Now the issue is resolved with v12 patch.\n\nKindly confirm the below scenario:\n\npostgres=# create global temporary table gtt1 (c1 int unique);\nCREATE TABLE\n\npostgres=# create global temporary table gtt2 (c1 int references gtt1(c1) );\nERROR: referenced relation \"gtt1\" is not a global temp table\n\npostgres=# create table tab2 (c1 int references gtt1(c1) );\nERROR: referenced relation \"gtt1\" is not a global temp table\n\nThanks,\nPrabhat Sahu\n\nOn Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:Hi,I have started testing the \"Global temporary table\" feature,That's great, I see hope.from \"gtt_v11-pg13.patch\". 
Below is my findings:-- session 1:postgres=# create global temporary table gtt1(a int);CREATE TABLE-- seeeion 2:postgres=# truncate gtt1 ;ERROR:  could not open file \"base/13585/t3_16384\": No such file or directoryis it expected?Oh ,this is a bug, I fixed it.Thanks for the patch.I have verified the same, Now the issue is resolved with v12 patch.Kindly confirm the below scenario:postgres=# create global temporary table gtt1 (c1 int unique);CREATE TABLEpostgres=# create global temporary table gtt2 (c1 int references gtt1(c1) );ERROR:  referenced relation \"gtt1\" is not a global temp tablepostgres=# create table tab2 (c1 int references gtt1(c1) );ERROR:  referenced relation \"gtt1\" is not a global temp tableThanks, Prabhat Sahu", "msg_date": "Mon, 24 Feb 2020 15:14:34 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi All,\n\nI observe a different behavior in \"temporary table\" and \"global temporary\ntable\".\nNot sure if it is expected?\n\npostgres=# create global temporary table parent1(a int) on commit delete\nrows;\nCREATE TABLE\npostgres=# create global temporary table child1() inherits (parent1);\nCREATE TABLE\npostgres=# insert into parent1 values(1);\nINSERT 0 1\npostgres=# insert into child1 values(2);\nINSERT 0 1\npostgres=# select * from parent1;\n a\n---\n(0 rows)\n\npostgres=# select * from child1;\n a\n---\n(0 rows)\n\n\npostgres=# create temporary table parent2(a int) on commit delete rows;\nCREATE TABLE\npostgres=# create temporary table child2() inherits (parent2);\nCREATE TABLE\npostgres=# insert into parent2 values(1);\nINSERT 0 1\npostgres=# insert into child2 values(2);\nINSERT 0 1\npostgres=# select * from parent2;\n a\n---\n 2\n(1 row)\n\npostgres=# select * from child2;\n a\n---\n 2\n(1 row)\n\n\nThanks,\nPrabhat Sahu\n\nHi All,I observe a different behavior in  \"temporary table\" and \"global temporary table\".Not sure 
if it is expected?postgres=# create global temporary table parent1(a int)  on commit delete rows;CREATE TABLEpostgres=# create global temporary table child1() inherits (parent1);CREATE TABLEpostgres=# insert into parent1 values(1);INSERT 0 1postgres=# insert into child1 values(2);INSERT 0 1postgres=# select * from parent1; a ---(0 rows)postgres=# select * from child1; a ---(0 rows)postgres=# create temporary table parent2(a int)  on commit delete rows;CREATE TABLEpostgres=# create temporary table child2() inherits (parent2);CREATE TABLEpostgres=# insert into parent2 values(1);INSERT 0 1postgres=# insert into child2 values(2);INSERT 0 1postgres=# select * from parent2; a --- 2(1 row)postgres=# select * from child2; a --- 2(1 row)Thanks,Prabhat Sahu", "msg_date": "Mon, 24 Feb 2020 19:04:43 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "po 24. 2. 2020 v 14:34 odesílatel Prabhat Sahu <\nprabhat.sahu@enterprisedb.com> napsal:\n\n> Hi All,\n>\n> I observe a different behavior in \"temporary table\" and \"global temporary\n> table\".\n> Not sure if it is expected?\n>\n> postgres=# create global temporary table parent1(a int) on commit delete\n> rows;\n> CREATE TABLE\n> postgres=# create global temporary table child1() inherits (parent1);\n> CREATE TABLE\n> postgres=# insert into parent1 values(1);\n> INSERT 0 1\n> postgres=# insert into child1 values(2);\n> INSERT 0 1\n> postgres=# select * from parent1;\n> a\n> ---\n> (0 rows)\n>\n> postgres=# select * from child1;\n> a\n> ---\n> (0 rows)\n>\n\nIt is bug. 
Probably INHERITS clause is not well implemented for GTT\n\n\n\n>\n> postgres=# create temporary table parent2(a int) on commit delete rows;\n> CREATE TABLE\n> postgres=# create temporary table child2() inherits (parent2);\n> CREATE TABLE\n> postgres=# insert into parent2 values(1);\n> INSERT 0 1\n> postgres=# insert into child2 values(2);\n> INSERT 0 1\n> postgres=# select * from parent2;\n> a\n> ---\n> 2\n> (1 row)\n>\n> postgres=# select * from child2;\n> a\n> ---\n> 2\n> (1 row)\n>\n>\n> Thanks,\n> Prabhat Sahu\n>\n>\n\npo 24. 2. 2020 v 14:34 odesílatel Prabhat Sahu <prabhat.sahu@enterprisedb.com> napsal:Hi All,I observe a different behavior in  \"temporary table\" and \"global temporary table\".Not sure if it is expected?postgres=# create global temporary table parent1(a int)  on commit delete rows;CREATE TABLEpostgres=# create global temporary table child1() inherits (parent1);CREATE TABLEpostgres=# insert into parent1 values(1);INSERT 0 1postgres=# insert into child1 values(2);INSERT 0 1postgres=# select * from parent1; a ---(0 rows)postgres=# select * from child1; a ---(0 rows)It is bug. 
Probably INHERITS clause is not well implemented for GTT postgres=# create temporary table parent2(a int)  on commit delete rows;CREATE TABLEpostgres=# create temporary table child2() inherits (parent2);CREATE TABLEpostgres=# insert into parent2 values(1);INSERT 0 1postgres=# insert into child2 values(2);INSERT 0 1postgres=# select * from parent2; a --- 2(1 row)postgres=# select * from child2; a --- 2(1 row)Thanks,Prabhat Sahu", "msg_date": "Mon, 24 Feb 2020 14:41:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月24日 下午9:34,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi All,\n> \n> I observe a different behavior in \"temporary table\" and \"global temporary table\".\n> Not sure if it is expected?\n> \n> postgres=# create global temporary table parent1(a int) on commit delete rows;\n> CREATE TABLE\n> postgres=# create global temporary table child1() inherits (parent1);\n> CREATE TABLE\n> postgres=# insert into parent1 values(1);\n> INSERT 0 1\n> postgres=# insert into child1 values(2);\n> INSERT 0 1\n> postgres=# select * from parent1;\n> a \n> ---\n> (0 rows)\n> \n> postgres=# select * from child1;\n> a \n> ---\n> (0 rows)\nBecause child1 inherits its father's on commit property.\nI can make GTT behave like local temp table.\n\n\n> \n> \n> postgres=# create temporary table parent2(a int) on commit delete rows;\n> CREATE TABLE\n> postgres=# create temporary table child2() inherits (parent2);\n> CREATE TABLE\n> postgres=# insert into parent2 values(1);\n> INSERT 0 1\n> postgres=# insert into child2 values(2);\n> INSERT 0 1\n> postgres=# select * from parent2;\n> a \n> ---\n> 2\n> (1 row)\n> \n> postgres=# select * from child2;\n> a \n> ---\n> 2\n> (1 row)\n> \n> \n> Thanks,\n> Prabhat Sahu\n> \n\n\n2020年2月24日 下午9:34,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:Hi All,I observe a different behavior in  \"temporary table\" and 
\"global temporary table\".Not sure if it is expected?postgres=# create global temporary table parent1(a int)  on commit delete rows;CREATE TABLEpostgres=# create global temporary table child1() inherits (parent1);CREATE TABLEpostgres=# insert into parent1 values(1);INSERT 0 1postgres=# insert into child1 values(2);INSERT 0 1postgres=# select * from parent1; a ---(0 rows)postgres=# select * from child1; a ---(0 rows)Because child1 inherits its father's on commit property.I can make GTT behave like local temp table.postgres=# create temporary table parent2(a int)  on commit delete rows;CREATE TABLEpostgres=# create temporary table child2() inherits (parent2);CREATE TABLEpostgres=# insert into parent2 values(1);INSERT 0 1postgres=# insert into child2 values(2);INSERT 0 1postgres=# select * from parent2; a --- 2(1 row)postgres=# select * from child2; a --- 2(1 row)Thanks,Prabhat Sahu", "msg_date": "Mon, 24 Feb 2020 21:57:26 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月24日 下午9:41,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> po 24. 2. 2020 v 14:34 odesílatel Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> napsal:\n> Hi All,\n> \n> I observe a different behavior in \"temporary table\" and \"global temporary table\".\n> Not sure if it is expected?\n> \n> postgres=# create global temporary table parent1(a int) on commit delete rows;\n> CREATE TABLE\n> postgres=# create global temporary table child1() inherits (parent1);\n> CREATE TABLE\n> postgres=# insert into parent1 values(1);\n> INSERT 0 1\n> postgres=# insert into child1 values(2);\n> INSERT 0 1\n> postgres=# select * from parent1;\n> a \n> ---\n> (0 rows)\n> \n> postgres=# select * from child1;\n> a \n> ---\n> (0 rows)\n> \n> It is bug. 
Probably INHERITS clause is not well implemented for GTT\nI fixed the GTT's behavior, like local temp table.\n\n\nWenjing\n\n\n\n> \n> \n> \n> \n> postgres=# create temporary table parent2(a int) on commit delete rows;\n> CREATE TABLE\n> postgres=# create temporary table child2() inherits (parent2);\n> CREATE TABLE\n> postgres=# insert into parent2 values(1);\n> INSERT 0 1\n> postgres=# insert into child2 values(2);\n> INSERT 0 1\n> postgres=# select * from parent2;\n> a \n> ---\n> 2\n> (1 row)\n> \n> postgres=# select * from child2;\n> a \n> ---\n> 2\n> (1 row)\n> \n> \n> Thanks,\n> Prabhat Sahu\n>", "msg_date": "Tue, 25 Feb 2020 16:53:21 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月24日 下午5:44,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> Hi,\n> I have started testing the \"Global temporary table\" feature,\n> That's great, I see hope.\n> from \"gtt_v11-pg13.patch\". 
Below is my findings:\n> \n> -- session 1:\n> postgres=# create global temporary table gtt1(a int);\n> CREATE TABLE\n> \n> -- seeeion 2:\n> postgres=# truncate gtt1 ;\n> ERROR: could not open file \"base/13585/t3_16384\": No such file or directory\n> \n> is it expected?\n> \n> Oh ,this is a bug, I fixed it.\n> Thanks for the patch.\n> I have verified the same, Now the issue is resolved with v12 patch.\n> \n> Kindly confirm the below scenario:\n> \n> postgres=# create global temporary table gtt1 (c1 int unique);\n> CREATE TABLE\n> \n> postgres=# create global temporary table gtt2 (c1 int references gtt1(c1) );\n> ERROR: referenced relation \"gtt1\" is not a global temp table\n> \n> postgres=# create table tab2 (c1 int references gtt1(c1) );\n> ERROR: referenced relation \"gtt1\" is not a global temp table\n> \n> Thanks, \n> Prabhat Sahu\n\nGTT supports foreign key constraints in global_temporary_table_v13-pg13.patch\n\n\nWenjing", "msg_date": "Tue, 25 Feb 2020 16:55:46 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi All,\n\nPlease check the below findings on GTT.\n*-- Scenario 1:*\nUnder \"information_schema\", We are not allowed to create \"temporary table\",\nwhereas we can CREATE/DROP \"Global Temporary Table\", is it expected ?\n\npostgres=# create temporary table information_schema.temp1(c1 int);\nERROR: cannot create temporary relation in non-temporary schema\nLINE 1: create temporary table information_schema.temp1(c1 int);\n ^\n\npostgres=# create global temporary table information_schema.temp1(c1 int);\nCREATE TABLE\n\npostgres=# drop table information_schema.temp1 ;\nDROP TABLE\n\n*-- Scenario 2:*\nHere I am getting the same error message in both the below cases.\nWe may add a \"global\" keyword with GTT related error message.\n\npostgres=# create global temporary table gtt1 (c1 int unique);\nCREATE 
TABLE\npostgres=# create temporary table tmp1 (c1 int unique);\nCREATE TABLE\n\npostgres=# create temporary table tmp2 (c1 int references gtt1(c1) );\nERROR: constraints on temporary tables may reference only temporary tables\n\npostgres=# create global temporary table gtt2 (c1 int references tmp1(c1) );\nERROR: constraints on temporary tables may reference only temporary tables\n\nThanks,\nPrabhat Sahu\n\nOn Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n>\n> 2020年2月24日 下午5:44,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>\n> On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> wrote:\n>\n>> Hi,\n>> I have started testing the \"Global temporary table\" feature,\n>> That's great, I see hope.\n>> from \"gtt_v11-pg13.patch\". Below is my findings:\n>>\n>> -- session 1:\n>> postgres=# create global temporary table gtt1(a int);\n>> CREATE TABLE\n>>\n>> -- seeeion 2:\n>> postgres=# truncate gtt1 ;\n>> ERROR: could not open file \"base/13585/t3_16384\": No such file or\n>> directory\n>>\n>> is it expected?\n>>\n>> Oh ,this is a bug, I fixed it.\n>>\n> Thanks for the patch.\n> I have verified the same, Now the issue is resolved with v12 patch.\n>\n> Kindly confirm the below scenario:\n>\n> postgres=# create global temporary table gtt1 (c1 int unique);\n> CREATE TABLE\n>\n> postgres=# create global temporary table gtt2 (c1 int references gtt1(c1)\n> );\n> ERROR: referenced relation \"gtt1\" is not a global temp table\n>\n> postgres=# create table tab2 (c1 int references gtt1(c1) );\n> ERROR: referenced relation \"gtt1\" is not a global temp table\n>\n> Thanks,\n> Prabhat Sahu\n>\n>\n> GTT supports foreign key constraints\n> in global_temporary_table_v13-pg13.patch\n>\n>\n> Wenjing\n>\n>\n>\n>\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\nHi All,Please check the below findings on GTT.-- Scenario 1:Under \"information_schema\", We are not allowed to create \"temporary 
Below is my findings:-- session 1:postgres=# create global temporary table gtt1(a int);CREATE TABLE-- seeeion 2:postgres=# truncate gtt1 ;ERROR:  could not open file \"base/13585/t3_16384\": No such file or directoryis it expected?Oh ,this is a bug, I fixed it.Thanks for the patch.I have verified the same, Now the issue is resolved with v12 patch.Kindly confirm the below scenario:postgres=# create global temporary table gtt1 (c1 int unique);CREATE TABLEpostgres=# create global temporary table gtt2 (c1 int references gtt1(c1) );ERROR:  referenced relation \"gtt1\" is not a global temp tablepostgres=# create table tab2 (c1 int references gtt1(c1) );ERROR:  referenced relation \"gtt1\" is not a global temp tableThanks, Prabhat Sahu\nGTT supports foreign key constraints in global_temporary_table_v13-pg13.patchWenjing-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 25 Feb 2020 19:06:07 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "út 25. 2. 2020 v 14:36 odesílatel Prabhat Sahu <\nprabhat.sahu@enterprisedb.com> napsal:\n\n> Hi All,\n>\n> Please check the below findings on GTT.\n> *-- Scenario 1:*\n> Under \"information_schema\", We are not allowed to create \"temporary\n> table\", whereas we can CREATE/DROP \"Global Temporary Table\", is it expected\n> ?\n>\n\nIt is ok for me. temporary tables should be created only in proprietary\nschema. 
For GTT there is not risk of collision, so it can be created in any\nschema where are necessary access rights.\n\nPavel\n\n\n> postgres=# create temporary table information_schema.temp1(c1 int);\n> ERROR: cannot create temporary relation in non-temporary schema\n> LINE 1: create temporary table information_schema.temp1(c1 int);\n> ^\n>\n> postgres=# create global temporary table information_schema.temp1(c1 int);\n> CREATE TABLE\n>\n> postgres=# drop table information_schema.temp1 ;\n> DROP TABLE\n>\n> *-- Scenario 2:*\n> Here I am getting the same error message in both the below cases.\n> We may add a \"global\" keyword with GTT related error message.\n>\n> postgres=# create global temporary table gtt1 (c1 int unique);\n> CREATE TABLE\n> postgres=# create temporary table tmp1 (c1 int unique);\n> CREATE TABLE\n>\n> postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );\n> ERROR: constraints on temporary tables may reference only temporary tables\n>\n> postgres=# create global temporary table gtt2 (c1 int references tmp1(c1)\n> );\n> ERROR: constraints on temporary tables may reference only temporary tables\n>\n> Thanks,\n> Prabhat Sahu\n>\n> On Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> wrote:\n>\n>>\n>>\n>> 2020年2月24日 下午5:44,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>>\n>> On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n>> wrote:\n>>\n>>> Hi,\n>>> I have started testing the \"Global temporary table\" feature,\n>>> That's great, I see hope.\n>>> from \"gtt_v11-pg13.patch\". 
Below is my findings:\n>>>\n>>> -- session 1:\n>>> postgres=# create global temporary table gtt1(a int);\n>>> CREATE TABLE\n>>>\n>>> -- seeeion 2:\n>>> postgres=# truncate gtt1 ;\n>>> ERROR: could not open file \"base/13585/t3_16384\": No such file or\n>>> directory\n>>>\n>>> is it expected?\n>>>\n>>> Oh ,this is a bug, I fixed it.\n>>>\n>> Thanks for the patch.\n>> I have verified the same, Now the issue is resolved with v12 patch.\n>>\n>> Kindly confirm the below scenario:\n>>\n>> postgres=# create global temporary table gtt1 (c1 int unique);\n>> CREATE TABLE\n>>\n>> postgres=# create global temporary table gtt2 (c1 int references gtt1(c1)\n>> );\n>> ERROR: referenced relation \"gtt1\" is not a global temp table\n>>\n>> postgres=# create table tab2 (c1 int references gtt1(c1) );\n>> ERROR: referenced relation \"gtt1\" is not a global temp table\n>>\n>> Thanks,\n>> Prabhat Sahu\n>>\n>>\n>> GTT supports foreign key constraints\n>> in global_temporary_table_v13-pg13.patch\n>>\n>>\n>> Wenjing\n>>\n>>\n>>\n>>\n>\n> --\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Tue, 25 Feb 2020 14:49:32 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi ,\n\npg_upgrade  scenario is failing if database is containing  global \ntemporary table\n\n=============================\ncentos@tushar-ldap-docker bin]$ ./psql postgres\npsql (13devel)\nType \"help\" for help.\n\npostgres=# create global temporary table  t(n int);\nCREATE TABLE\npostgres=# \\q\n===============================\n\nrun pg_upgrade -\n\n[centos@tushar-ldap-docker bin]$ ./pg_upgrade -d /tmp/t1/ -D /tmp/t2 -b \n. 
-B .\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions    ok\nChecking database user is the install user                   ok\nChecking database connection settings                       ok\nChecking for prepared transactions                             ok\nChecking for reg* data types in user tables                 ok\n--\n--\nIf pg_upgrade fails after this point, you must re-initdb the\nnew cluster before continuing.\n\nPerforming Upgrade\n------------------\nAnalyzing all rows in the new cluster                        ok\nFreezing all rows in the new cluster                          ok\nDeleting files from new pg_xact                                ok\n--\n--\nRestoring database schemas in the new cluster\nok\nCopying user relation files\n   /tmp/t1/base/13585/16384\nerror while copying relation \"public.t\": could not open file \n\"/tmp/t1/base/13585/16384\": No such file or directory\nFailure, exiting\n\nregards,\n\nOn 2/25/20 7:06 PM, Prabhat Sahu wrote:\n> Hi All,\n>\n> Please check the below findings on GTT.\n> _-- Scenario 1:_\n> Under \"information_schema\", We are not allowed to create \"temporary \n> table\", whereas we can CREATE/DROP \"Global Temporary Table\", is it \n> expected ?\n>\n> postgres=# create temporary table information_schema.temp1(c1 int);\n> ERROR:  cannot create temporary relation in non-temporary schema\n> LINE 1: create temporary table information_schema.temp1(c1 int);\n>                                ^\n>\n> postgres=# create global temporary table information_schema.temp1(c1 int);\n> CREATE TABLE\n>\n> postgres=# drop table information_schema.temp1 ;\n> DROP TABLE\n>\n> _-- Scenario 2:_\n> Here I am getting the same error message in both the below cases.\n> We may add a \"global\" keyword with GTT related error message.\n>\n> postgres=# create global temporary table gtt1 (c1 int unique);\n> CREATE TABLE\n> postgres=# create temporary table tmp1 (c1 int unique);\n> CREATE TABLE\n>\n> 
postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );\n> ERROR:  constraints on temporary tables may reference only temporary \n> tables\n>\n> postgres=# create global temporary table gtt2 (c1 int references \n> tmp1(c1) );\n> ERROR:  constraints on temporary tables may reference only temporary \n> tables\n>\n> Thanks,\n> Prabhat Sahu\n>\n> On Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com \n> <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>\n>\n>\n>> 2020年2月24日 下午5:44,Prabhat Sahu <prabhat.sahu@enterprisedb.com\n>> <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>\n>> On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从)\n>> <wenjing.zwj@alibaba-inc.com\n>> <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>>\n>> Hi,\n>> I have started testing the \"Global temporary table\" feature,\n>> That's great, I see hope.\n>> from \"gtt_v11-pg13.patch\". Below is my findings:\n>>\n>> -- session 1:\n>> postgres=# create global temporary table gtt1(a int);\n>> CREATE TABLE\n>>\n>> -- seeeion 2:\n>> postgres=# truncate gtt1 ;\n>> ERROR:  could not open file \"base/13585/t3_16384\": No such\n>> file or directory\n>>\n>> is it expected?\n>>\n>> Oh ,this is a bug, I fixed it.\n>>\n>> Thanks for the patch.\n>> I have verified the same, Now the issue is resolved with v12 patch.\n>>\n>> Kindly confirm the below scenario:\n>>\n>> postgres=# create global temporary table gtt1 (c1 int unique);\n>> CREATE TABLE\n>>\n>> postgres=# create global temporary table gtt2 (c1 int references\n>> gtt1(c1) );\n>> ERROR:  referenced relation \"gtt1\" is not a global temp table\n>>\n>> postgres=# create table tab2 (c1 int references gtt1(c1) );\n>> ERROR:  referenced relation \"gtt1\" is not a global temp table\n>>\n>> Thanks,\n>> Prabhat Sahu\n>\n> GTT supports foreign key constraints\n> in global_temporary_table_v13-pg13.patch\n>\n>\n> Wenjing\n>\n>\n>\n>\n>\n> -- \n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com 
<http://www.enterprisedb.com/>\n>\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 25 Feb 2020 19:26:02 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi,\n\nI have created two  global temporary tables like this -\n\nCase 1-\npostgres=# create global  temp table foo(n int) *with \n(on_c*ommit_delete_rows='true');\nCREATE TABLE\n\nCase 2-\npostgres=# create global  temp table bar1(n int) *on c*ommit delete rows;\nCREATE TABLE\n\n\nbut   if i try to do the same having only 'temp' keyword , Case 2 is \nworking fine but getting this error  for case 1 -\n\npostgres=# create   temp table foo1(n int) with \n(on_commit_delete_rows='true');\nERROR:  regular table cannot 
specifie on_commit_delete_rows\npostgres=#\n\npostgres=#  create   temp table bar1(n int) on commit delete rows;\nCREATE TABLE\n\ni think this error message need to be more clear .\n\nregards,\ntushar\n\nOn 2/25/20 7:19 PM, Pavel Stehule wrote/:\n>\n>\n> út 25. 2. 2020 v 14:36 odesílatel Prabhat Sahu \n> <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> \n> napsal:\n>\n> Hi All,\n>\n> Please check the below findings on GTT.\n> _-- Scenario 1:_\n> Under \"information_schema\", We are not allowed to create\n> \"temporary table\", whereas we can CREATE/DROP \"Global Temporary\n> Table\", is it expected ?\n>\n>\n> It is ok for me. temporary tables should be created only in \n> proprietary schema. For GTT there is not risk of collision, so it can \n> be created in any schema where are necessary access rights.\n>\n> Pavel\n>\n>\n> postgres=# create temporary table information_schema.temp1(c1 int);\n> ERROR:  cannot create temporary relation in non-temporary schema\n> LINE 1: create temporary table information_schema.temp1(c1 int);\n>                                ^\n>\n> postgres=# create global temporary table\n> information_schema.temp1(c1 int);\n> CREATE TABLE\n>\n> postgres=# drop table information_schema.temp1 ;\n> DROP TABLE\n>\n> _-- Scenario 2:_\n> Here I am getting the same error message in both the below cases.\n> We may add a \"global\" keyword with GTT related error message.\n>\n> postgres=# create global temporary table gtt1 (c1 int unique);\n> CREATE TABLE\n> postgres=# create temporary table tmp1 (c1 int unique);\n> CREATE TABLE\n>\n> postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );\n> ERROR:  constraints on temporary tables may reference only\n> temporary tables\n>\n> postgres=# create global temporary table gtt2 (c1 int references\n> tmp1(c1) );\n> ERROR:  constraints on temporary tables may reference only\n> temporary tables\n>\n> Thanks,\n> Prabhat Sahu\n>\n> On Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从)\n> 
<wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>>\n> wrote:\n>\n>\n>\n>> 2020年2月24日 下午5:44,Prabhat Sahu\n>> <prabhat.sahu@enterprisedb.com\n>> <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>\n>> On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从)\n>> <wenjing.zwj@alibaba-inc.com\n>> <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>>\n>> Hi,\n>> I have started testing the \"Global temporary table\" feature,\n>> That's great, I see hope.\n>> from \"gtt_v11-pg13.patch\". Below is my findings:\n>>\n>> -- session 1:\n>> postgres=# create global temporary table gtt1(a int);\n>> CREATE TABLE\n>>\n>> -- seeeion 2:\n>> postgres=# truncate gtt1 ;\n>> ERROR:  could not open file \"base/13585/t3_16384\": No\n>> such file or directory\n>>\n>> is it expected?\n>>\n>> Oh ,this is a bug, I fixed it.\n>>\n>> Thanks for the patch.\n>> I have verified the same, Now the issue is resolved with v12\n>> patch.\n>>\n>> Kindly confirm the below scenario:\n>>\n>> postgres=# create global temporary table gtt1 (c1 int unique);\n>> CREATE TABLE\n>>\n>> postgres=# create global temporary table gtt2 (c1 int\n>> references gtt1(c1) );\n>> ERROR:  referenced relation \"gtt1\" is not a global temp table\n>>\n>> postgres=# create table tab2 (c1 int references gtt1(c1) );\n>> ERROR:  referenced relation \"gtt1\" is not a global temp table\n>>\n>> Thanks,\n>> Prabhat Sahu\n>\n> GTT supports foreign key constraints\n> in global_temporary_table_v13-pg13.patch\n>\n>\n> Wenjing\n>\n>\n>\n>\n>\n> -- \n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n> <http://www.enterprisedb.com/>\n>\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 25 Feb 2020 21:01:09 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": 
false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Thanks for review.\n\n\n> 2020年2月25日 下午9:56,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> Hi ,\n> \n> pg_upgrade scenario is failing if database is containing global temporary table \n> \n> =============================\n> centos@tushar-ldap-docker bin]$ ./psql postgres\n> psql (13devel)\n> Type \"help\" for help.\n> \n> postgres=# create global temporary table t(n int);\n> CREATE TABLE\n> postgres=# \\q\n> ===============================\n> \n> run pg_upgrade -\n> \n> [centos@tushar-ldap-docker bin]$ ./pg_upgrade -d /tmp/t1/ -D /tmp/t2 -b . -B . \n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions ok\n> Checking database user is the install user ok\n> Checking database connection settings ok\n> Checking for prepared transactions ok\n> Checking for reg* data types in user tables ok\n> --\n> --\n> If pg_upgrade fails after this point, you must re-initdb the\n> new cluster before continuing.\n> \n> Performing Upgrade\n> ------------------\n> Analyzing all rows in the new cluster ok\n> Freezing all rows in the new cluster ok\n> Deleting files from new pg_xact ok\n> --\n> --\n> Restoring database schemas in the new cluster\n> ok\n> Copying user relation files\n> /tmp/t1/base/13585/16384 \n> error while copying relation \"public.t\": could not open file \"/tmp/t1/base/13585/16384\": No such file or directory\n> Failure, exiting\nThis is a bug.\nI fixed in global_temporary_table_v14-pg13.patch\n\n\nWenjing\n\n\n\n\n> \n> regards,\n> \n> On 2/25/20 7:06 PM, Prabhat Sahu wrote:\n>> Hi All,\n>> \n>> Please check the below findings on GTT.\n>> -- Scenario 1:\n>> Under \"information_schema\", We are not allowed to create \"temporary table\", whereas we can CREATE/DROP \"Global Temporary Table\", is it expected ?\n>> \n>> postgres=# create temporary table information_schema.temp1(c1 int);\n>> ERROR: cannot create temporary relation in 
non-temporary schema\n>> LINE 1: create temporary table information_schema.temp1(c1 int);\n>> ^\n>> \n>> postgres=# create global temporary table information_schema.temp1(c1 int);\n>> CREATE TABLE\n>> \n>> postgres=# drop table information_schema.temp1 ;\n>> DROP TABLE\n>> \n>> -- Scenario 2:\n>> Here I am getting the same error message in both the below cases.\n>> We may add a \"global\" keyword with GTT related error message.\n>> \n>> postgres=# create global temporary table gtt1 (c1 int unique);\n>> CREATE TABLE\n>> postgres=# create temporary table tmp1 (c1 int unique);\n>> CREATE TABLE\n>> \n>> postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );\n>> ERROR: constraints on temporary tables may reference only temporary tables\n>> \n>> postgres=# create global temporary table gtt2 (c1 int references tmp1(c1) );\n>> ERROR: constraints on temporary tables may reference only temporary tables\n>> \n>> Thanks,\n>> Prabhat Sahu\n>> \n>> On Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>> \n>> \n>>> 2020年2月24日 下午5:44,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>> \n>>> On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>>> Hi,\n>>> I have started testing the \"Global temporary table\" feature,\n>>> That's great, I see hope.\n>>> from \"gtt_v11-pg13.patch\". 
Below is my findings:\n>>> \n>>> -- session 1:\n>>> postgres=# create global temporary table gtt1(a int);\n>>> CREATE TABLE\n>>> \n>>> -- seeeion 2:\n>>> postgres=# truncate gtt1 ;\n>>> ERROR: could not open file \"base/13585/t3_16384\": No such file or directory\n>>> \n>>> is it expected?\n>>> \n>>> Oh ,this is a bug, I fixed it.\n>>> Thanks for the patch.\n>>> I have verified the same, Now the issue is resolved with v12 patch.\n>>> \n>>> Kindly confirm the below scenario:\n>>> \n>>> postgres=# create global temporary table gtt1 (c1 int unique);\n>>> CREATE TABLE\n>>> \n>>> postgres=# create global temporary table gtt2 (c1 int references gtt1(c1) );\n>>> ERROR: referenced relation \"gtt1\" is not a global temp table\n>>> \n>>> postgres=# create table tab2 (c1 int references gtt1(c1) );\n>>> ERROR: referenced relation \"gtt1\" is not a global temp table\n>>> \n>>> Thanks, \n>>> Prabhat Sahu\n>> \n>> GTT supports foreign key constraints in global_temporary_table_v13-pg13.patch\n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> \n>> -- \n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n> The Enterprise PostgreSQL Company", "msg_date": "Wed, 26 Feb 2020 23:52:38 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月25日 下午11:31,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> Hi,\n> \n> I have created two global temporary tables like this -\n> \n> Case 1- \n> postgres=# create global temp table foo(n int) with (on_commit_delete_rows='true');\n> CREATE TABLE\n> \n> Case 2- \n> postgres=# create global temp table bar1(n int) on commit delete rows;\n> CREATE TABLE\n> \n> \n> but if i try to do the same having only 'temp' keyword , Case 2 is working fine but 
getting this error for case 1 -\n> \n> postgres=# create temp table foo1(n int) with (on_commit_delete_rows='true');\n> ERROR: regular table cannot specifie on_commit_delete_rows\n> postgres=# \n> \n> postgres=# create temp table bar1(n int) on commit delete rows;\n> CREATE TABLE\n> \n> i think this error message need to be more clear .\nAlso fixed in global_temporary_table_v14-pg13.patch\n\nWenjing\n\n\n\n> \n> regards,\n> tushar \n> \n> On 2/25/20 7:19 PM, Pavel Stehule wrote/:\n>> \n>> \n>> út 25. 2. 2020 v 14:36 odesílatel Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> napsal:\n>> Hi All,\n>> \n>> Please check the below findings on GTT.\n>> -- Scenario 1:\n>> Under \"information_schema\", We are not allowed to create \"temporary table\", whereas we can CREATE/DROP \"Global Temporary Table\", is it expected ?\n>> \n>> It is ok for me. temporary tables should be created only in proprietary schema. For GTT there is not risk of collision, so it can be created in any schema where are necessary access rights.\n>> \n>> Pavel\n>> \n>> \n>> postgres=# create temporary table information_schema.temp1(c1 int);\n>> ERROR: cannot create temporary relation in non-temporary schema\n>> LINE 1: create temporary table information_schema.temp1(c1 int);\n>> ^\n>> \n>> postgres=# create global temporary table information_schema.temp1(c1 int);\n>> CREATE TABLE\n>> \n>> postgres=# drop table information_schema.temp1 ;\n>> DROP TABLE\n>> \n>> -- Scenario 2:\n>> Here I am getting the same error message in both the below cases.\n>> We may add a \"global\" keyword with GTT related error message.\n>> \n>> postgres=# create global temporary table gtt1 (c1 int unique);\n>> CREATE TABLE\n>> postgres=# create temporary table tmp1 (c1 int unique);\n>> CREATE TABLE\n>> \n>> postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );\n>> ERROR: constraints on temporary tables may reference only temporary tables\n>> \n>> postgres=# create global 
temporary table gtt2 (c1 int references tmp1(c1) );\n>> ERROR: constraints on temporary tables may reference only temporary tables\n>> \n>> Thanks,\n>> Prabhat Sahu\n>> \n>> On Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>> \n>> \n>>> 2020年2月24日 下午5:44,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>> \n>>> On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>>> Hi,\n>>> I have started testing the \"Global temporary table\" feature,\n>>> That's great, I see hope.\n>>> from \"gtt_v11-pg13.patch\". Below is my findings:\n>>> \n>>> -- session 1:\n>>> postgres=# create global temporary table gtt1(a int);\n>>> CREATE TABLE\n>>> \n>>> -- seeeion 2:\n>>> postgres=# truncate gtt1 ;\n>>> ERROR: could not open file \"base/13585/t3_16384\": No such file or directory\n>>> \n>>> is it expected?\n>>> \n>>> Oh ,this is a bug, I fixed it.\n>>> Thanks for the patch.\n>>> I have verified the same, Now the issue is resolved with v12 patch.\n>>> \n>>> Kindly confirm the below scenario:\n>>> \n>>> postgres=# create global temporary table gtt1 (c1 int unique);\n>>> CREATE TABLE\n>>> \n>>> postgres=# create global temporary table gtt2 (c1 int references gtt1(c1) );\n>>> ERROR: referenced relation \"gtt1\" is not a global temp table\n>>> \n>>> postgres=# create table tab2 (c1 int references gtt1(c1) );\n>>> ERROR: referenced relation \"gtt1\" is not a global temp table\n>>> \n>>> Thanks, \n>>> Prabhat Sahu\n>> \n>> GTT supports foreign key constraints in global_temporary_table_v13-pg13.patch\n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> \n>> -- \n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n> The Enterprise PostgreSQL Company", "msg_date": "Wed, 26 Feb 2020 23:54:22 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月25日 下午9:56,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> Hi ,\n> \n> pg_upgrade scenario is failing if database is containing global temporary table \n> \n> =============================\n> centos@tushar-ldap-docker bin]$ ./psql postgres\n> psql (13devel)\n> Type \"help\" for help.\n> \n> postgres=# create global temporary table t(n int);\n> CREATE TABLE\n> postgres=# \\q\n> ===============================\n> \n> run pg_upgrade -\n> \n> [centos@tushar-ldap-docker 
bin]$ ./pg_upgrade -d /tmp/t1/ -D /tmp/t2 -b . -B . \n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions ok\n> Checking database user is the install user ok\n> Checking database connection settings ok\n> Checking for prepared transactions ok\n> Checking for reg* data types in user tables ok\n> --\n> --\n> If pg_upgrade fails after this point, you must re-initdb the\n> new cluster before continuing.\n> \n> Performing Upgrade\n> ------------------\n> Analyzing all rows in the new cluster ok\n> Freezing all rows in the new cluster ok\n> Deleting files from new pg_xact ok\n> --\n> --\n> Restoring database schemas in the new cluster\n> ok\n> Copying user relation files\n> /tmp/t1/base/13585/16384 \n> error while copying relation \"public.t\": could not open file \"/tmp/t1/base/13585/16384\": No such file or directory\n> Failure, exiting\nI fixed some bug in global_temporary_table_v14-pg13.patch\n\nPlease check global_temporary_table_v15-pg13.patch\n\nWenjing\n\n\n\n> \n> regards,\n> \n> On 2/25/20 7:06 PM, Prabhat Sahu wrote:\n>> Hi All,\n>> \n>> Please check the below findings on GTT.\n>> -- Scenario 1:\n>> Under \"information_schema\", We are not allowed to create \"temporary table\", whereas we can CREATE/DROP \"Global Temporary Table\", is it expected ?\n>> \n>> postgres=# create temporary table information_schema.temp1(c1 int);\n>> ERROR: cannot create temporary relation in non-temporary schema\n>> LINE 1: create temporary table information_schema.temp1(c1 int);\n>> ^\n>> \n>> postgres=# create global temporary table information_schema.temp1(c1 int);\n>> CREATE TABLE\n>> \n>> postgres=# drop table information_schema.temp1 ;\n>> DROP TABLE\n>> \n>> -- Scenario 2:\n>> Here I am getting the same error message in both the below cases.\n>> We may add a \"global\" keyword with GTT related error message.\n>> \n>> postgres=# create global temporary table gtt1 (c1 int unique);\n>> CREATE TABLE\n>> postgres=# create temporary table 
tmp1 (c1 int unique);\n>> CREATE TABLE\n>> \n>> postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );\n>> ERROR: constraints on temporary tables may reference only temporary tables\n>> \n>> postgres=# create global temporary table gtt2 (c1 int references tmp1(c1) );\n>> ERROR: constraints on temporary tables may reference only temporary tables\n>> \n>> Thanks,\n>> Prabhat Sahu\n>> \n>> On Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>> \n>> \n>>> 2020年2月24日 下午5:44,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>> \n>>> On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>>> Hi,\n>>> I have started testing the \"Global temporary table\" feature,\n>>> That's great, I see hope.\n>>> from \"gtt_v11-pg13.patch\". Below is my findings:\n>>> \n>>> -- session 1:\n>>> postgres=# create global temporary table gtt1(a int);\n>>> CREATE TABLE\n>>> \n>>> -- seeeion 2:\n>>> postgres=# truncate gtt1 ;\n>>> ERROR: could not open file \"base/13585/t3_16384\": No such file or directory\n>>> \n>>> is it expected?\n>>> \n>>> Oh ,this is a bug, I fixed it.\n>>> Thanks for the patch.\n>>> I have verified the same, Now the issue is resolved with v12 patch.\n>>> \n>>> Kindly confirm the below scenario:\n>>> \n>>> postgres=# create global temporary table gtt1 (c1 int unique);\n>>> CREATE TABLE\n>>> \n>>> postgres=# create global temporary table gtt2 (c1 int references gtt1(c1) );\n>>> ERROR: referenced relation \"gtt1\" is not a global temp table\n>>> \n>>> postgres=# create table tab2 (c1 int references gtt1(c1) );\n>>> ERROR: referenced relation \"gtt1\" is not a global temp table\n>>> \n>>> Thanks, \n>>> Prabhat Sahu\n>> \n>> GTT supports foreign key constraints in global_temporary_table_v13-pg13.patch\n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> \n>> -- \n>> With Regards,\n>> Prabhat Kumar Sahu\n>> 
EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n> The Enterprise PostgreSQL Company", "msg_date": "Thu, 27 Feb 2020 12:12:35 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月25日 下午9:36,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi All,\n> \n> Please check the below findings on GTT.\n> -- Scenario 1:\n> Under \"information_schema\", We are not allowed to create \"temporary table\", whereas we can CREATE/DROP \"Global Temporary Table\", is it expected ?\n> \n> postgres=# create temporary table information_schema.temp1(c1 int);\n> ERROR: cannot create temporary relation in non-temporary schema\n> LINE 1: create temporary table information_schema.temp1(c1 int);\n> ^\n> \n> postgres=# create global temporary table information_schema.temp1(c1 int);\n> CREATE TABLE\n> \n> postgres=# drop table information_schema.temp1 ;\n> DROP TABLE\n> \n> -- Scenario 2:\n> Here I am getting the same error message in both the below cases.\n> We may add a \"global\" keyword with GTT related error message.\n> \n> postgres=# create global temporary table gtt1 (c1 int unique);\n> CREATE TABLE\n> postgres=# create temporary table tmp1 (c1 int unique);\n> CREATE TABLE\n> \n> postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );\n> ERROR: constraints on temporary tables may reference only temporary tables\n> \n> postgres=# create global temporary table gtt2 (c1 int references tmp1(c1) );\n> ERROR: constraints on temporary tables may reference only temporary tables\nFixed in global_temporary_table_v15-pg13.patch\n\n\nWenjing\n\n\n> \n> Thanks,\n> Prabhat Sahu\n> \n> On Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> \n> 
\n>> 2020年2月24日 下午5:44,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>> \n>> On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>> Hi,\n>> I have started testing the \"Global temporary table\" feature,\n>> That's great, I see hope.\n>> from \"gtt_v11-pg13.patch\". Below is my findings:\n>> \n>> -- session 1:\n>> postgres=# create global temporary table gtt1(a int);\n>> CREATE TABLE\n>> \n>> -- seeeion 2:\n>> postgres=# truncate gtt1 ;\n>> ERROR: could not open file \"base/13585/t3_16384\": No such file or directory\n>> \n>> is it expected?\n>> \n>> Oh ,this is a bug, I fixed it.\n>> Thanks for the patch.\n>> I have verified the same, Now the issue is resolved with v12 patch.\n>> \n>> Kindly confirm the below scenario:\n>> \n>> postgres=# create global temporary table gtt1 (c1 int unique);\n>> CREATE TABLE\n>> \n>> postgres=# create global temporary table gtt2 (c1 int references gtt1(c1) );\n>> ERROR: referenced relation \"gtt1\" is not a global temp table\n>> \n>> postgres=# create table tab2 (c1 int references gtt1(c1) );\n>> ERROR: referenced relation \"gtt1\" is not a global temp table\n>> \n>> Thanks, \n>> Prabhat Sahu\n> \n> GTT supports foreign key constraints in global_temporary_table_v13-pg13.patch\n> \n> \n> Wenjing\n> \n> \n> \n> \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Thu, 27 Feb 2020 12:13:07 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 2/27/20 9:43 AM, 曾文旌(义从) wrote:\n>> _-- Scenario 2:_\n>> Here I am getting the same error message in both the below cases.\n>> We may add a \"global\" keyword with GTT related error message.\n>>\n>> postgres=# create global temporary table gtt1 (c1 int unique);\n>> CREATE TABLE\n>> postgres=# create temporary table tmp1 (c1 int unique);\n>> CREATE TABLE\n>>\n>> postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );\n>> ERROR:  constraints on temporary tables may reference only temporary \n>> tables\n>>\n>> postgres=# create global temporary table gtt2 (c1 int references \n>> tmp1(c1) );\n>> ERROR:  constraints on temporary tables may reference only temporary \n>> tables\n> Fixed in global_temporary_table_v15-pg13.patch\n>\n>\nThanks Wenjing.\n\nThis below scenario is not working  i.e even 'on_commit_delete_rows' is \ntrue then after 
commit -  rows are NOT removing\n\npostgres=#  create global  temp table foo1(n int) with \n(on_commit_delete_rows='true');\nCREATE TABLE\npostgres=#\npostgres=# begin;\nBEGIN\npostgres=*# insert into foo1 values (9);\nINSERT 0 1\npostgres=*# insert into foo1 values (9);\nINSERT 0 1\npostgres=*# select * from foo1;\n  n\n---\n  9\n  9\n(2 rows)\n\npostgres=*# commit;\nCOMMIT\npostgres=# select * from foo1;   -- after commit -there should be 0 row \nas on_commit_delete_rows is 'true'\n  n\n---\n  9\n  9\n(2 rows)\n\npostgres=# \\d+ foo1\n                                    Table \"public.foo1\"\n  Column |  Type   | Collation | Nullable | Default | Storage | Stats \ntarget | Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n  n      | integer |           |          |         | plain \n|              |\nAccess method: heap\nOptions: on_commit_delete_rows=true\n\npostgres=#\n\nbut if user - create table this way then it is working as expected\n\npostgres=#  create global  temp table foo2(n int) *on commit delete rows;*\nCREATE TABLE\npostgres=# begin; insert into foo2 values (9); insert into foo2 values \n(9); commit; select * from foo2;\nBEGIN\nINSERT 0 1\nINSERT 0 1\nCOMMIT\n  n\n---\n(0 rows)\n\npostgres=#\n\ni guess , problem is something with this syntax - create global temp \ntable foo1(n int) *with (on_commit_delete_rows='true'); *\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 2 Mar 2020 20:17:19 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月2日 下午10:47,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 2/27/20 9:43 AM, 曾文旌(义从) wrote:\n>>> -- Scenario 2:\n>>> Here I am getting the same error message in both the below cases.\n>>> We may add a \"global\" keyword with GTT related error message.\n>>> \n>>> postgres=# create global temporary table gtt1 (c1 int unique);\n>>> CREATE TABLE\n>>> postgres=# create temporary table tmp1 (c1 int unique);\n>>> CREATE TABLE\n>>> \n>>> postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );\n>>> ERROR: constraints on temporary tables may reference only temporary tables\n>>> \n>>> postgres=# create global temporary table gtt2 (c1 int references tmp1(c1) );\n>>> ERROR: constraints on temporary tables may reference only temporary tables\n>> Fixed in global_temporary_table_v15-pg13.patch\n>> \n>> \n> Thanks Wenjing. \n> \n> This below scenario is not working i.e even 'on_commit_delete_rows' is true then after commit - rows are NOT removing \n> \n> postgres=# create global temp table foo1(n int) with (on_commit_delete_rows='true');\n> CREATE TABLE\n> postgres=# \n> postgres=# begin;\n> BEGIN\n> postgres=*# insert into foo1 values (9);\n> INSERT 0 1\n> postgres=*# insert into foo1 values (9);\n> INSERT 0 1\n> postgres=*# select * from foo1;\n> n \n> ---\n> 9\n> 9\n> (2 rows)\n> \n> postgres=*# commit;\n> COMMIT\n> postgres=# select * from foo1; -- after commit -there should be 0 row as on_commit_delete_rows is 'true' \n> n \n> ---\n> 9\n> 9\n> (2 rows)\n> \n> postgres=# \\d+ foo1\n> Table \"public.foo1\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n> --------+---------+-----------+----------+---------+---------+--------------+-------------\n> n | integer | | | | plain | | \n> Access method: heap\n> Options: on_commit_delete_rows=true\n> \n> postgres=# 
\n> \n> but if user - create table this way then it is working as expected \n> \n> postgres=# create global temp table foo2(n int) on commit delete rows;\n> CREATE TABLE\n> postgres=# begin; insert into foo2 values (9); insert into foo2 values (9); commit; select * from foo2;\n> BEGIN\n> INSERT 0 1\n> INSERT 0 1\n> COMMIT\n> n \n> ---\n> (0 rows)\n> \n> postgres=# \n> \n> i guess , problem is something with this syntax - create global temp table foo1(n int) with (on_commit_delete_rows='true'); \n> \nThanks for review.\n\nI fixed in global_temporary_table_v16-pg13.patch.\n\n\n\nWenjing\n\n\n\n\n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n> The Enterprise PostgreSQL Company", "msg_date": "Tue, 03 Mar 2020 16:40:55 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Tue, Mar 3, 2020 at 2:11 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n>\n>\n> I fixed in global_temporary_table_v16-pg13.patch.\n>\n\nThank you Wenjing for the patch.\nNow we are getting corruption with GTT with below scenario.\n\npostgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 bigint, c2 bigserial) on\ncommit delete rows;\nCREATE TABLE\npostgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 bigint, c2 bigserial) on\ncommit preserve rows;\nCREATE TABLE\npostgres=# \\q\n\n[edb@localhost bin]$ echo \"1\n> 2\n> 3\n> \"> t.dat\n\n[edb@localhost bin]$ ./psql postgres\npsql (13devel)\nType \"help\" for help.\n\npostgres=# \\copy gtt1(c1) from 't.dat' with csv;\nERROR: could not read block 0 in file \"base/13585/t3_16384\": read only 0\nof 8192 bytes\nCONTEXT: COPY gtt1, line 1: \"1\"\n\npostgres=# \\copy gtt2(c1) from 't.dat' with csv;\nERROR: could not read block 0 in file \"base/13585/t3_16390\": read only 0\nof 8192 bytes\nCONTEXT: COPY gtt2, line 1: \"1\"\n\nNOTE: We end with such corruption for 
\"bigserial/smallserial/serial\"\ndatatype columns.\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Mar 3, 2020 at 2:11 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:I fixed in global_temporary_table_v16-pg13.patch.  Thank you Wenjing for the patch.Now we are getting corruption with GTT with below scenario.postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 bigint, c2 bigserial) on commit delete rows;CREATE TABLEpostgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 bigint, c2 bigserial) on commit preserve rows;CREATE TABLEpostgres=# \\q[edb@localhost bin]$ echo \"1> 2> 3> \"> t.dat[edb@localhost bin]$ ./psql  postgres psql (13devel)Type \"help\" for help.postgres=# \\copy gtt1(c1) from 't.dat' with  csv;ERROR:  could not read block 0 in file \"base/13585/t3_16384\": read only 0 of 8192 bytesCONTEXT:  COPY gtt1, line 1: \"1\"postgres=# \\copy gtt2(c1) from 't.dat' with  csv;ERROR:  could not read block 0 in file \"base/13585/t3_16390\": read only 0 of 8192 bytesCONTEXT:  COPY gtt2, line 1: \"1\"NOTE: We end with such corruption for \"bigserial/smallserial/serial\" datatype columns.-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 4 Mar 2020 13:19:13 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 3/3/20 2:10 PM, 曾文旌(义从) wrote:\n> I fixed in global_temporary_table_v16-pg13.patch.\nThanks Wenjing. 
The reported  issue is fixed now  but  there is an \nanother similar  scenario -\nif we enable 'on_commit_delete_rows' to true using alter command then \ngetting same issue i.e rows are not removing after commit.\n\nx=# create global  temp table foo123(n int) with \n(on_commit_delete_rows='false');\nCREATE TABLE\nx=#\nx=# alter table foo123 set ( on_commit_delete_rows='true');\nALTER TABLE\nx=#\nx=# insert into foo123 values (1);\nINSERT 0 1\nx=# select * from foo123;   <- row should get removed.\n  n\n---\n  1\n(1 row)\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 4 Mar 2020 21:09:57 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 3/3/20 2:10 PM, 曾文旌(义从) wrote:\n> I fixed in global_temporary_table_v16-pg13.patch.\n\nPlease refer this scenario -\n\n--Connect to psql -\n\npostgres=# alter system set max_active_global_temporary_table =1;\nALTER SYSTEM\n\n--restart the server (./pg_ctl -D data restart)\n\n--create global temp table\n\npostgres=# create global temp  table ccc1  (c int);\nCREATE TABLE\n\n--Try to Create another global temp table\n\n*postgres=# create global temp  table ccc2  (c int);**\n**WARNING:  relfilenode 13589/1663/19063 not exist in gtt shared hash \nwhen forget**\n**ERROR:  out of shared memory**\n**HINT:  You might need to increase max_active_gtt.**\n*\npostgres=# show max_active_gtt;\nERROR:  unrecognized configuration parameter \"max_active_gtt\"\npostgres=#\npostgres=# show max_active_global_temporary_table ;\n  max_active_global_temporary_table\n-----------------------------------\n  1\n(1 row)\n\npostgres=#\n\nI cannot find \"max_active_gtt\"  GUC . 
I think you are referring to  \n\"max_active_global_temporary_table\" here ?\n\nalso , would be great  if we can make this error message  user friendly \nlike  - \"max connection reached\"  rather than memory error\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 5 Mar 2020 19:49:37 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Thu, Mar 5, 2020 at 9:19 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> WARNING: relfilenode 13589/1663/19063 not exist in gtt shared hash when forget\n> ERROR: out of shared memory\n> HINT: You might need to increase max_active_gtt.\n>\n> also , would be great if we can make this error message user friendly like - \"max connection reached\" rather than memory error\n\nThat would be nice, but the bigger problem is that the WARNING there\nlooks totally unacceptable. It's looks like it's complaining of some\ninternal issue (i.e. a bug or corruption) and the grammar is poor,\ntoo.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 5 Mar 2020 09:38:26 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月4日 下午3:49,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> On Tue, Mar 3, 2020 at 2:11 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> \n> \n> \n> I fixed in global_temporary_table_v16-pg13.patch.\n> \n> Thank you Wenjing for the patch.\n> Now we are getting corruption with GTT with below scenario.\n> \n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 bigint, c2 bigserial) on commit delete rows;\n> CREATE TABLE\n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 bigint, c2 bigserial) on commit preserve rows;\n> CREATE TABLE\n> postgres=# \\q\n> \n> [edb@localhost bin]$ echo \"1\n> > 2\n> > 
3\n> > \"> t.dat\n> \n> [edb@localhost bin]$ ./psql postgres \n> psql (13devel)\n> Type \"help\" for help.\n> \n> postgres=# \\copy gtt1(c1) from 't.dat' with csv;\n> ERROR: could not read block 0 in file \"base/13585/t3_16384\": read only 0 of 8192 bytes\n> CONTEXT: COPY gtt1, line 1: \"1\"\n> \n> postgres=# \\copy gtt2(c1) from 't.dat' with csv;\n> ERROR: could not read block 0 in file \"base/13585/t3_16390\": read only 0 of 8192 bytes\n> CONTEXT: COPY gtt2, line 1: \"1\"\n> \n> NOTE: We end with such corruption for \"bigserial/smallserial/serial\" datatype columns.\nThanks for review.\n\nI fixed this issue in global_temporary_table_v17-pg13.patch\n\n\nWenjing\n\n\n\n\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Fri, 06 Mar 2020 15:00:21 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月4日 下午11:39,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 3/3/20 2:10 PM, 曾文旌(义从) wrote:\n>> I fixed in global_temporary_table_v16-pg13.patch.\n> Thanks Wenjing. 
The reported issue is fixed now but there is an another similar scenario -\n> if we enable 'on_commit_delete_rows' to true using alter command then getting same issue i.e rows are not removing after commit.\n> \n> x=# create global temp table foo123(n int) with (on_commit_delete_rows='false');\n> CREATE TABLE\n> x=#\n> x=# alter table foo123 set ( on_commit_delete_rows='true');\n> ALTER TABLE\nI blocked modify this parameter.\n\nFixed in global_temporary_table_v17-pg13.patch\n\n\nWenjing\n\n\n\n\n\n> x=#\n> x=# insert into foo123 values (1);\n> INSERT 0 1\n> x=# select * from foo123; <- row should get removed.\n> n\n> ---\n> 1\n> (1 row)\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company", "msg_date": "Fri, 06 Mar 2020 15:05:30 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月5日 下午10:19,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 3/3/20 2:10 PM, 曾文旌(义从) wrote:\n>> I fixed in global_temporary_table_v16-pg13.patch.\n> Please refer this scenario -\n> \n> --Connect to psql -\n> \n> postgres=# alter system set max_active_global_temporary_table =1;\n> ALTER SYSTEM\n> \n> --restart the server (./pg_ctl -D data restart) \n> \n> --create global temp table \n> \n> postgres=# create global temp table ccc1 (c int);\n> CREATE TABLE\n> \n> --Try to Create another global temp table\n> \n> postgres=# create global temp table ccc2 (c int);\n> WARNING: relfilenode 13589/1663/19063 not exist in gtt shared hash when forget\n> ERROR: out of shared memory\n> HINT: You might need to increase max_active_gtt.\n> \n> postgres=# show max_active_gtt;\n> ERROR: unrecognized configuration parameter \"max_active_gtt\"\n> postgres=# \n> postgres=# show max_active_global_temporary_table ;\n> max_active_global_temporary_table \n> -----------------------------------\n> 1\n> (1 
row)\n> \n> postgres=# \n> \n> I cannot find \"max_active_gtt\" GUC . I think you are referring to \"max_active_global_temporary_table\" here ? \n> \nYou're right.\n\nFixed in global_temporary_table_v17-pg13.patch\n\n\nWenjing\n\n\n> also , would be great if we can make this error message user friendly like - \"max connection reached\" rather than memory error\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n> The Enterprise PostgreSQL Company", "msg_date": "Fri, 06 Mar 2020 15:08:30 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n> 2020年3月5日 下午10:38,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Thu, Mar 5, 2020 at 9:19 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>> WARNING: relfilenode 13589/1663/19063 not exist in gtt shared hash when forget\n>> ERROR: out of shared memory\n>> HINT: You might need to increase max_active_gtt.\n>> \n>> also , would be great if we can make this error message user friendly like - \"max connection reached\" rather than memory error\n> \n> That would be nice, but the bigger problem is that the WARNING there\n> looks totally unacceptable. It's looks like it's complaining of some\n> internal issue (i.e. 
a bug or corruption) and the grammar is poor,\n> too.\n\nYes, WARNING should not exist.\nThis is a bug in the rollback process and I have fixed it in global_temporary_table_v17-pg13.patch\n\n\nWenjing\n\n\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Fri, 06 Mar 2020 15:10:52 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi All,\n\nKindly check the below scenario.\n\n*Case 1: *\npostgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int) on commit delete rows;\nCREATE TABLE\npostgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 int) on commit preserve\nrows;\nCREATE TABLE\npostgres=# vacuum gtt1;\nVACUUM\npostgres=# vacuum gtt2;\nVACUUM\npostgres=# vacuum;\nVACUUM\npostgres=# \\q\n\n*Case 2: Exit and reconnect to psql prompt.*\n[edb@localhost bin]$ ./psql postgres\npsql (13devel)\nType \"help\" for help.\n\npostgres=# vacuum gtt1;\nWARNING: skipping vacuum empty global temp table \"gtt1\"\nVACUUM\npostgres=# vacuum gtt2;\nWARNING: skipping vacuum empty global temp table \"gtt2\"\nVACUUM\npostgres=# vacuum;\nWARNING: skipping vacuum empty global temp table \"gtt1\"\nWARNING: skipping vacuum empty global temp table \"gtt2\"\nVACUUM\n\nAlthough in \"Case1\" the gtt1/gtt2 are empty, we are not getting \"WARNING:\n skipping vacuum empty global temp table\" for VACUUM in \"Case 1\".\nwhereas we are getting the \"WARNING\" for VACUUM in \"Case2\".\n\n\nOn Fri, Mar 6, 2020 at 12:41 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n>\n> > 2020年3月5日 下午10:38,Robert Haas <robertmhaas@gmail.com> 写道:\n> >\n> > On Thu, Mar 5, 2020 at 9:19 AM tushar <tushar.ahuja@enterprisedb.com>\n> wrote:\n> >> WARNING: relfilenode 13589/1663/19063 not exist in gtt shared hash\n> when forget\n> >> ERROR: out of shared memory\n> >> HINT: You might need to increase 
max_active_gtt.\n> >>\n> >> also , would be great if we can make this error message user friendly\n> like - \"max connection reached\" rather than memory error\n> >\n> > That would be nice, but the bigger problem is that the WARNING there\n> > looks totally unacceptable. It's looks like it's complaining of some\n> > internal issue (i.e. a bug or corruption) and the grammar is poor,\n> > too.\n>\n> Yes, WARNING should not exist.\n> This is a bug in the rollback process and I have fixed it in\n> global_temporary_table_v17-pg13.patch\n>\n>\n> Wenjing\n>\n>\n> >\n> > --\n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com\n> > The Enterprise PostgreSQL Company\n>\n>\n>\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com
", "msg_date": "Mon, 9 Mar 2020 17:54:01 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 3/6/20 12:35 PM, 曾文旌(义从) wrote:\n> Fixed in global_temporary_table_v17-pg13.patch\n\nThanks Wenjing.\n\nPlease refer this scenario , where i am able to set \n'on_commit_delete_rows=true'  on regular table using 'alter' Syntax  \nwhich is not allowed using 'Create' Syntax\n\n--Expected -\n\npostgres=# CREATE TABLE foo () WITH (on_commit_delete_rows='true');\nERROR:  The parameter on_commit_delete_rows is exclusive to the global \ntemp table, which cannot be specified by a regular table\n\n--But user can do this with 'alter' command -\npostgres=# create table foo();\nCREATE TABLE\npostgres=# alter table foo set (on_commit_delete_rows='true');\nALTER TABLE\npostgres=#\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 9 Mar 2020 19:04:39 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: 
[Proposal] Global temporary tables" }, { "msg_contents": "On 3/6/20 12:35 PM, 曾文旌(义从) wrote:\n> Fixed in global_temporary_table_v17-pg13.patch\n\nI observed that , we do support 'global temp' keyword with views\n\npostgres=# create or replace  global temp view v1 as select 5;\nCREATE VIEW\n\nbut if we take the dump( using pg_dumpall) then it only display 'create \nview'\n\nlook like we are skipping it ?\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Mon, 9 Mar 2020 20:07:45 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月9日 下午8:24,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi All,\n> \n> Kindly check the below scenario.\n> \n> Case 1: \n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int) on commit delete rows;\n> CREATE TABLE\n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 int) on commit preserve rows;\n> CREATE TABLE\n> postgres=# vacuum gtt1;\n> VACUUM\n> postgres=# vacuum gtt2;\n> VACUUM\n> postgres=# vacuum;\n> VACUUM\n> postgres=# \\q\n> \n> Case 2: Exit and reconnect to psql prompt.\n> [edb@localhost bin]$ ./psql postgres \n> psql (13devel)\n> Type \"help\" for help.\n> \n> postgres=# vacuum gtt1;\n> WARNING: skipping vacuum empty global temp table \"gtt1\"\n> VACUUM\n> postgres=# vacuum gtt2;\n> WARNING: skipping vacuum empty global temp table \"gtt2\"\n> VACUUM\n> postgres=# vacuum;\n> WARNING: skipping vacuum empty global temp table \"gtt1\"\n> WARNING: skipping vacuum empty global temp table \"gtt2\"\n> VACUUM\n> \n> Although in \"Case1\" the gtt1/gtt2 are empty, we are not getting \"WARNING: skipping vacuum empty global temp table\" for VACUUM in \"Case 1\".\n> whereas we are getting the \"WARNING\" for VACUUM in \"Case2\".\nI fixed the warning message, It's more accurate now.\n\nWenjing\n\n\n\n> \n> \n> On Fri, Mar 6, 2020 at 12:41 PM 
曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> \n> \n> > 2020年3月5日 下午10:38,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n> > \n> > On Thu, Mar 5, 2020 at 9:19 AM tushar <tushar.ahuja@enterprisedb.com <mailto:tushar.ahuja@enterprisedb.com>> wrote:\n> >> WARNING: relfilenode 13589/1663/19063 not exist in gtt shared hash when forget\n> >> ERROR: out of shared memory\n> >> HINT: You might need to increase max_active_gtt.\n> >> \n> >> also , would be great if we can make this error message user friendly like - \"max connection reached\" rather than memory error\n> > \n> > That would be nice, but the bigger problem is that the WARNING there\n> > looks totally unacceptable. It's looks like it's complaining of some\n> > internal issue (i.e. a bug or corruption) and the grammar is poor,\n> > too.\n> \n> Yes, WARNING should not exist.\n> This is a bug in the rollback process and I have fixed it in global_temporary_table_v17-pg13.patch\n> \n> \n> Wenjing\n> \n> \n> > \n> > -- \n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> > The Enterprise PostgreSQL Company\n> \n> \n> \n> \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Tue, 10 Mar 2020 00:28:12 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月9日 下午9:34,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 3/6/20 12:35 PM, 曾文旌(义从) wrote:\n>> Fixed in global_temporary_table_v17-pg13.patch\n> \n> Thanks Wenjing.\n> \n> Please refer this scenario , where i am able to set 'on_commit_delete_rows=true' on regular table using 'alter' Syntax which is not allowed using 'Create' Syntax\n> \n> --Expected -\n> \n> postgres=# CREATE TABLE foo () WITH (on_commit_delete_rows='true');\n> 
ERROR: The parameter on_commit_delete_rows is exclusive to the global temp table, which cannot be specified by a regular table\n> \n> --But user can do this with 'alter' command -\n> postgres=# create table foo();\n> CREATE TABLE\n> postgres=# alter table foo set (on_commit_delete_rows='true');\n> ALTER TABLE\nThis is a bug ,I fixed.\n\n\nWenjing\n\n\n\n\n\n> postgres=#\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company", "msg_date": "Tue, 10 Mar 2020 00:29:58 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n> 2020年3月9日 下午10:37,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 3/6/20 12:35 PM, 曾文旌(义从) wrote:\n>> Fixed in global_temporary_table_v17-pg13.patch\n> \n> I observed that , we do support 'global temp' keyword with views\n> \n> postgres=# create or replace global temp view v1 as select 5;\n> CREATE VIEW\nI think we should not support global temp view.\nFixed in global_temporary_table_v18-pg13.patch.\n\n\n\nWenjing\n\n\n> \n> but if we take the dump( using pg_dumpall) then it only display 'create view'\n> \n> look like we are skipping it ?\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Tue, 10 Mar 2020 00:31:58 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Mar 9, 2020 at 10:02 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n>\n> Fixed in global_temporary_table_v18-pg13.patch.\n>\nHi Wenjing,\nThanks for the patch. 
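The CREATE-vs-ALTER inconsistency fixed above comes down to the same reloption check having to run in both code paths. As a purely illustrative sketch (the names `RELPERSISTENCE_GLOBAL_TEMP`, `validate_reloptions`, `create_table`, and `alter_table_set` are invented here, not the patch's actual symbols), the invariant being enforced is:

```python
# Hypothetical sketch (not the patch's code): the on_commit_delete_rows
# reloption is only meaningful for a global temporary table, so the same
# validation must run for CREATE TABLE ... WITH (...) and for
# ALTER TABLE ... SET (...).

RELPERSISTENCE_GLOBAL_TEMP = "g"   # assumed marker for a GTT (illustrative)

def validate_reloptions(relpersistence, options):
    """Reject GTT-only storage parameters on non-GTT relations."""
    if "on_commit_delete_rows" in options \
            and relpersistence != RELPERSISTENCE_GLOBAL_TEMP:
        raise ValueError("on_commit_delete_rows is exclusive to "
                         "global temporary tables")

def create_table(relpersistence, options):
    validate_reloptions(relpersistence, options)   # CREATE path

def alter_table_set(relpersistence, options):
    validate_reloptions(relpersistence, options)   # ALTER path too

# A regular table ('p' = permanent) is rejected on both paths:
for op in (create_table, alter_table_set):
    try:
        op("p", {"on_commit_delete_rows": "true"})
    except ValueError as e:
        print("rejected:", e)
```

Running the check from one shared helper is what closes the loophole tushar found: the earlier behavior validated only the CREATE path, so the ALTER path silently accepted the option.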
I have verified the previous issues with\n\"gtt_v18_pg13.patch\" and those are resolved.\nPlease find below case:\n\npostgres=# create sequence seq;\nCREATE SEQUENCE\n\npostgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int PRIMARY KEY) ON COMMIT\nDELETE ROWS;\nCREATE TABLE\n\npostgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 int PRIMARY KEY) ON COMMIT\nPRESERVE ROWS;\nCREATE TABLE\n\npostgres=# alter table gtt1 add c2 int default nextval('seq');\nERROR: cannot reindex global temporary tables\n\npostgres=# alter table gtt2 add c2 int default nextval('seq');\nERROR: cannot reindex global temporary tables\n\n*Note*: We are getting this error if we have a key column(PK/UNIQUE) in a\nGTT, and trying to add a column with a default sequence into it.\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 11 Mar 2020 13:22:03 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 
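The REINDEX restriction reported above, and the later observation that TRUNCATE does not change a GTT's `pg_class.relfilenode`, stem from the same design point: a GTT's catalog entry is shared by every session, so operations that normally swap in a fresh relfilenode cannot simply update the shared catalog. A toy model of that constraint (illustrative only; the identifiers and the per-session bookkeeping are invented, not the patch's implementation):

```python
# Toy model: the shared catalog holds one relfilenode per relation, but
# a GTT's live storage is tracked per session, so a new filenode cannot
# be swapped into the shared entry without disturbing every other
# session that still uses its own private storage.

catalog = {"gtt1": 16384, "tabl4": 16396}   # shared pg_class-like entries
session_files = {}                          # (session, rel) -> private rows

def truncate(session, rel, is_gtt):
    """Return the relfilenode in effect after TRUNCATE."""
    if is_gtt:
        # Only this session's private storage is reset; the shared
        # catalog entry stays untouched, so the relfilenode is stable.
        session_files[(session, rel)] = []
        return catalog[rel]
    # A regular table gets a brand-new relfilenode (values here mimic
    # the pg_class output quoted in the thread).
    catalog[rel] += 3
    return catalog[rel]

print(truncate("s1", "gtt1", is_gtt=True))    # 16384 -- unchanged for a GTT
print(truncate("s1", "tabl4", is_gtt=False))  # 16399 -- new filenode
```

This is why "reindex need change relfilenode" is the sticking point: until per-session filenode mapping exists, any catalog-level filenode swap would be visible to all sessions at once.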
2020年3月11日 下午3:52,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> On Mon, Mar 9, 2020 at 10:02 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> \n> \n> Fixed in global_temporary_table_v18-pg13.patch.\n> Hi Wenjing,\n> Thanks for the patch. I have verified the previous issues with \"gtt_v18_pg13.patch\" and those are resolved.\n> Please find below case:\n> \n> postgres=# create sequence seq;\n> CREATE SEQUENCE\n> \n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int PRIMARY KEY) ON COMMIT DELETE ROWS;\n> CREATE TABLE\n> \n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 int PRIMARY KEY) ON COMMIT PRESERVE ROWS;\n> CREATE TABLE\n> \n> postgres=# alter table gtt1 add c2 int default nextval('seq');\n> ERROR: cannot reindex global temporary tables\n> \n> postgres=# alter table gtt2 add c2 int default nextval('seq');\n> ERROR: cannot reindex global temporary tables\n> \n> Note: We are getting this error if we have a key column(PK/UNIQUE) in a GTT, and trying to add a column with a default sequence into it.\nThis is because alter table add column with default value need reindex pk,\nreindex need change relfilenode, but GTT is not currently supported.\nI make the error message more clearer in global_temporary_table_v19-pg13.patch\n\n\nWenjing\n\n\n\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Wed, 11 Mar 2020 21:06:57 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Mar 11, 2020 at 9:07 AM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n> reindex need change relfilenode, but GTT is not currently supported.\n\nIn my view that'd have to be fixed somehow.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 11 Mar 
2020 16:12:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n> 2020年3月12日 上午4:12,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Wed, Mar 11, 2020 at 9:07 AM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n>> reindex need change relfilenode, but GTT is not currently supported.\n> \n> In my view that'd have to be fixed somehow.\nOk , I am working on it.\n\n\n\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Thu, 12 Mar 2020 18:06:19 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\n\nPlease check the below findings:\nAfter running \"TRUNCATE\" command, the \"relfilenode\" field is not changing\nfor GTT\nwhereas, for Simple table/Temp table \"relfilenode\" field is changing after\nTRUNCATE.\n\n*Case 1: Getting same \"relfilenode\" for GTT after and before \"TRUNCATE\"*\npostgres=# create global temporary table gtt1(c1 int) on commit delete rows;\nCREATE TABLE\npostgres=# select relfilenode from pg_class where relname ='gtt1';\n relfilenode\n-------------\n 16384\n(1 row)\npostgres=# truncate gtt1;\nTRUNCATE TABLE\npostgres=# select relfilenode from pg_class where relname ='gtt1';\n relfilenode\n-------------\n 16384\n(1 row)\n\npostgres=# create global temporary table gtt2(c1 int) on commit preserve\nrows;\nCREATE TABLE\npostgres=# select relfilenode from pg_class where relname ='gtt2';\n relfilenode\n-------------\n 16387\n(1 row)\npostgres=# truncate gtt2;\nTRUNCATE TABLE\npostgres=# select relfilenode from pg_class where relname ='gtt2';\n relfilenode\n-------------\n 16387\n(1 row)\n\n\n*Case 2: \"relfilenode\" changes after \"TRUNCATE\" for Simple table/Temp table*\npostgres=# create temporary table temp3(c1 int) 
on commit preserve rows;\nCREATE TABLE\npostgres=# select relfilenode from pg_class where relname ='temp3';\n relfilenode\n-------------\n 16392\n(1 row)\npostgres=# truncate temp3;\nTRUNCATE TABLE\npostgres=# select relfilenode from pg_class where relname ='temp3';\n relfilenode\n-------------\n 16395\n(1 row)\n\n\npostgres=# create table tabl4(c1 int);\nCREATE TABLE\npostgres=# select relfilenode from pg_class where relname ='tabl4';\n relfilenode\n-------------\n 16396\n(1 row)\npostgres=# truncate tabl4;\nTRUNCATE TABLE\npostgres=# select relfilenode from pg_class where relname ='tabl4';\n relfilenode\n-------------\n 16399\n(1 row)\n\n\nOn Thu, Mar 12, 2020 at 3:36 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n>\n> > 2020年3月12日 上午4:12,Robert Haas <robertmhaas@gmail.com> 写道:\n> >\n> > On Wed, Mar 11, 2020 at 9:07 AM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> wrote:\n> >> reindex need change relfilenode, but GTT is not currently supported.\n> >\n> > In my view that'd have to be fixed somehow.\n> Ok , I am working on it.\n>\n>\n>\n> >\n> > --\n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com\n> > The Enterprise PostgreSQL Company\n>\n>\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com
", "msg_date": "Thu, 12 Mar 2020 17:52:20 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 3/9/20 10:01 PM, 曾文旌(义从) wrote:\n> Fixed in global_temporary_table_v18-pg13.patch\n\nThanks Wenjing.\n\nI am getting this error  \"ERROR:  could not open file \n\"base/13589/t3_16440\": No such file or 
directory\" if \nmax_active_global_temporary_table set to 0\n\nPlease refer this scenario -\n\npostgres=# create global temp table  tab1 (n int ) with ( \non_commit_delete_rows='true');\nCREATE TABLE\npostgres=# insert into tab1 values (1);\nINSERT 0 1\npostgres=# select * from tab1;\n  n\n---\n(0 rows)\n\npostgres=# alter system set max_active_global_temporary_table=0;\nALTER SYSTEM\npostgres=# \\q\n[tushar@localhost bin]$ ./pg_ctl -D data/ restart -c -l logs123\n\nwaiting for server to start.... done\nserver started\n\n[tushar@localhost bin]$ ./psql postgres\npsql (13devel)\nType \"help\" for help.\n\npostgres=# insert into tab1 values (1);\nERROR:  could not open file \"base/13589/t3_16440\": No such file or directory\npostgres=#\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Fri, 13 Mar 2020 18:10:42 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\n\nPlease check the below combination of GTT with Primary and Foreign key\nrelations, with the ERROR message.\n\n\n*Case1:*postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 serial PRIMARY\nKEY, c2 VARCHAR (50) UNIQUE NOT NULL) ON COMMIT *DELETE* ROWS;\nCREATE TABLE\n\npostgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 integer NOT NULL, c2\ninteger NOT NULL,\nPRIMARY KEY (c1, c2),\nFOREIGN KEY (c1) REFERENCES gtt1 (c1)) ON COMMIT *PRESERVE* ROWS;\nERROR: unsupported ON COMMIT and foreign key combination\nDETAIL: Table \"gtt2\" references \"gtt1\", but *they do not have the same ON\nCOMMIT setting*.\n\n*Case2:*\npostgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 serial PRIMARY KEY, c2\nVARCHAR (50) UNIQUE NOT NULL) ON COMMIT *PRESERVE* ROWS;\nCREATE TABLE\n\npostgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 integer NOT NULL, c2\ninteger NOT NULL,\nPRIMARY KEY (c1, c2),\nFOREIGN KEY (c1) REFERENCES gtt1 (c1)) ON COMMIT 
*DELETE* ROWS;\nCREATE TABLE\n\nIn \"case2\" although both the primary table and foreign key GTT *do not have\nthe same ON COMMIT setting*, still we are able to create the PK-FK\nrelations with GTT.\n\nSo I hope the detail message(DETAIL: Table \"gtt2\" references \"gtt1\", but\nthey do not have the same ON COMMIT setting.) in \"Case1\" should be more\nclear(something like \"wrong combination of ON COMMIT setting\").\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com
", "msg_date": "Fri, 13 Mar 2020 19:46:32 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\nPlease check the below scenario, where the Foreign table on GTT not showing\nrecords.\n\npostgres=# create extension postgres_fdw;\nCREATE EXTENSION\npostgres=# do $d$\n    begin\n        execute $$create server fdw foreign data wrapper\npostgres_fdw\noptions (host 'localhost',dbname 'postgres',port\n'$$||current_setting('port')||$$')$$;\n    end;\n$d$;\nDO\npostgres=# create user mapping for public server fdw;\nCREATE USER MAPPING\n\npostgres=# create table lt1 (c1 integer, c2 varchar(50));\nCREATE TABLE\npostgres=# insert into lt1 values (1,'c21');\nINSERT 0 1\npostgres=# create foreign table ft1 (c1 integer, c2 varchar(50)) server fdw\noptions (table_name 'lt1');\nCREATE FOREIGN TABLE\npostgres=# select * from ft1;\n c1 | c2\n----+-----\n 1 | c21\n(1 row)\n\npostgres=# create global temporary table gtt1 (c1 integer, c2 varchar(50));\nCREATE TABLE\npostgres=# insert into gtt1 values (1,'gtt_c21');\nINSERT 0 1\npostgres=# create foreign table f_gtt1 (c1 integer, c2 varchar(50)) server\nfdw options (table_name 'gtt1');\nCREATE FOREIGN TABLE\n\npostgres=# select * from gtt1;\n c1 | c2\n----+---------\n 1 | gtt_c21\n(1 row)\n\npostgres=# select * from f_gtt1;\n c1 | c2\n----+----\n(0 rows)\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com
", "msg_date": "Mon, 16 Mar 2020 11:53:32 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 16.03.2020 9:23, Prabhat Sahu wrote:\n> Hi Wenjing,\n> Please check the below scenario, where the Foreign table on GTT not \n> showing records.\n>\n> postgres=# create extension postgres_fdw;\n> CREATE EXTENSION\n> postgres=# do $d$\n>     begin\n>         execute $$create server fdw foreign data wrapper postgres_fdw \n> options (host 'localhost',dbname 'postgres',port \n> '$$||current_setting('port')||$$')$$;\n>     end;\n> $d$;\n> DO\n> postgres=# create user mapping for public server fdw;\n> CREATE USER MAPPING\n>\n> postgres=# create table lt1 (c1 integer, c2 varchar(50));\n> CREATE TABLE\n> postgres=# insert into lt1 values (1,'c21');\n> INSERT 0 1\n> postgres=# create foreign table ft1 (c1 integer, c2 varchar(50)) \n> server fdw options (table_name 'lt1');\n> CREATE FOREIGN TABLE\n> postgres=# select * from ft1;\n>  c1 | c2\n> ----+-----\n>   1 | 
c21\n> (1 row)\n>\n> postgres=# create global temporary table gtt1 (c1 integer, c2 \n> varchar(50));\n> CREATE TABLE\n> postgres=# insert into gtt1 values (1,'gtt_c21');\n> INSERT 0 1\n> postgres=# create foreign table f_gtt1 (c1 integer, c2 varchar(50)) \n> server fdw options (table_name 'gtt1');\n> CREATE FOREIGN TABLE\n>\n> postgres=# select * from gtt1;\n>  c1 |   c2\n> ----+---------\n>   1 | gtt_c21\n> (1 row)\n>\n> postgres=# select * from f_gtt1;\n>  c1 | c2\n> ----+----\n> (0 rows)\n>\n> -- \n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>\n\nIt seems to be expected behavior: GTT data is private to the session and \npostgres_fdw establish its own session where content of the table is empty.\nBut if you insert some data in f_gtt1, then you will be able to select \nthis data from it because of connection cache in postgres_fdw.\n\n-- \n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company
", "msg_date": "Mon, 16 Mar 2020 10:59:40 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 
postgres=# select * from ft1;\n> c1 | c2 \n> ----+-----\n> 1 | c21\n> (1 row)\n> \n> postgres=# create global temporary table gtt1 (c1 integer, c2 varchar(50));\n> CREATE TABLE\n> postgres=# insert into gtt1 values (1,'gtt_c21');\n> INSERT 0 1\n> postgres=# create foreign table f_gtt1 (c1 integer, c2 varchar(50)) server fdw options (table_name 'gtt1');\n> CREATE FOREIGN TABLE\n> \n> postgres=# select * from gtt1;\n> c1 | c2 \n> ----+---------\n> 1 | gtt_c21\n> (1 row)\n> \n> postgres=# select * from f_gtt1;\n> c1 | c2 \n> ----+----\n> (0 rows)\n> \n> -- \n\nI understand that postgre_fdw works similar to dblink.\npostgre_fdw access to the table requires a new connection.\nThe data in the GTT table is empty in the newly established connection.\nBecause GTT shares structure but not data between connections.\n\nTry local temp table:\ncreate temporary table ltt1 (c1 integer, c2 varchar(50));\n\ninsert into ltt1 values (1,'gtt_c21');\n\ncreate foreign table f_ltt1 (c1 integer, c2 varchar(50)) server fdw options (table_name 'ltt1');\n\nselect * from ltt1;\n c1 | c2 \n----+---------\n 1 | gtt_c21\n(1 row)\n\nselect * from l_gtt1;\nERROR: relation \"l_gtt1\" does not exist\nLINE 1: select * from l_gtt1;\n\n\nWenjing\n\n\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Mon, 16 Mar 2020 16:05:32 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\n\nI have created a global table on X session but i am not able to drop \nfrom Y session ?\n\nX session - ( connect to psql terminal )\npostgres=# create global temp table foo(n int);\nCREATE TABLE\npostgres=# select * from foo;\n n\n---\n(0 rows)\n\n\nY session - ( connect to psql terminal )\npostgres=# drop table foo;\nERROR: can not drop relation foo when other backend attached this \nglobal temp table\n\nTable has been created so i think - user should be able to drop from \nanother 
session as well without exit from X session.\n\nregards,\n\nOn 3/16/20 1:35 PM, 曾文旌(义从) wrote:\n>\n>\n>> 2020年3月16日 下午2:23,Prabhat Sahu <prabhat.sahu@enterprisedb.com \n>> <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>\n>> Hi Wenjing,\n>> Please check the below scenario, where the Foreign table on GTT not \n>> showing records.\n>>\n>> postgres=# create extension postgres_fdw;\n>> CREATE EXTENSION\n>> postgres=# do $d$\n>>     begin\n>>         execute $$create server fdw foreign data wrapper postgres_fdw \n>> options (host 'localhost',dbname 'postgres',port \n>> '$$||current_setting('port')||$$')$$;\n>>     end;\n>> $d$;\n>> DO\n>> postgres=# create user mapping for public server fdw;\n>> CREATE USER MAPPING\n>>\n>> postgres=# create table lt1 (c1 integer, c2 varchar(50));\n>> CREATE TABLE\n>> postgres=# insert into lt1 values (1,'c21');\n>> INSERT 0 1\n>> postgres=# create foreign table ft1 (c1 integer, c2 varchar(50)) \n>> server fdw options (table_name 'lt1');\n>> CREATE FOREIGN TABLE\n>> postgres=# select * from ft1;\n>>  c1 | c2\n>> ----+-----\n>>   1 | c21\n>> (1 row)\n>>\n>> postgres=# create global temporary table gtt1 (c1 integer, c2 \n>> varchar(50));\n>> CREATE TABLE\n>> postgres=# insert into gtt1 values (1,'gtt_c21');\n>> INSERT 0 1\n>> postgres=# create foreign table f_gtt1 (c1 integer, c2 varchar(50)) \n>> server fdw options (table_name 'gtt1');\n>> CREATE FOREIGN TABLE\n>>\n>> postgres=# select * from gtt1;\n>>  c1 |   c2\n>> ----+---------\n>>   1 | gtt_c21\n>> (1 row)\n>>\n>> postgres=# select * from f_gtt1;\n>>  c1 | c2\n>> ----+----\n>> (0 rows)\n>>\n>> -- \n>\n> I understand that postgre_fdw works similar to dblink.\n> postgre_fdw access to the table requires a new connection.\n> The data in the GTT table is empty in the newly established connection.\n> Because GTT shares structure but not data between connections.\n>\n> Try local temp table:\n> create temporary table ltt1 (c1 integer, c2 varchar(50));\n>\n> insert into ltt1 values 
(1,'gtt_c21');\n>\n> create foreign table f_ltt1 (c1 integer, c2 varchar(50)) server fdw \n> options (table_name 'ltt1');\n>\n> select * from ltt1;\n>  c1 |   c2\n> ----+---------\n>   1 | gtt_c21\n> (1 row)\n>\n> select * from l_gtt1;\n> ERROR: relation \"l_gtt1\" does not exist\n> LINE 1: select * from l_gtt1;\n>\n>\n> Wenjing\n>\n>\n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 16 Mar 2020 14:28:17 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "po 16. 3. 2020 v 9:58 odesílatel tushar <tushar.ahuja@enterprisedb.com>\nnapsal:\n\n> Hi Wenjing,\n>\n> I have created a global table on X session but i am not able to drop from\n> Y session ?\n>\n> X session - ( connect to psql terminal )\n> postgres=# create global temp table foo(n int);\n> CREATE TABLE\n> postgres=# select * from foo;\n> n\n> ---\n> (0 rows)\n>\n>\n> Y session - ( connect to psql terminal )\n> postgres=# drop table foo;\n> ERROR: can not drop relation foo when other backend attached this global\n> temp table\n>\n> Table has been created so i think - user should be able to drop from\n> another session as well without exit from X session.\n>\n\nBy the original design GTT was not modifiable until is used by any session.\nNow, you cannot to drop normal table when this table is used.\n\nIt is hard to say what is most correct behave and design, but for this\nmoment, I think so protecting table against drop while it is used by other\nsession is the best behave.\n\nMaybe for next release we can introduce DROP TABLE x (FORCE) - like we have\nfor DROP DATABASE. 
This behave is very similar.\n\nPavel\n\n\n> regards,\n>\n> On 3/16/20 1:35 PM, 曾文旌(义从) wrote:\n>\n>\n>\n> 2020年3月16日 下午2:23,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>\n> Hi Wenjing,\n> Please check the below scenario, where the Foreign table on GTT not\n> showing records.\n>\n> postgres=# create extension postgres_fdw;\n> CREATE EXTENSION\n> postgres=# do $d$\n> begin\n> execute $$create server fdw foreign data wrapper postgres_fdw\n> options (host 'localhost',dbname 'postgres',port\n> '$$||current_setting('port')||$$')$$;\n> end;\n> $d$;\n> DO\n> postgres=# create user mapping for public server fdw;\n> CREATE USER MAPPING\n>\n> postgres=# create table lt1 (c1 integer, c2 varchar(50));\n> CREATE TABLE\n> postgres=# insert into lt1 values (1,'c21');\n> INSERT 0 1\n> postgres=# create foreign table ft1 (c1 integer, c2 varchar(50)) server\n> fdw options (table_name 'lt1');\n> CREATE FOREIGN TABLE\n> postgres=# select * from ft1;\n> c1 | c2\n> ----+-----\n> 1 | c21\n> (1 row)\n>\n> postgres=# create global temporary table gtt1 (c1 integer, c2 varchar(50));\n> CREATE TABLE\n> postgres=# insert into gtt1 values (1,'gtt_c21');\n> INSERT 0 1\n> postgres=# create foreign table f_gtt1 (c1 integer, c2 varchar(50)) server\n> fdw options (table_name 'gtt1');\n> CREATE FOREIGN TABLE\n>\n> postgres=# select * from gtt1;\n> c1 | c2\n> ----+---------\n> 1 | gtt_c21\n> (1 row)\n>\n> postgres=# select * from f_gtt1;\n> c1 | c2\n> ----+----\n> (0 rows)\n>\n> --\n>\n>\n> I understand that postgre_fdw works similar to dblink.\n> postgre_fdw access to the table requires a new connection.\n> The data in the GTT table is empty in the newly established connection.\n> Because GTT shares structure but not data between connections.\n>\n> Try local temp table:\n> create temporary table ltt1 (c1 integer, c2 varchar(50));\n>\n> insert into ltt1 values (1,'gtt_c21');\n>\n> create foreign table f_ltt1 (c1 integer, c2 varchar(50)) server fdw\n> options (table_name 'ltt1');\n>\n> select * 
from ltt1;\n> c1 | c2\n> ----+---------\n> 1 | gtt_c21\n> (1 row)\n>\n> select * from l_gtt1;\n> ERROR: relation \"l_gtt1\" does not exist\n> LINE 1: select * from l_gtt1;\n>\n>\n> Wenjing\n>\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n> --\n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company\n>\n>", "msg_date": "Mon, 16 Mar 2020 10:04:59 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月16日 下午4:58,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> Hi Wenjing,\n> \n> I have created a global table on X session but i am not able to drop from Y session ?\n> \n> X session - ( connect to psql terminal )\n> postgres=# create global temp table foo(n int);\n> CREATE TABLE\n> postgres=# select * from foo;\n> n \n> ---\n> (0 rows)\n> \n> \n> Y session - ( connect to psql terminal )\n> 
\n> \n> regards,\n> \n> On 3/16/20 1:35 PM, 曾文旌(义从) wrote:\n>> \n>> \n>>> 2020年3月16日 下午2:23,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>> \n>>> Hi Wenjing,\n>>> Please check the below scenario, where the Foreign table on GTT not showing records.\n>>> \n>>> postgres=# create extension postgres_fdw;\n>>> CREATE EXTENSION\n>>> postgres=# do $d$\n>>> begin\n>>> execute $$create server fdw foreign data wrapper postgres_fdw options (host 'localhost',dbname 'postgres',port '$$||current_setting('port')||$$')$$;\n>>> end;\n>>> $d$;\n>>> DO\n>>> postgres=# create user mapping for public server fdw;\n>>> CREATE USER MAPPING\n>>> \n>>> postgres=# create table lt1 (c1 integer, c2 varchar(50));\n>>> CREATE TABLE\n>>> postgres=# insert into lt1 values (1,'c21');\n>>> INSERT 0 1\n>>> postgres=# create foreign table ft1 (c1 integer, c2 varchar(50)) server fdw options (table_name 'lt1');\n>>> CREATE FOREIGN TABLE\n>>> postgres=# select * from ft1;\n>>> c1 | c2 \n>>> ----+-----\n>>> 1 | c21\n>>> (1 row)\n>>> \n>>> postgres=# create global temporary table gtt1 (c1 integer, c2 varchar(50));\n>>> CREATE TABLE\n>>> postgres=# insert into gtt1 values (1,'gtt_c21');\n>>> INSERT 0 1\n>>> postgres=# create foreign table f_gtt1 (c1 integer, c2 varchar(50)) server fdw options (table_name 'gtt1');\n>>> CREATE FOREIGN TABLE\n>>> \n>>> postgres=# select * from gtt1;\n>>> c1 | c2 \n>>> ----+---------\n>>> 1 | gtt_c21\n>>> (1 row)\n>>> \n>>> postgres=# select * from f_gtt1;\n>>> c1 | c2 \n>>> ----+----\n>>> (0 rows)\n>>> \n>>> -- \n>> \n>> I understand that postgre_fdw works similar to dblink.\n>> postgre_fdw access to the table requires a new connection.\n>> The data in the GTT table is empty in the newly established connection.\n>> Because GTT shares structure but not data between connections.\n>> \n>> Try local temp table:\n>> create temporary table ltt1 (c1 integer, c2 varchar(50));\n>> \n>> insert into ltt1 values (1,'gtt_c21');\n>> \n>> create 
foreign table f_ltt1 (c1 integer, c2 varchar(50)) server fdw options (table_name 'ltt1');\n>> \n>> select * from ltt1;\n>> c1 | c2 \n>> ----+---------\n>> 1 | gtt_c21\n>> (1 row)\n>> \n>> select * from l_gtt1;\n>> ERROR: relation \"l_gtt1\" does not exist\n>> LINE 1: select * from l_gtt1;\n>> \n>> \n>> Wenjing\n>> \n>> \n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> \n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n> The Enterprise PostgreSQL Company", "msg_date": "Mon, 16 Mar 2020 17:24:08 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 
This behave is very similar.\nI agree with that.\n\n\nWenjing\n\n> \n> Pavel\n> \n> \n> regards,\n> \n> On 3/16/20 1:35 PM, 曾文旌(义从) wrote:\n>> \n>> \n>>> 2020年3月16日 下午2:23,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>> \n>>> Hi Wenjing,\n>>> Please check the below scenario, where the Foreign table on GTT not showing records.\n>>> \n>>> postgres=# create extension postgres_fdw;\n>>> CREATE EXTENSION\n>>> postgres=# do $d$\n>>> begin\n>>> execute $$create server fdw foreign data wrapper postgres_fdw options (host 'localhost',dbname 'postgres',port '$$||current_setting('port')||$$')$$;\n>>> end;\n>>> $d$;\n>>> DO\n>>> postgres=# create user mapping for public server fdw;\n>>> CREATE USER MAPPING\n>>> \n>>> postgres=# create table lt1 (c1 integer, c2 varchar(50));\n>>> CREATE TABLE\n>>> postgres=# insert into lt1 values (1,'c21');\n>>> INSERT 0 1\n>>> postgres=# create foreign table ft1 (c1 integer, c2 varchar(50)) server fdw options (table_name 'lt1');\n>>> CREATE FOREIGN TABLE\n>>> postgres=# select * from ft1;\n>>> c1 | c2 \n>>> ----+-----\n>>> 1 | c21\n>>> (1 row)\n>>> \n>>> postgres=# create global temporary table gtt1 (c1 integer, c2 varchar(50));\n>>> CREATE TABLE\n>>> postgres=# insert into gtt1 values (1,'gtt_c21');\n>>> INSERT 0 1\n>>> postgres=# create foreign table f_gtt1 (c1 integer, c2 varchar(50)) server fdw options (table_name 'gtt1');\n>>> CREATE FOREIGN TABLE\n>>> \n>>> postgres=# select * from gtt1;\n>>> c1 | c2 \n>>> ----+---------\n>>> 1 | gtt_c21\n>>> (1 row)\n>>> \n>>> postgres=# select * from f_gtt1;\n>>> c1 | c2 \n>>> ----+----\n>>> (0 rows)\n>>> \n>>> -- \n>> \n>> I understand that postgre_fdw works similar to dblink.\n>> postgre_fdw access to the table requires a new connection.\n>> The data in the GTT table is empty in the newly established connection.\n>> Because GTT shares structure but not data between connections.\n>> \n>> Try local temp table:\n>> create temporary table ltt1 (c1 integer, c2 
varchar(50));\n>> \n>> insert into ltt1 values (1,'gtt_c21');\n>> \n>> create foreign table f_ltt1 (c1 integer, c2 varchar(50)) server fdw options (table_name 'ltt1');\n>> \n>> select * from ltt1;\n>> c1 | c2 \n>> ----+---------\n>> 1 | gtt_c21\n>> (1 row)\n>> \n>> select * from l_gtt1;\n>> ERROR: relation \"l_gtt1\" does not exist\n>> LINE 1: select * from l_gtt1;\n>> \n>> \n>> Wenjing\n>> \n>> \n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> \n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n> The Enterprise PostgreSQL Company", "msg_date": "Mon, 16 Mar 2020 17:26:27 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Mon, Mar 16, 2020 at 1:30 PM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n> It seems to be expected behavior: GTT data is private to the session and\n> postgres_fdw establish its own session where content of the table is empty.\n> But if you insert some data in f_gtt1, then you will be able to select\n> this data from it because of connection cache in postgres_fdw.\n>\n\nThanks for the explanation.\nI am able to insert and select the value from f_gtt1.\n\n postgres=# insert into f_gtt1 values (1,'gtt_c21');\nINSERT 0 1\npostgres=# select * from f_gtt1;\n c1 | c2\n----+---------\n 1 | gtt_c21\n(1 row)\n\nI have one more doubt,\nAs you told above \"GTT data is private to the session and postgres_fdw\nestablish its own session where content of the table is empty.\"\nPlease check the below scenario,\nwe can select data from the \"root GTT\" and \"foreign GTT partitioned table\"\nbut we are unable to select data from \"GTT partitioned table\"\n\npostgres=# create global temporary table gtt2 (c1 integer, c2 integer)\npartition by range(c1);\nCREATE TABLE\npostgres=# create global temporary table gtt2_p1 (c1 integer, c2 integer);\nCREATE TABLE\npostgres=# create foreign table f_gtt2_p1 (c1 integer, c2 integer) server\nfdw 
options (table_name 'gtt2_p1');\nCREATE FOREIGN TABLE\npostgres=# alter table gtt2 attach partition f_gtt2_p1 for values from\n(minvalue) to (10);\nALTER TABLE\npostgres=# insert into gtt2 select i,i from generate_series(1,5,2)i;\nINSERT 0 3\npostgres=# select * from gtt2;\n c1 | c2\n----+----\n 1 | 1\n 3 | 3\n 5 | 5\n(3 rows)\n\npostgres=# select * from gtt2_p1;\n c1 | c2\n----+----\n(0 rows)\n\npostgres=# select * from f_gtt2_p1;\n c1 | c2\n----+----\n 1 | 1\n 3 | 3\n 5 | 5\n(3 rows)\n\nIs this an expected behavior?\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Mon, Mar 16, 2020 at 1:30 PM Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n It seems to be expected behavior: GTT data is private to the session\n and postgres_fdw establish its own session where content of the\n table is empty.\n But if you insert some data in f_gtt1, then you will be able to\n select this data from it because of connection cache in\n postgres_fdw.Thanks for the explanation.I am able to insert and select the value from f_gtt1. 
postgres=# insert into f_gtt1 values (1,'gtt_c21');INSERT 0 1postgres=# select * from f_gtt1; c1 |   c2    ----+---------  1 | gtt_c21(1 row)I have one more doubt,As you told above \"GTT data is private to the session\n and postgres_fdw establish its own session where content of the\n table is empty.\"Please check the below scenario, we can select data from the \"root GTT\" and \"foreign GTT partitioned table\" but we are unable to select data from \"GTT partitioned table\"postgres=# create global temporary table gtt2 (c1 integer, c2 integer) partition by range(c1);CREATE TABLEpostgres=# create global temporary table gtt2_p1 (c1 integer, c2 integer);CREATE TABLEpostgres=# create foreign table f_gtt2_p1 (c1 integer, c2 integer) server fdw options (table_name 'gtt2_p1');CREATE FOREIGN TABLEpostgres=# alter table gtt2 attach partition f_gtt2_p1 for values from (minvalue) to (10);ALTER TABLEpostgres=# insert into gtt2 select i,i from generate_series(1,5,2)i;INSERT 0 3postgres=# select * from gtt2; c1 | c2 ----+----  1 |  1  3 |  3  5 |  5(3 rows)postgres=# select * from gtt2_p1; c1 | c2 ----+----(0 rows)postgres=# select * from f_gtt2_p1; c1 | c2 ----+----  1 |  1  3 |  3  5 |  5(3 rows)Is this an expected behavior?-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 16 Mar 2020 15:01:48 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月16日 下午5:31,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> \n> \n> On Mon, Mar 16, 2020 at 1:30 PM Konstantin Knizhnik <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n> \n> It seems to be expected behavior: GTT data is private to the session and postgres_fdw establish its own session where content of the table is empty.\n> But if you insert some data in f_gtt1, then you will be able to select this data from it because of connection cache 
in postgres_fdw.\n> \n> Thanks for the explanation.\n> I am able to insert and select the value from f_gtt1.\n> \n> postgres=# insert into f_gtt1 values (1,'gtt_c21');\n> INSERT 0 1\n> postgres=# select * from f_gtt1;\n> c1 | c2 \n> ----+---------\n> 1 | gtt_c21\n> (1 row)\n> \n> I have one more doubt,\n> As you told above \"GTT data is private to the session and postgres_fdw establish its own session where content of the table is empty.\"\n> Please check the below scenario, \n> we can select data from the \"root GTT\" and \"foreign GTT partitioned table\" but we are unable to select data from \"GTT partitioned table\"\npostgres=# select pg_backend_pid();\n pg_backend_pid \n----------------\n 119135\n(1 row)\n\npostgres=# select * from pg_gtt_attached_pids;\n schemaname | tablename | relid | pid \n------------+-----------+-------+--------\n public | gtt2_p1 | 73845 | 119135\n public | gtt2_p1 | 73845 | 51482\n(2 rows)\n\n\npostgres=# select datid,datname,pid,application_name,query from pg_stat_activity where usename = ‘wenjing';\n datid | datname | pid | application_name | query \n-------+----------+--------+------------------+------------------------------------------------------------------------------------------------------\n 13589 | postgres | 119135 | psql | select datid,datname,pid,application_name,query from pg_stat_activity where usename = 'wenjing';\n 13589 | postgres | 51482 | postgres_fdw | COMMIT TRANSACTION\n(2 rows)\n\nThis can be explained\nThe postgre_fdw connection has not been disconnected, and it produced data in another session.\nIn other words, gtt2_p1 is empty in session 119135, but not in session 51482.\n\n\n> \n> postgres=# create global temporary table gtt2 (c1 integer, c2 integer) partition by range(c1);\n> CREATE TABLE\n> postgres=# create global temporary table gtt2_p1 (c1 integer, c2 integer);\n> CREATE TABLE\n> postgres=# create foreign table f_gtt2_p1 (c1 integer, c2 integer) server fdw options (table_name 'gtt2_p1');\n> CREATE 
FOREIGN TABLE\n> postgres=# alter table gtt2 attach partition f_gtt2_p1 for values from (minvalue) to (10);\n> ALTER TABLE\n> postgres=# insert into gtt2 select i,i from generate_series(1,5,2)i;\n> INSERT 0 3\n> postgres=# select * from gtt2;\n> c1 | c2 \n> ----+----\n> 1 | 1\n> 3 | 3\n> 5 | 5\n> (3 rows)\n> \n> postgres=# select * from gtt2_p1;\n> c1 | c2 \n> ----+----\n> (0 rows)\n> \n> postgres=# select * from f_gtt2_p1;\n> c1 | c2 \n> ----+----\n> 1 | 1\n> 3 | 3\n> 5 | 5\n> (3 rows)\n> \n> Is this an expected behavior?\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Mon, 16 Mar 2020 17:57:47 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月12日 下午8:22,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi Wenjing,\n> \n> Please check the below findings:\n> After running \"TRUNCATE\" command, the \"relfilenode\" field is not changing for GTT \n> whereas, for Simple table/Temp table \"relfilenode\" field is changing after TRUNCATE.\n> \n> Case 1: Getting same \"relfilenode\" for GTT after and before \"TRUNCATE\"\n> postgres=# create global temporary table gtt1(c1 int) on commit delete rows;\n> CREATE TABLE\n> postgres=# select relfilenode from pg_class where relname ='gtt1';\n> relfilenode \n> -------------\n> 16384\n> (1 row)\n> postgres=# truncate gtt1;\n> TRUNCATE TABLE\n> postgres=# select relfilenode from pg_class where relname ='gtt1';\n> relfilenode \n> -------------\n> 16384\n> (1 row)\n> \n> postgres=# create global temporary table gtt2(c1 int) on commit preserve rows;\n> CREATE TABLE\n> postgres=# select relfilenode from pg_class where relname ='gtt2';\n> relfilenode \n> -------------\n> 16387\n> (1 row)\n> postgres=# truncate gtt2;\n> TRUNCATE TABLE\n> postgres=# select relfilenode from pg_class where relname ='gtt2';\n> relfilenode \n> -------------\n> 16387\n> (1 row)\n> \n> \n> Case 2: \"relfilenode\" changes after \"TRUNCATE\" for Simple table/Temp table\n> postgres=# create temporary table temp3(c1 int) on commit
preserve rows;\n> CREATE TABLE\n> postgres=# select relfilenode from pg_class where relname ='temp3';\n> relfilenode \n> -------------\n> 16392\n> (1 row)\n> postgres=# truncate temp3;\n> TRUNCATE TABLE\n> postgres=# select relfilenode from pg_class where relname ='temp3';\n> relfilenode \n> -------------\n> 16395\n> (1 row)\n> \n> \n> postgres=# create table tabl4(c1 int);\n> CREATE TABLE\n> postgres=# select relfilenode from pg_class where relname ='tabl4';\n> relfilenode \n> -------------\n> 16396\n> (1 row)\n> postgres=# truncate tabl4;\n> TRUNCATE TABLE\n> postgres=# select relfilenode from pg_class where relname ='tabl4';\n> relfilenode \n> -------------\n> 16399\n> (1 row)\n\nTruncated GTT has been supported. \nNow it clears the data in the table by switching relfilenode and can support rollback.\nNote that the latest relfilenode in GTT is not stored in pg_class, you can view them in the view pg_gtt_stats.\n\npostgres=# create global temp table gtt1(a int primary key);\nCREATE TABLE\npostgres=# insert into gtt1 select generate_series(1,10000);\nINSERT 0 10000\npostgres=# select tablename,relfilenode from pg_gtt_relstats;\n tablename | relfilenode \n-----------+-------------\n gtt1 | 16406\n gtt1_pkey | 16409\n(2 rows)\npostgres=# truncate gtt1;\nTRUNCATE TABLE\npostgres=# \npostgres=# select tablename,relfilenode from pg_gtt_relstats;\n tablename | relfilenode \n-----------+-------------\n gtt1 | 16411\n gtt1_pkey | 16412\n(2 rows)\n\n\n\nWenjing\n\n\n\n\n> \n> \n> On Thu, Mar 12, 2020 at 3:36 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> \n> \n> > 2020年3月12日 上午4:12,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n> > \n> > On Wed, Mar 11, 2020 at 9:07 AM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> >> reindex need change relfilenode, but GTT is not currently supported.\n> > \n> > In my view that'd have to be fixed somehow.\n> Ok , I am working on it.\n> 
\n> \n> \n> > \n> > -- \n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> > The Enterprise PostgreSQL Company\n> \n> \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Tue, 17 Mar 2020 11:42:24 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月13日 下午8:40,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 3/9/20 10:01 PM, 曾文旌(义从) wrote:\n>> Fixed in global_temporary_table_v18-pg13.patch.\n> \n> Thanks Wenjing.\n> \n> I am getting this error \"ERROR: could not open file \"base/13589/t3_16440\": No such file or directory\" if max_active_global_temporary_table set to 0\n> \n> Please refer this scenario -\n> \n> postgres=# create global temp table tab1 (n int ) with ( on_commit_delete_rows='true');\n> CREATE TABLE\n> postgres=# insert into tab1 values (1);\n> INSERT 0 1\n> postgres=# select * from tab1;\n> n\n> ---\n> (0 rows)\n> \n> postgres=# alter system set max_active_global_temporary_table=0;\n> ALTER SYSTEM\n> postgres=# \\q\n> [tushar@localhost bin]$ ./pg_ctl -D data/ restart -c -l logs123\n> \n> waiting for server to start.... 
done\n> server started\n> \n> [tushar@localhost bin]$ ./psql postgres\n> psql (13devel)\n> Type \"help\" for help.\n> \n> postgres=# insert into tab1 values (1);\n> ERROR: could not open file \"base/13589/t3_16440\": No such file or directory\n> postgres=#\nThanks for review\nIt is a bug, I fixed in global_temporary_table_v20-pg13.patch\n\n\nWenjing\n\n\n\n\n\n\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company", "msg_date": "Tue, 17 Mar 2020 11:44:15 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "[Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月11日 下午3:52,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> On Mon, Mar 9, 2020 at 10:02 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> \n> \n> Fixed in global_temporary_table_v18-pg13.patch.\n> Hi Wenjing,\n> Thanks for the patch. I have verified the previous issues with \"gtt_v18_pg13.patch\" and those are resolved.\n> Please find below case:\n> \n> postgres=# create sequence seq;\n> CREATE SEQUENCE\n> \n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int PRIMARY KEY) ON COMMIT DELETE ROWS;\n> CREATE TABLE\n> \n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 int PRIMARY KEY) ON COMMIT PRESERVE ROWS;\n> CREATE TABLE\n> \n> postgres=# alter table gtt1 add c2 int default nextval('seq');\n> ERROR: cannot reindex global temporary tables\n> \n> postgres=# alter table gtt2 add c2 int default nextval('seq');\n> ERROR: cannot reindex global temporary tables\nreindex GTT is already supported\n\nPlease check global_temporary_table_v20-pg13.patch\n\n\nWenjing\n\n\n\n> \n> Note: We are getting this error if we have a key column(PK/UNIQUE) in a GTT, and trying to add a column with a default sequence into it.\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com 
<http://www.enterprisedb.com/>", "msg_date": "Tue, 17 Mar 2020 11:45:53 +0800", "msg_from": "\"=?UTF-8?B?5pu+5paH5peMKOS5ieS7jik=?=\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "postgres=# CREATE LOCAL TEMPORARY TABLE gtt1(c1 serial PRIMARY KEY, c2 VARCHAR (50) UNIQUE NOT NULL) ON COMMIT DELETE ROWS;\nCREATE TABLE\npostgres=# CREATE LOCAL TEMPORARY TABLE gtt2(c1 integer NOT NULL, c2 integer NOT NULL,\npostgres(# PRIMARY KEY (c1, c2),\npostgres(# FOREIGN KEY (c1) REFERENCES gtt1 (c1)) ON COMMIT PRESERVE ROWS;\nERROR: unsupported ON COMMIT and foreign key combination\nDETAIL: Table \"gtt2\" references \"gtt1\", but they do not have the same ON COMMIT setting.\n\npostgres=# CREATE LOCAL TEMPORARY TABLE gtt3(c1 serial PRIMARY KEY, c2 VARCHAR (50) UNIQUE NOT NULL) ON COMMIT PRESERVE ROWS;\nCREATE
TABLE\npostgres=# \npostgres=# CREATE LOCAL TEMPORARY TABLE gtt4(c1 integer NOT NULL, c2 integer NOT NULL,\npostgres(# PRIMARY KEY (c1, c2),\npostgres(# FOREIGN KEY (c1) REFERENCES gtt3 (c1)) ON COMMIT DELETE ROWS;\nCREATE TABLE\n\nThe same behavior applies to the local temp table.\nI think, Cause of the problem is temp table with on commit delete rows is not good for reference tables.\nSo, it the error message ”cannot reference an on commit delete rows temporary table.“ ?\n\n\n\n> 2020年3月13日 下午10:16,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi Wenjing,\n> \n> Please check the below combination of GTT with Primary and Foreign key relations, with the ERROR message.\n> \n> Case1:\n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 serial PRIMARY KEY, c2 VARCHAR (50) UNIQUE NOT NULL) ON COMMIT DELETE ROWS;\n> CREATE TABLE\n> \n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 integer NOT NULL, c2 integer NOT NULL,\n> PRIMARY KEY (c1, c2),\n> FOREIGN KEY (c1) REFERENCES gtt1 (c1)) ON COMMIT PRESERVE ROWS;\n> ERROR: unsupported ON COMMIT and foreign key combination\n> DETAIL: Table \"gtt2\" references \"gtt1\", but they do not have the same ON COMMIT setting.\n> \n> Case2:\n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 serial PRIMARY KEY, c2 VARCHAR (50) UNIQUE NOT NULL) ON COMMIT PRESERVE ROWS;\n> CREATE TABLE\n> \n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 integer NOT NULL, c2 integer NOT NULL,\n> PRIMARY KEY (c1, c2),\n> FOREIGN KEY (c1) REFERENCES gtt1 (c1)) ON COMMIT DELETE ROWS;\n> CREATE TABLE\n> \n> In \"case2\" although both the primary table and foreign key GTT do not have the same ON COMMIT setting, still we are able to create the PK-FK relations with GTT.\n> \n> So I hope the detail message(DETAIL: Table \"gtt2\" references \"gtt1\", but they do not have the same ON COMMIT setting.) 
in \"Case1\" should be more clear(something like \"wrong combination of ON COMMIT setting\").\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Thu, 19 Mar 2020 18:21:25 +0800", "msg_from": "\"wenjing.zwj\" <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Thu, Mar 19, 2020 at 3:51 PM wenjing.zwj <wenjing.zwj@alibaba-inc.com>\nwrote:\n\n> postgres=# CREATE LOCAL TEMPORARY TABLE gtt1(c1 serial PRIMARY KEY, c2\n> VARCHAR (50) UNIQUE NOT NULL) ON COMMIT DELETE ROWS;\n> CREATE TABLE\n> postgres=# CREATE LOCAL TEMPORARY TABLE gtt2(c1 integer NOT NULL, c2\n> integer NOT NULL,\n> postgres(# PRIMARY KEY (c1, c2),\n> postgres(# FOREIGN KEY (c1) REFERENCES gtt1 (c1)) ON COMMIT PRESERVE ROWS;\n> ERROR: unsupported ON COMMIT and foreign key combination\n> DETAIL: Table \"gtt2\" references \"gtt1\", but they do not have the same ON\n> COMMIT setting.\n>\n> postgres=# CREATE LOCAL TEMPORARY TABLE gtt3(c1 serial PRIMARY KEY, c2\n> VARCHAR (50) UNIQUE NOT NULL) ON COMMIT PRESERVE ROWS;\n> CREATE TABLE\n> postgres=#\n> postgres=# CREATE LOCAL TEMPORARY TABLE gtt4(c1 integer NOT NULL, c2\n> integer NOT NULL,\n> postgres(# PRIMARY KEY (c1, c2),\n> postgres(# FOREIGN KEY (c1) REFERENCES gtt3 (c1)) ON COMMIT DELETE ROWS;\n> CREATE TABLE\n>\n> The same behavior applies to the local temp table.\n>\nYes, the issue is related to \"local temp table\".\n\nI think, Cause of the problem is temp table with on commit delete rows is\n> not good for reference tables.\n> So, it the error message ”cannot reference an on commit delete rows\n> temporary table.“ ?\n>\nNo, this is not always true.\nWe can create GTT/\"local temp table\" with \"ON COMMIT DELETE ROWS\" which\ncan references to \"ON COMMIT DELETE ROWS\"\n\nBelow are the 4 combinations of GTT/\"local temp table\" reference.\n1. 
\"ON COMMIT PRESERVE ROWS\" can references to \"ON COMMIT PRESERVE ROWS\"\n2. \"ON COMMIT DELETE ROWS\" can references to \"ON COMMIT PRESERVE ROWS\"\n3. \"ON COMMIT DELETE ROWS\" can references to \"ON COMMIT DELETE ROWS\"\nBut\n4. \"ON COMMIT PRESERVE ROWS\" fails to reference \"ON COMMIT DELETE ROWS\"\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 19 Mar 2020 18:09:43 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\nPlease check my findings(on gtt_v20.patch) as below:\n\n*TestCase1:* (cache lookup failed on GTT)\n\n-- Session1:\npostgres=# create global temporary table gtt1(c1 int) on commit delete rows;\nCREATE TABLE\n\n-- Session2:\npostgres=# drop table gtt1 ;\nDROP TABLE\n\n-- Session1:\npostgres=# create global temporary table gtt1(c1 int) on commit delete rows;\nERROR: cache lookup failed for relation 16384\n\n\n*TestCase2:*\n\n-- Session1:\npostgres=# create global temporary table gtt (c1 integer) on commit\npreserve rows;\nCREATE TABLE\npostgres=# insert into gtt values(10);\nINSERT 0 1\n\n-- Session2:\npostgres=# drop table gtt;\nDROP TABLE\n\n\nI hope \"session2\" should not allow to perform the \"DROP\" operation on GTT\nhaving data.\n\n*Behavior of GTT in Oracle Database in such scenario:* For a completed\ntransaction on GTT with(on_commit_delete_rows='FALSE') with data in a\nsession, we will not be able to DROP from any session, we need to TRUNCATE\nthe data first to DROP the table.\n\nSQL> drop table gtt;\ndrop table gtt\n *\nERROR at line 1:\nORA-14452: attempt to create, alter or drop an index on temporary table\nalready\nin use\n\n\n\nOn Tue, Mar 17, 2020 at 9:16 AM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n>\n> 2020年3月11日 下午3:52,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>\n> On Mon, Mar 9, 2020 at 10:02 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com>\n> wrote:\n>\n>>\n>>\n>> Fixed in global_temporary_table_v18-pg13.patch.\n>>\n> Hi Wenjing,\n> Thanks for the patch.
I have verified the previous issues with\n> \"gtt_v18_pg13.patch\" and those are resolved.\n> Please find below case:\n>\n> postgres=# create sequence seq;\n> CREATE SEQUENCE\n>\n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int PRIMARY KEY) ON\n> COMMIT DELETE ROWS;\n> CREATE TABLE\n>\n> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 int PRIMARY KEY) ON\n> COMMIT PRESERVE ROWS;\n> CREATE TABLE\n>\n> postgres=# alter table gtt1 add c2 int default nextval('seq');\n> ERROR: cannot reindex global temporary tables\n>\n> postgres=# alter table gtt2 add c2 int default nextval('seq');\n> ERROR: cannot reindex global temporary tables\n>\n> reindex GTT is already supported\n>\n> Please check global_temporary_table_v20-pg13.patch\n>\n>\n> Wenjing\n>\n>\n>\n>\n> *Note*: We are getting this error if we have a key column(PK/UNIQUE) in a\n> GTT, and trying to add a column with a default sequence into it.\n>\n> --\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 24 Mar 2020 19:04:46 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 3/17/20 9:15 AM, 曾文旌(义从) wrote:\n> reindex GTT is already supported\n>\n> Please check global_temporary_table_v20-pg13.patch\n>\nPlease refer this scenario -\n\n\npostgres=# create global temp table co(n int) ;\nCREATE TABLE\n\npostgres=# create index fff on co(n);\nCREATE INDEX\n\nCase 1-\npostgres=# reindex table  co;\nREINDEX\n\nCase
-2\npostgres=# reindex database postgres ;\nWARNING:  global temp table \"public.co\" skip reindexed\nREINDEX\npostgres=#\n\nCase 2 should work as similar to Case 1.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 25 Mar 2020 16:14:48 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi All,\n\nPlease check the behavior of GTT having column with \"SERIAL\" datatype and\ncolumn with default value as \"SEQUENCE\" as below:\n\n\n*Session1:*postgres=# create sequence gtt_c3_seq;\nCREATE SEQUENCE\npostgres=# create global temporary table gtt(c1 int, c2 serial, c3 int\ndefault nextval('gtt_c3_seq') not null) on commit preserve rows;\nCREATE TABLE\n\n-- Structure of column c2 and c3 are similar:\npostgres=# \\d+ gtt\n Table \"public.gtt\"\n Column | Type | Collation | Nullable | Default\n| Storage | Stats target | Description\n--------+---------+-----------+----------+---------------------------------+---------+--------------+-------------\n c1 | integer | | |\n| plain | |\n c2 | integer | | not null | nextval('gtt_c2_seq'::regclass)\n| plain | |\n c3 | integer | | not null | nextval('gtt_c3_seq'::regclass)\n| plain | |\nAccess method: heap\nOptions: on_commit_delete_rows=false\n\npostgres=# insert into gtt select generate_series(1,3);\nINSERT 0 3\npostgres=# select * from gtt;\n c1 | c2 | c3\n----+----+----\n 1 | 1 | 1\n 2 | 2 | 2\n 3 | 3 | 3\n(3 rows)\n\n\n*Session2:*postgres=# insert into gtt select generate_series(1,3);\nINSERT 0 3\npostgres=# select * from gtt;\n c1 | c2 | c3\n----+----+----\n 1 | 1 | 4\n 2 | 2 | 5\n 3 | 3 | 6\n(3 rows)\n\nKindly let me know, Is this behavior expected?\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 25 Mar 2020 18:22:55 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "st 25. 3.
2020 v 13:53 odesílatel Prabhat Sahu <\nprabhat.sahu@enterprisedb.com> napsal:\n\n> Hi All,\n>\n> Please check the behavior of GTT having column with \"SERIAL\" datatype and\n> column with default value as \"SEQUENCE\" as below:\n>\n>\n> *Session1:*postgres=# create sequence gtt_c3_seq;\n> CREATE SEQUENCE\n> postgres=# create global temporary table gtt(c1 int, c2 serial, c3 int\n> default nextval('gtt_c3_seq') not null) on commit preserve rows;\n> CREATE TABLE\n>\n> -- Structure of column c2 and c3 are similar:\n> postgres=# \\d+ gtt\n> Table \"public.gtt\"\n> Column | Type | Collation | Nullable | Default\n> | Storage | Stats target | Description\n>\n> --------+---------+-----------+----------+---------------------------------+---------+--------------+-------------\n> c1 | integer | | |\n> | plain | |\n> c2 | integer | | not null | nextval('gtt_c2_seq'::regclass)\n> | plain | |\n> c3 | integer | | not null | nextval('gtt_c3_seq'::regclass)\n> | plain | |\n> Access method: heap\n> Options: on_commit_delete_rows=false\n>\n> postgres=# insert into gtt select generate_series(1,3);\n> INSERT 0 3\n> postgres=# select * from gtt;\n> c1 | c2 | c3\n> ----+----+----\n> 1 | 1 | 1\n> 2 | 2 | 2\n> 3 | 3 | 3\n> (3 rows)\n>\n>\n> *Session2:*postgres=# insert into gtt select generate_series(1,3);\n> INSERT 0 3\n> postgres=# select * from gtt;\n> c1 | c2 | c3\n> ----+----+----\n> 1 | 1 | 4\n> 2 | 2 | 5\n> 3 | 3 | 6\n> (3 rows)\n>\n> Kindly let me know, Is this behavior expected?\n>\n\nIt is interesting side effect - theoretically it is not important, because\nsequence ensure just unique values - so values are not important.\n\nYou created classic shared sequence so the behave is correct and expected.\n\nPavel\n\n\n> --\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Wed, 25 Mar 2020 13:57:05 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global 
temporary tables" }, { "msg_contents": "On 3/17/20 9:15 AM, 曾文旌(义从) wrote:\n> Please check global_temporary_table_v20-pg13.patch\n\nThere is a typo in the error message\n\npostgres=# create global temp table test(a int ) \nwith(on_commit_delete_rows=true) on commit delete rows;\nERROR:  can not defeine global temp table with on commit and with clause \nat same time\npostgres=#\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 25 Mar 2020 19:46:09 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月25日 下午8:52,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi All,\n> \n> Please check the behavior of GTT having column with \"SERIAL\" datatype and column with default value as \"SEQUENCE\" as below:\n> \n> Session1:\n> postgres=# create sequence gtt_c3_seq;\n> CREATE SEQUENCE\n> postgres=# create global temporary table gtt(c1 int, c2 serial, c3 int default nextval('gtt_c3_seq') not null) on commit preserve rows;\n> CREATE TABLE\n> \n> -- Structure of column c2 and c3 are similar:\n> postgres=# \\d+ gtt\n> Table \"public.gtt\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n> --------+---------+-----------+----------+---------------------------------+---------+--------------+-------------\n> c1 | integer | | | | plain | | \n> c2 | integer | | not null | nextval('gtt_c2_seq'::regclass) | plain | | \n> c3 | integer | | not null | nextval('gtt_c3_seq'::regclass) | plain | | \n> Access method: heap\n> Options: on_commit_delete_rows=false\n> \n> postgres=# insert into gtt select generate_series(1,3);\n> INSERT 0 3\n> postgres=# select * from gtt;\n> c1 | c2 | c3 \n> ----+----+----\n> 1 | 1 | 1\n> 2 | 2 | 2\n> 3 | 3 | 3\n> (3 rows)\n> \n> Session2:\n> postgres=# insert into gtt select generate_series(1,3);\n> INSERT 0 3\n> 
postgres=# select * from gtt;\n> c1 | c2 | c3 \n> ----+----+----\n> 1 | 1 | 4\n> 2 | 2 | 5\n> 3 | 3 | 6\n> (3 rows)\n> \n> Kindly let me know, Is this behavior expected?\n> \n> -- \n\npostgres=# \\d+\n List of relations\n Schema | Name | Type | Owner | Persistence | Size | Description \n--------+------------+----------+-------------+-------------+------------+-------------\n public | gtt | table | wenjing.zwj | session | 8192 bytes | \n public | gtt_c2_seq | sequence | wenjing.zwj | session | 8192 bytes | \n public | gtt_c3_seq | sequence | wenjing.zwj | permanent | 8192 bytes | \n(3 rows)\n\nThis is expected.\nA GTT's sequence is session-specific, just like the GTT itself, so gtt_c2_seq is independent in each session.\ngtt_c3_seq is a classic sequence.\n\n\n\nWenjing\n\n\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Thu, 26 Mar 2020 11:15:08 +0800", "msg_from": "wjzeng <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月24日 下午9:34,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi Wenjing,\n> Please check my findings(on gtt_v20.patch) as below:\n> \n> TestCase1: (cache lookup failed on GTT)\n> -- Session1:\n> postgres=# create global temporary table gtt1(c1 int) on commit delete rows;\n> CREATE TABLE\n> \n> -- Session2:\n> postgres=# drop table gtt1 ;\n> DROP TABLE\n> \n> -- Session1:\n> postgres=# create global temporary table gtt1(c1 int) on commit delete rows;\n> ERROR: cache lookup failed for relation 16384\n> \n> TestCase2:\n> -- Session1:\n> postgres=# create global temporary table gtt (c1 integer) on commit preserve rows;\n> CREATE TABLE\n> postgres=# insert into gtt values(10);\n> INSERT 0 1\n> \n> -- Session2:\n> postgres=# drop table gtt;\n> DROP TABLE\n> \n> I hope \"session2\" should not allow to perform the \"DROP\" operation on GTT having data.\n\nSorry, I introduced this bug in my refactoring.\nIt's 
been fixed.\n\nWenjing\n\n\n\n> \n> Behavior of GTT in Oracle Database in such scenario: For a completed transaction on GTT with(on_commit_delete_rows='FALSE') with data in a session, we will not be able to DROP from any session, we need to TRUNCATE the data first to DROP the table.\n> SQL> drop table gtt;\n> drop table gtt\n> *\n> ERROR at line 1:\n> ORA-14452: attempt to create, alter or drop an index on temporary table already\n> in use\n> \n> \n> On Tue, Mar 17, 2020 at 9:16 AM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> \n> \n>> 2020年3月11日 下午3:52,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>> \n>> On Mon, Mar 9, 2020 at 10:02 PM 曾文旌(义从) <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>> \n>> \n>> Fixed in global_temporary_table_v18-pg13.patch.\n>> Hi Wenjing,\n>> Thanks for the patch. I have verified the previous issues with \"gtt_v18_pg13.patch\" and those are resolved.\n>> Please find below case:\n>> \n>> postgres=# create sequence seq;\n>> CREATE SEQUENCE\n>> \n>> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int PRIMARY KEY) ON COMMIT DELETE ROWS;\n>> CREATE TABLE\n>> \n>> postgres=# CREATE GLOBAL TEMPORARY TABLE gtt2(c1 int PRIMARY KEY) ON COMMIT PRESERVE ROWS;\n>> CREATE TABLE\n>> \n>> postgres=# alter table gtt1 add c2 int default nextval('seq');\n>> ERROR: cannot reindex global temporary tables\n>> \n>> postgres=# alter table gtt2 add c2 int default nextval('seq');\n>> ERROR: cannot reindex global temporary tables\n> reindex GTT is already supported\n> \n> Please check global_temporary_table_v20-pg13.patch\n> \n> \n> Wenjing\n> \n> \n> \n>> \n>> Note: We are getting this error if we have a key column(PK/UNIQUE) in a GTT, and trying to add a column with a default sequence into it.\n>> \n>> -- \n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> \n> \n> \n> -- \n> With 
Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Thu, 26 Mar 2020 11:36:48 +0800", "msg_from": "wjzeng <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月25日 下午6:44,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 3/17/20 9:15 AM, 曾文旌(义从) wrote:\n>> reindex GTT is already supported\n>> \n>> Please check global_temporary_table_v20-pg13.patch\n>> \n> Please refer this scenario -\n> \n> \n> postgres=# create global temp table co(n int) ;\n> CREATE TABLE\n> \n> postgres=# create index fff on co(n);\n> CREATE INDEX\n> \n> Case 1-\n> postgres=# reindex table co;\n> REINDEX\n> \n> Case -2\n> postgres=# reindex database postgres ;\n> WARNING: global temp table \"public.co\" skip reindexed\nI fixed in global_temporary_table_v21-pg13.patch\n\n\nWenjing\n\n\n> REINDEX\n> postgres=#\n> \n> Case 2 should work as similar to Case 1.\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company", "msg_date": "Thu, 26 Mar 2020 11:37:30 +0800", "msg_from": "wjzeng <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月25日 下午10:16,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 3/17/20 9:15 AM, 曾文旌(义从) wrote:\n>> Please check global_temporary_table_v20-pg13.patch\n> \n> There is a typo in the error message\n> \n> postgres=# create global temp table test(a int ) with(on_commit_delete_rows=true) on commit delete rows;\n> ERROR: can not defeine global temp table with on commit and with clause at same time\n> postgres=#\nThank you for pointing it out.\nI fixed in global_temporary_table_v21-pg13.patch\n\n\nWenjing\n\n\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company", "msg_date": "Thu, 26 Mar 2020 11:38:39 +0800", "msg_from": 
"wjzeng <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> Sorry, I introduced this bug in my refactoring.\n> It's been fixed.\n>\n> Wenjing\n>\n> Hi Wenjing,\nThis patch(gtt_v21_pg13.patch) is not applicable on PG HEAD, I hope you\nhave prepared the patch on top of some previous commit.\nCould you please rebase the patch which we can apply on HEAD ?\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 26 Mar 2020 10:04:01 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月26日 下午12:34,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> \n> Sorry, I introduced this bug in my refactoring.\n> It's been fixed.\n> \n> Wenjing\n> \n> Hi Wenjing,\n> This patch(gtt_v21_pg13.patch) is not applicable on PG HEAD, I hope you have prepared the patch on top of some previous commit. 
\n> Could you please rebase the patch which we can apply on HEAD ?\nYes, It looks like the built-in functions are in conflict with new code.\n\n\nWenjing\n\n\n\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Fri, 27 Mar 2020 13:25:33 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 3/27/20 10:55 AM, 曾文旌 wrote:\n>> Hi Wenjing,\n>> This patch(gtt_v21_pg13.patch) is not applicable on PG HEAD, I hope \n>> you have prepared the patch on top of some previous commit.\n>> Could you please rebase the patch which we can apply on HEAD ?\n> Yes, It looks like the built-in functions are in conflict with new code.\n>\n>\nThis error message looks wrong  to me-\n\npostgres=# reindex table concurrently t ;\nERROR:  cannot create indexes on global temporary tables using \nconcurrent mode\npostgres=#\n\nBetter message would be-\n\nERROR:  cannot reindex global temporary tables concurrently\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 27 Mar 2020 14:51:21 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 3/27/20 10:55 AM, 曾文旌 wrote:\n>> Hi Wenjing,\n>> This patch(gtt_v21_pg13.patch) is not applicable on PG HEAD, I hope \n>> you have prepared the patch on top of some previous commit.\n>> Could you please rebase the patch which we can apply on HEAD ?\n> Yes, It looks like the built-in functions are in conflict with new code.\n>\nIn this below scenario, pg_dump is failing -\n\ntest=# CREATE database foo;\nCREATE DATABASE\ntest=# \\c foo\nYou are now connected to database \"foo\" as user \"tushar\".\nfoo=# CREATE GLOBAL TEMPORARY TABLE bar(c1 bigint, c2 bigserial) on \ncommit PRESERVE rows;\nCREATE TABLE\nfoo=# \\q\n\n[tushar@localhost bin]$ ./pg_dump -Fp foo > /tmp/rf2\npg_dump: error: query to get data of sequence \"bar_c2_seq\" returned 0 \nrows (expected 1)\n[tushar@localhost bin]$\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 27 Mar 2020 15:36:24 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月27日 下午6:06,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 3/27/20 10:55 AM, 曾文旌 wrote:\n>>> Hi Wenjing,\n>>> This patch(gtt_v21_pg13.patch) is not applicable on PG HEAD, I hope you have prepared the patch on top of some previous commit. 
\n>>> Could you please rebase the patch which we can apply on HEAD ?\n>> Yes, It looks like the built-in functions are in conflict with new code.\n>> \n> In this below scenario, pg_dump is failing -\n> \n> test=# CREATE database foo;\n> CREATE DATABASE\n> test=# \\c foo\n> You are now connected to database \"foo\" as user \"tushar\".\n> foo=# CREATE GLOBAL TEMPORARY TABLE bar(c1 bigint, c2 bigserial) on commit PRESERVE rows;\n> CREATE TABLE\n> foo=# \\q\n> \n> [tushar@localhost bin]$ ./pg_dump -Fp foo > /tmp/rf2\n> pg_dump: error: query to get data of sequence \"bar_c2_seq\" returned 0 rows (expected 1)\n> [tushar@localhost bin]$ \n> \n> \nThanks for review\nFixed in global_temporary_table_v23-pg13.patch\n\n\n\nWenjing\n\n\n\n\n\n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n> The Enterprise PostgreSQL Company", "msg_date": "Tue, 31 Mar 2020 12:11:58 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月27日 下午5:21,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 3/27/20 10:55 AM, 曾文旌 wrote:\n>>> Hi Wenjing,\n>>> This patch(gtt_v21_pg13.patch) is not applicable on PG HEAD, I hope you have prepared the patch on top of some previous commit. 
\n>>> Could you please rebase the patch which we can apply on HEAD ?\n>> Yes, It looks like the built-in functions are in conflict with new code.\n>> \n>> \n> This error message looks wrong to me-\n> \n> postgres=# reindex table concurrently t ;\n> ERROR: cannot create indexes on global temporary tables using concurrent mode\n> postgres=# \n> \n> Better message would be-\n> \n> ERROR: cannot reindex global temporary tables concurrently\n> \nI found that the local temp table automatically disables concurrency mode.\nso, I made some improvements, The reindex GTT behaves the same as the local temp table.\n\n\nWenjing\n\n\n\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n> The Enterprise PostgreSQL Company", "msg_date": "Tue, 31 Mar 2020 12:16:31 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\nThanks for the new patch.\nI saw with the patch(gtt_v23.patch), we are supporting the new concept\n\"global temporary sequence\"(i.e. 
session-specific sequence), is this\nintentional?\n\npostgres=# create *global temporary sequence* gt_seq;\nCREATE SEQUENCE\npostgres=# create sequence seq;\nCREATE SEQUENCE\npostgres=# \\d+\n List of relations\n Schema | Name | Type | Owner | Persistence | Size |\nDescription\n--------+--------+----------+-------+-------------+------------+-------------\n *public | gt_seq | sequence | edb | session | 8192 bytes |*\n public | seq | sequence | edb | permanent | 8192 bytes |\n(2 rows)\n\npostgres=# select *nextval('gt_seq')*, nextval('seq');\n nextval | nextval\n---------+---------\n * 1* | 1\n(1 row)\n\npostgres=# select nextval('gt_seq'), nextval('seq');\n nextval | nextval\n---------+---------\n *2* | 2\n(1 row)\n\n-- Exit and re-connect to psql prompt:\npostgres=# \\q\n[edb@localhost bin]$ ./psql postgres\npsql (13devel)\nType \"help\" for help.\n\npostgres=# select nextval('gt_seq'), nextval('seq');\n nextval | nextval\n---------+---------\n * 1* | 3\n(1 row)\n\npostgres=# select nextval('gt_seq'), nextval('seq');\n nextval | nextval\n---------+---------\n *2 *| 4\n(1 row)\n\nOn Tue, Mar 31, 2020 at 9:46 AM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n>\n> 2020年3月27日 下午5:21,tushar <tushar.ahuja@enterprisedb.com> 写道:\n>\n> On 3/27/20 10:55 AM, 曾文旌 wrote:\n>\n> Hi Wenjing,\n> This patch(gtt_v21_pg13.patch) is not applicable on PG HEAD, I hope you\n> have prepared the patch on top of some previous commit.\n> Could you please rebase the patch which we can apply on HEAD ?\n>\n> Yes, It looks like the built-in functions are in conflict with new code.\n>\n>\n> This error message looks wrong to me-\n>\n> postgres=# reindex table concurrently t ;\n> ERROR: cannot create indexes on global temporary tables using concurrent\n> mode\n> postgres=#\n>\n> Better message would be-\n>\n> ERROR: cannot reindex global temporary tables concurrently\n>\n> I found that the local temp table automatically disables concurrency mode.\n> so, I made some improvements, The reindex GTT 
behaves the same as the\n> local temp table.\n>\n>\n> Wenjing\n>\n>\n>\n>\n> --\n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company\n>\n>\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 31 Mar 2020 19:29:15 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年3月31日 下午9:59,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi Wenjing,\n> Thanks for the new patch.\n> I saw with the patch(gtt_v23.patch), we are supporting the new concept \"global temporary sequence\"(i.e. 
prepared the patch on top of some previous commit. \n>>>> Could you please rebase the patch which we can apply on HEAD ?\n>>> Yes, It looks like the built-in functions are in conflict with new code.\n>>> \n>>> \n>> This error message looks wrong to me-\n>> \n>> postgres=# reindex table concurrently t ;\n>> ERROR: cannot create indexes on global temporary tables using concurrent mode\n>> postgres=# \n>> \n>> Better message would be-\n>> \n>> ERROR: cannot reindex global temporary tables concurrently\n>> \n> I found that the local temp table automatically disables concurrency mode.\n> so, I made some improvements, The reindex GTT behaves the same as the local temp table.\n> \n> \n> Wenjing\n> \n> \n> \n>> \n>> -- \n>> regards,tushar\n>> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n>> The Enterprise PostgreSQL Company\n> \n> \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Wed, 1 Apr 2020 11:22:49 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Apr 1, 2020 at 8:52 AM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n>\n> 2020年3月31日 下午9:59,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>\n> Hi Wenjing,\n> Thanks for the new patch.\n> I saw with the patch(gtt_v23.patch), we are supporting the new concept\n> \"global temporary sequence\"(i.e. 
session-specific sequence), is this\n> intentional?\n>\n> It was supported in earlier versions,\n>\nyes.\n\nThis causes the sequence built into the GTT to automatically become a\n> \"global temp sequence\",\n> Such as create global temp table (a serial);\n> Like GTT, the global temp sequnce is used individually for each session.\n>\n> Recently, I added the global temp sequence syntax so that it can be\n> created independently.\n> The purpose of this is to enable such sequence built into the GTT to\n> support pg_dump and pg_restore.\n>\n\nThanks for the explanation.\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, Apr 1, 2020 at 8:52 AM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:2020年3月31日 下午9:59,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:Hi Wenjing,Thanks for the new patch.I saw with the patch(gtt_v23.patch), we are supporting the new concept \"global temporary sequence\"(i.e. session-specific sequence), is this intentional?It was supported in earlier versions,yes.This causes the sequence built into the GTT to automatically become a \"global temp sequence\",Such as create global temp table (a serial);Like GTT, the global temp sequnce is used individually for each session.Recently, I added the global temp sequence syntax so that it can be created independently.The purpose of this is to enable such sequence built into the GTT to support pg_dump and pg_restore.Thanks for the explanation.-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 1 Apr 2020 09:17:18 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\nI hope we need to change the below error message.\n\npostgres=# create global temporary table gtt(c1 int) on commit preserve\nrows;\nCREATE TABLE\n\npostgres=# create materialized view mvw as select * from gtt;\nERROR: materialized views must 
not use global temporary tables* or views*\n\nAnyway, we are not allowed to create a \"global temporary view\",\nso the above ERROR message should change (i.e. *\" or view\"* needs to be\nremoved from the error message), something like:\n*\"ERROR: materialized views must not use global temporary tables\"*\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\nHi Wenjing,I think we need to change the below error message.postgres=# create global temporary table gtt(c1 int) on commit preserve rows;CREATE TABLEpostgres=# create materialized view mvw as select * from gtt;ERROR: materialized views must not use global temporary tables or viewsAnyway, we are not allowed to create a \"global temporary view\", so the above ERROR message should change (i.e. \" or view\" needs to be removed from the error message), something like:\"ERROR: materialized views must not use global temporary tables\"-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 1 Apr 2020 18:26:13 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi All,\n\nI have noted down a few behavioral differences in our GTT implementation in PG\nas compared to Oracle DB:\nAs per my understanding, the behavior of DROP TABLE in case of \"Normal\ntable and GTT\" in Oracle DB is as below:\n\n 1. Any tables(Normal table / GTT) without having data in a session, we\n will be able to DROP from another session.\n 2. For a completed transaction on a normal table having data, we will be\n able to DROP from another session. If the transaction is not yet complete,\n and we are trying to drop the table from another session, then we will get\n an error. (working as expected)\n 3. For a completed transaction on GTT with(on commit delete rows) (i.e.\n no data in GTT) in a session, we will be able to DROP from another session.\n 4. 
For a completed transaction on GTT with(on commit preserve rows) with\n data in a session, we will not be able to DROP from any session(not even\n from the session in which GTT is created), we need to truncate the table\n data first from all the session(session1, session2) which is having data.\n\n*1. Any tables(Normal table / GTT) without having data in a session, we\nwill be able to DROP from another session.*\n*Session1:*\ncreate table t1 (c1 integer);\ncreate global temporary table gtt1 (c1 integer) on commit delete rows;\ncreate global temporary table gtt2 (c1 integer) on commit preserve rows;\n\n*Session2:*\ndrop table t1;\ndrop table gtt1;\ndrop table gtt2;\n\n-- *Issue 1:* But we are able to drop a simple table and failed to drop GTT\nas below.\n\npostgres=# drop table t1;\nDROP TABLE\npostgres=# drop table gtt1;\nERROR: can not drop relation gtt1 when other backend attached this global\ntemp table\npostgres=# drop table gtt2;\nERROR: can not drop relation gtt2 when other backend attached this global\ntemp table\n\n\n*3. For a completed transaction on GTT with(on commit delete rows) (i.e. no\ndata in GTT) in a session, we will be able to DROP from another session.*\n\n*Session1:*create global temporary table gtt1 (c1 integer) on commit delete\nrows;\n\n*Session2:*\ndrop table gtt1;\n\n-- *Issue 2:* But we are getting error for GTT with(on_commit_delete_rows)\nwithout data.\n\npostgres=# drop table gtt1;\nERROR: can not drop relation gtt1 when other backend attached this global\ntemp table\n\n\n*4. 
For a completed transaction on GTT with(on commit preserve rows) with\ndata in any session, we will not be able to DROP from any session(not even\nfrom the session in which GTT is created)*\n\n*Case1:*\ncreate global temporary table gtt2 (c1 integer) on commit preserve rows;\ninsert into gtt2 values(100);\ndrop table gtt2;\n\nSQL> drop table gtt2;\ndrop table gtt2\n *\nERROR at line 1:\nORA-14452: attempt to create, alter or drop an index on temporary table\nalready in use\n\n-- *Issue 3:* But, we are able to drop the GTT(having data) which we have\ncreated in the same session.\n\npostgres=# drop table gtt2;\nDROP TABLE\n\n\n\n\n*Case2: GTT with(on commit preserve rows) having data in both session1 and\nsession2Session1:*create global temporary table gtt2 (c1 integer) on commit\npreserve rows;\ninsert into gtt2 values(100);\n\n\n*Session2:*insert into gtt2 values(200);\n\n-- If we try to drop the table from any session we should get an error, it\nis working fine.\ndrop table gtt2;\n\nSQL> drop table gtt2;\ndrop table gtt2\n *\nERROR at line 1:\nORA-14452: attempt to create, alter or drop an index on temporary table\nalready in use\n\npostgres=# drop table gtt2 ;\nERROR: can not drop relation gtt2 when other backend attached this global\ntemp table\n\n\n-- To drop the table gtt2 from any session1/session2, we need to truncate\nthe table data first from all the session(session1, session2) which is\nhaving data.\n*Session1:*\ntruncate table gtt2;\n-- Session2:\ntruncate table gtt2;\n\n*Session 2:*\nSQL> drop table gtt2;\n\nTable dropped.\n\n-- *Issue 4:* But we are not able to drop the GTT, even after TRUNCATE the\ntable in all the sessions.\n-- truncate from all sessions where GTT have data.\npostgres=# truncate gtt2 ;\nTRUNCATE TABLE\n\n-- *try to DROP GTT still, we are getting error.*\n\npostgres=# drop table gtt2 ;\nERROR: can not drop relation gtt2 when other backend attached this global\ntemp table\n\n\nTo drop the GTT from any session, we need to exit from all 
other sessions.\npostgres=# drop table gtt2 ;\nDROP TABLE\n\nKindly let me know if I am missing something.\n\n\nOn Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.sahu@enterprisedb.com>\nwrote:\n\n> Hi Wenjing,\n> I hope we need to change the below error message.\n>\n> postgres=# create global temporary table gtt(c1 int) on commit preserve\n> rows;\n> CREATE TABLE\n>\n> postgres=# create materialized view mvw as select * from gtt;\n> ERROR: materialized views must not use global temporary tables* or views*\n>\n> Anyways we are not allowed to create a \"global temporary view\",\n> so the above ERROR message should change(i.e. *\" or view\"* need to be\n> removed from the error message) something like:\n> *\"ERROR: materialized views must not use global temporary tables\"*\n>\n> --\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\nHi All,I have noted down few behavioral difference in our GTT implementation in PG as compared to Oracle DB:As per my understanding, the behavior of DROP TABLE in case of \"Normal table and GTT\" in Oracle DB are as below:Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.For a completed transaction on a normal table having data, we will be able to DROP from another session. If the transaction is not yet complete, and we are trying to drop the table from another session, then we will get an error. (working as expected)For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.For a completed transaction on GTT with(on commit preserve rows) with data in a session, we will not be able to DROP from any session(not even from the session in which GTT is created), we need to truncate the table data first from all the session(session1, session2) which is having data.1. 
Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.Session1:create table t1 (c1 integer);create global temporary table gtt1 (c1 integer) on commit delete rows;create global temporary table gtt2 (c1 integer) on commit preserve rows;Session2:drop table t1;drop table gtt1;drop table gtt2;-- Issue 1: But we are able to drop a simple table and failed to drop GTT as below.\tpostgres=# drop table t1;\tDROP TABLE\tpostgres=# drop table gtt1;\tERROR:  can not drop relation gtt1 when other backend attached this global temp table\tpostgres=# drop table gtt2;\tERROR:  can not drop relation gtt2 when other backend attached this global temp table3. For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.Session1:create global temporary table gtt1 (c1 integer) on commit delete rows;Session2:drop table gtt1;-- Issue 2: But we are getting error for GTT with(on_commit_delete_rows) without data.\tpostgres=# drop table gtt1;\tERROR:  can not drop relation gtt1 when other backend attached this global temp table4. 
For a completed transaction on GTT with(on commit preserve rows) with data in any session, we will not be able to DROP from any session(not even from the session in which GTT is created)Case1:create global temporary table gtt2 (c1 integer) on commit preserve rows;insert into gtt2 values(100);drop table gtt2;\tSQL> drop table gtt2;\tdrop table gtt2\t\t   *\tERROR at line 1:\tORA-14452: attempt to create, alter or drop an index on temporary table already in use-- Issue 3: But, we are able to drop the GTT(having data) which we have created in the same session.\tpostgres=# drop table gtt2;\tDROP TABLECase2: GTT with(on commit preserve rows) having data in both session1 and session2Session1:create global temporary table gtt2 (c1 integer) on commit preserve rows;insert into gtt2 values(100);Session2:insert into gtt2 values(200);-- If we try to drop the table from any session we should get an error, it is working fine.drop table gtt2;\tSQL> drop table gtt2;\tdrop table gtt2\t\t   *\tERROR at line 1:\tORA-14452: attempt to create, alter or drop an index on temporary table already in use\tpostgres=# drop table gtt2 ;\tERROR:  can not drop relation gtt2 when other backend attached this global temp table-- To drop the table gtt2 from any session1/session2, we need to truncate the table data first from all the session(session1, session2) which is having data.Session1:truncate table gtt2;-- Session2:truncate table gtt2; Session 2:\tSQL> drop table gtt2;\tTable dropped.-- Issue 4: But we are not able to drop the GTT, even after TRUNCATE the table in all the sessions.\t-- truncate from all sessions where GTT have data.\tpostgres=# truncate gtt2 ;\tTRUNCATE TABLE\t-- try to DROP GTT still, we are getting error.\tpostgres=# drop table gtt2 ;\tERROR:  can not drop relation gtt2 when other backend attached this global temp tableTo drop the GTT from any session, we need to exit from all other sessions.postgres=# drop table gtt2 ;DROP TABLEKindly let me know if I am missing 
something.On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.sahu@enterprisedb.com> wrote:Hi Wenjing,I hope we need to change the below error message.postgres=# create global temporary table gtt(c1 int) on commit preserve rows;CREATE TABLEpostgres=# create materialized view mvw as select * from gtt;ERROR: materialized views must not use global temporary tables or viewsAnyways we are not allowed to create a \"global temporary view\", so the above ERROR message should change(i.e. \" or view\" need to be removed from the error message) something like:\"ERROR: materialized views must not use global temporary tables\"-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com\n-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 2 Apr 2020 14:15:44 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "čt 2. 4. 2020 v 10:45 odesílatel Prabhat Sahu <prabhat.sahu@enterprisedb.com>\nnapsal:\n\n> Hi All,\n>\n> I have noted down few behavioral difference in our GTT implementation in\n> PG as compared to Oracle DB:\n> As per my understanding, the behavior of DROP TABLE in case of \"Normal\n> table and GTT\" in Oracle DB are as below:\n>\n> 1. Any tables(Normal table / GTT) without having data in a session, we\n> will be able to DROP from another session.\n> 2. For a completed transaction on a normal table having data, we will\n> be able to DROP from another session. If the transaction is not yet\n> complete, and we are trying to drop the table from another session, then we\n> will get an error. (working as expected)\n> 3. For a completed transaction on GTT with(on commit delete rows)\n> (i.e. no data in GTT) in a session, we will be able to DROP from another\n> session.\n> 4. 
For a completed transaction on GTT with(on commit preserve rows)\n> with data in a session, we will not be able to DROP from any session(not\n> even from the session in which GTT is created), we need to truncate the\n> table data first from all the session(session1, session2) which is having\n> data.\n>\n> *1. Any tables(Normal table / GTT) without having data in a session, we\n> will be able to DROP from another session.*\n> *Session1:*\n> create table t1 (c1 integer);\n> create global temporary table gtt1 (c1 integer) on commit delete rows;\n> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>\n> *Session2:*\n> drop table t1;\n> drop table gtt1;\n> drop table gtt2;\n>\n> -- *Issue 1:* But we are able to drop a simple table and failed to drop\n> GTT as below.\n>\n> postgres=# drop table t1;\n> DROP TABLE\n> postgres=# drop table gtt1;\n> ERROR: can not drop relation gtt1 when other backend attached this global\n> temp table\n> postgres=# drop table gtt2;\n> ERROR: can not drop relation gtt2 when other backend attached this global\n> temp table\n>\n>\nI think this is expected behavior. It was proposed for the first release - and\nfor the next releases there can be support for DROP TABLE with a force option,\nlike DROP DATABASE (force).\n\nRegards\n\nPavel\n\n\n> *3. For a completed transaction on GTT with(on commit delete rows) (i.e.\n> no data in GTT) in a session, we will be able to DROP from another session.*\n>\n> *Session1:*create global temporary table gtt1 (c1 integer) on commit\n> delete rows;\n>\n> *Session2:*\n> drop table gtt1;\n>\n> -- *Issue 2:* But we are getting error for GTT\n> with(on_commit_delete_rows) without data.\n>\n> postgres=# drop table gtt1;\n> ERROR: can not drop relation gtt1 when other backend attached this global\n> temp table\n>\n>\n> *4. 
For a completed transaction on GTT with(on commit preserve rows) with\n> data in any session, we will not be able to DROP from any session(not even\n> from the session in which GTT is created)*\n>\n> *Case1:*\n> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n> insert into gtt2 values(100);\n> drop table gtt2;\n>\n> SQL> drop table gtt2;\n> drop table gtt2\n> *\n> ERROR at line 1:\n> ORA-14452: attempt to create, alter or drop an index on temporary table\n> already in use\n>\n> -- *Issue 3:* But, we are able to drop the GTT(having data) which we have\n> created in the same session.\n>\n> postgres=# drop table gtt2;\n> DROP TABLE\n>\n>\n>\n>\n> *Case2: GTT with(on commit preserve rows) having data in both session1 and\n> session2Session1:*create global temporary table gtt2 (c1 integer) on\n> commit preserve rows;\n> insert into gtt2 values(100);\n>\n>\n> *Session2:*insert into gtt2 values(200);\n>\n> -- If we try to drop the table from any session we should get an error, it\n> is working fine.\n> drop table gtt2;\n>\n> SQL> drop table gtt2;\n> drop table gtt2\n> *\n> ERROR at line 1:\n> ORA-14452: attempt to create, alter or drop an index on temporary table\n> already in use\n>\n> postgres=# drop table gtt2 ;\n> ERROR: can not drop relation gtt2 when other backend attached this global\n> temp table\n>\n>\n> -- To drop the table gtt2 from any session1/session2, we need to truncate\n> the table data first from all the session(session1, session2) which is\n> having data.\n> *Session1:*\n> truncate table gtt2;\n> -- Session2:\n> truncate table gtt2;\n>\n> *Session 2:*\n> SQL> drop table gtt2;\n>\n> Table dropped.\n>\n> -- *Issue 4:* But we are not able to drop the GTT, even after TRUNCATE\n> the table in all the sessions.\n> -- truncate from all sessions where GTT have data.\n> postgres=# truncate gtt2 ;\n> TRUNCATE TABLE\n>\n> -- *try to DROP GTT still, we are getting error.*\n>\n> postgres=# drop table gtt2 ;\n> ERROR: can not drop relation 
gtt2 when other backend attached this global\n> temp table\n>\n>\n> To drop the GTT from any session, we need to exit from all other sessions.\n> postgres=# drop table gtt2 ;\n> DROP TABLE\n>\n> Kindly let me know if I am missing something.\n>\n>\n> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.sahu@enterprisedb.com>\n> wrote:\n>\n>> Hi Wenjing,\n>> I hope we need to change the below error message.\n>>\n>> postgres=# create global temporary table gtt(c1 int) on commit preserve\n>> rows;\n>> CREATE TABLE\n>>\n>> postgres=# create materialized view mvw as select * from gtt;\n>> ERROR: materialized views must not use global temporary tables* or views*\n>>\n>> Anyways we are not allowed to create a \"global temporary view\",\n>> so the above ERROR message should change(i.e. *\" or view\"* need to be\n>> removed from the error message) something like:\n>> *\"ERROR: materialized views must not use global temporary tables\"*\n>>\n>> --\n>>\n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>\n>\n> --\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nčt 2. 4. 2020 v 10:45 odesílatel Prabhat Sahu <prabhat.sahu@enterprisedb.com> napsal:Hi All,I have noted down few behavioral difference in our GTT implementation in PG as compared to Oracle DB:As per my understanding, the behavior of DROP TABLE in case of \"Normal table and GTT\" in Oracle DB are as below:Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.For a completed transaction on a normal table having data, we will be able to DROP from another session. If the transaction is not yet complete, and we are trying to drop the table from another session, then we will get an error. (working as expected)For a completed transaction on GTT with(on commit delete rows) (i.e. 
no data in GTT) in a session, we will be able to DROP from another session.For a completed transaction on GTT with(on commit preserve rows) with data in a session, we will not be able to DROP from any session(not even from the session in which GTT is created), we need to truncate the table data first from all the session(session1, session2) which is having data.1. Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.Session1:create table t1 (c1 integer);create global temporary table gtt1 (c1 integer) on commit delete rows;create global temporary table gtt2 (c1 integer) on commit preserve rows;Session2:drop table t1;drop table gtt1;drop table gtt2;-- Issue 1: But we are able to drop a simple table and failed to drop GTT as below.\tpostgres=# drop table t1;\tDROP TABLE\tpostgres=# drop table gtt1;\tERROR:  can not drop relation gtt1 when other backend attached this global temp table\tpostgres=# drop table gtt2;\tERROR:  can not drop relation gtt2 when other backend attached this global temp tableI think so this is expected behave. It was proposed for first release - and for next releases there can be support for DROP TABLE with force option like DROP DATABASE (force).RegardsPavel3. For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.Session1:create global temporary table gtt1 (c1 integer) on commit delete rows;Session2:drop table gtt1;-- Issue 2: But we are getting error for GTT with(on_commit_delete_rows) without data.\tpostgres=# drop table gtt1;\tERROR:  can not drop relation gtt1 when other backend attached this global temp table4. 
For a completed transaction on GTT with(on commit preserve rows) with data in any session, we will not be able to DROP from any session(not even from the session in which GTT is created)Case1:create global temporary table gtt2 (c1 integer) on commit preserve rows;insert into gtt2 values(100);drop table gtt2;\tSQL> drop table gtt2;\tdrop table gtt2\t\t   *\tERROR at line 1:\tORA-14452: attempt to create, alter or drop an index on temporary table already in use-- Issue 3: But, we are able to drop the GTT(having data) which we have created in the same session.\tpostgres=# drop table gtt2;\tDROP TABLECase2: GTT with(on commit preserve rows) having data in both session1 and session2Session1:create global temporary table gtt2 (c1 integer) on commit preserve rows;insert into gtt2 values(100);Session2:insert into gtt2 values(200);-- If we try to drop the table from any session we should get an error, it is working fine.drop table gtt2;\tSQL> drop table gtt2;\tdrop table gtt2\t\t   *\tERROR at line 1:\tORA-14452: attempt to create, alter or drop an index on temporary table already in use\tpostgres=# drop table gtt2 ;\tERROR:  can not drop relation gtt2 when other backend attached this global temp table-- To drop the table gtt2 from any session1/session2, we need to truncate the table data first from all the session(session1, session2) which is having data.Session1:truncate table gtt2;-- Session2:truncate table gtt2; Session 2:\tSQL> drop table gtt2;\tTable dropped.-- Issue 4: But we are not able to drop the GTT, even after TRUNCATE the table in all the sessions.\t-- truncate from all sessions where GTT have data.\tpostgres=# truncate gtt2 ;\tTRUNCATE TABLE\t-- try to DROP GTT still, we are getting error.\tpostgres=# drop table gtt2 ;\tERROR:  can not drop relation gtt2 when other backend attached this global temp tableTo drop the GTT from any session, we need to exit from all other sessions.postgres=# drop table gtt2 ;DROP TABLEKindly let me know if I am missing 
something.On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.sahu@enterprisedb.com> wrote:Hi Wenjing,I think we need to change the below error message.postgres=# create global temporary table gtt(c1 int) on commit preserve rows;CREATE TABLEpostgres=# create materialized view mvw as select * from gtt;ERROR: materialized views must not use global temporary tables or viewsAnyway, we are not allowed to create a \"global temporary view\", so the above ERROR message should change (i.e. \" or view\" needs to be removed from the error message), something like:\"ERROR: materialized views must not use global temporary tables\"-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com\n-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 2 Apr 2020 10:52:01 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "In my opinion\n1 We are developing GTT according to the SQL standard, not Oracle.\n\n2 The implementation differences you listed come from the pg and Oracle storage modules and DDL implementations.\n\n2.1 issue 1 and issue 2\nThe creation of a normal table/GTT defines the catalog and initializes the data store file; in the case of the GTT, it initializes the store file for the current session. \nBut in Oracle it just looks like it only defines the catalog.\nThis is why other sessions cannot drop the GTT in PostgreSQL.\nThis is the reason for issue 1 and issue 2; I think it is reasonable.\n\n2.2 issue 3\nI think the logic of dropping a GTT is:\nWhen only the current session is using the GTT, it is safe to drop the GTT. 
\nbecause the GTT's definition and storage files can be completely deleted from the DB.\nBut if multiple sessions are using this GTT, it is hard to drop the GTT in session A, because removing the local buffer and data file of the GTT in the other sessions is difficult.\nI am not sure why Oracle has this limitation.\nSo, issue 3 is reasonable.\n\n2.3 TRUNCATE Normal table/GTT\nTRUNCATE on a normal table / GTT cleans up the logical data but does not unlink the data store file; in the case of the GTT, that is the store file for the current session.\nBut in Oracle, it just looks like the data store file was cleaned up.\nPostgreSQL storage is obviously different from Oracle's; in other words, the session is detached from the storage.\nThis is the reason for issue 4; I think it is reasonable.\n\nAll in all, I think the current implementation is sufficient for a DBA to manage GTTs.\n\n> On Apr 2, 2020, at 4:45 PM, Prabhat Sahu <prabhat.sahu@enterprisedb.com> wrote:\n> \n> Hi All,\n> \n> I have noted down few behavioral difference in our GTT implementation in PG as compared to Oracle DB:\n> As per my understanding, the behavior of DROP TABLE in case of \"Normal table and GTT\" in Oracle DB are as below:\n> Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.\n> For a completed transaction on a normal table having data, we will be able to DROP from another session. If the transaction is not yet complete, and we are trying to drop the table from another session, then we will get an error. (working as expected)\n> For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.\n> For a completed transaction on GTT with(on commit preserve rows) with data in a session, we will not be able to DROP from any session(not even from the session in which GTT is created), we need to truncate the table data first from all the session(session1, session2) which is having data.\n> 1. 
Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.\n> Session1:\n> create table t1 (c1 integer);\n> create global temporary table gtt1 (c1 integer) on commit delete rows;\n> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n> \n> Session2:\n> drop table t1;\n> drop table gtt1;\n> drop table gtt2;\n> \n> -- Issue 1: But we are able to drop a simple table and failed to drop GTT as below.\n> postgres=# drop table t1;\n> DROP TABLE\n> postgres=# drop table gtt1;\n> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n> postgres=# drop table gtt2;\n> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n> \n> 3. For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.\n> Session1:\n> create global temporary table gtt1 (c1 integer) on commit delete rows;\n> \n> Session2:\n> drop table gtt1;\n> \n> -- Issue 2: But we are getting error for GTT with(on_commit_delete_rows) without data.\n> postgres=# drop table gtt1;\n> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n> \n> 4. 
For a completed transaction on GTT with(on commit preserve rows) with data in any session, we will not be able to DROP from any session(not even from the session in which GTT is created)\n> \n> Case1:\n> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n> insert into gtt2 values(100);\n> drop table gtt2;\n> \n> SQL> drop table gtt2;\n> drop table gtt2\n> *\n> ERROR at line 1:\n> ORA-14452: attempt to create, alter or drop an index on temporary table already in use\n> \n> -- Issue 3: But, we are able to drop the GTT(having data) which we have created in the same session.\n> postgres=# drop table gtt2;\n> DROP TABLE\n> \n> Case2: GTT with(on commit preserve rows) having data in both session1 and session2\n> Session1:\n> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n> insert into gtt2 values(100);\n> \n> Session2:\n> insert into gtt2 values(200);\n> \n> -- If we try to drop the table from any session we should get an error, it is working fine.\n> drop table gtt2;\n> SQL> drop table gtt2;\n> drop table gtt2\n> *\n> ERROR at line 1:\n> ORA-14452: attempt to create, alter or drop an index on temporary table already in use\n> \n> postgres=# drop table gtt2 ;\n> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n> \n> -- To drop the table gtt2 from any session1/session2, we need to truncate the table data first from all the session(session1, session2) which is having data.\n> Session1:\n> truncate table gtt2;\n> -- Session2:\n> truncate table gtt2;\n> \n> Session 2:\n> SQL> drop table gtt2;\n> \n> Table dropped.\n> \n> -- Issue 4: But we are not able to drop the GTT, even after TRUNCATE the table in all the sessions.\n> -- truncate from all sessions where GTT have data.\n> postgres=# truncate gtt2 ;\n> TRUNCATE TABLE\n> \n> -- try to DROP GTT still, we are getting error.\n> postgres=# drop table gtt2 ;\n> ERROR: can not drop relation gtt2 when other backend attached this global temp 
table\n> \n> To drop the GTT from any session, we need to exit from all other sessions.\n> postgres=# drop table gtt2 ;\n> DROP TABLE\n> \n> Kindly let me know if I am missing something.\n> \n> \n> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> wrote:\n> Hi Wenjing,\n> I hope we need to change the below error message.\n> \n> postgres=# create global temporary table gtt(c1 int) on commit preserve rows;\n> CREATE TABLE\n> \n> postgres=# create materialized view mvw as select * from gtt;\n> ERROR: materialized views must not use global temporary tables or views\n> \n> Anyways we are not allowed to create a \"global temporary view\", \n> so the above ERROR message should change(i.e. \" or view\" need to be removed from the error message) something like:\n> \"ERROR: materialized views must not use global temporary tables\"\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Fri, 3 Apr 2020 15:52:52 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "pá 3. 4. 
2020 v 9:52 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:\n\n> In my opinion\n> 1 We are developing GTT according to the SQL standard, not Oracle.\n>\n> 2 The implementation differences you listed come from pg and oracle\n> storage modules and DDL implementations.\n>\n> 2.1 issue 1 and issue 2\n> The creation of Normal table/GTT defines the catalog and initializes the\n> data store file, in the case of the GTT, which initializes the store file\n> for the current session.\n> But in oracle It just looks like only defines the catalog.\n> This causes other sessions can not drop the GTT in PostgreSQL.\n> This is the reason for issue 1 and issue 2, I think it is reasonable.\n>\n> 2.2 issue 3\n> I thinking the logic of drop GTT is\n> When only the current session is using the GTT, it is safe to drop the\n> GTT.\n> because the GTT's definition and storage files can completely delete from\n> db.\n> But, If multiple sessions are using this GTT, it is hard to drop GTT in\n> session a, because remove the local buffer and data file of the GTT in\n> other session is difficult.\n> I am not sure why oracle has this limitation.\n> So, issue 3 is reasonable.\n>\n> 2.3 TRUNCATE Normal table/GTT\n> TRUNCATE Normal table / GTT clean up the logical data but not unlink data\n> store file. in the case of the GTT, which is the store file for the\n> current session.\n> But in oracle, It just looks like data store file was cleaned up.\n> PostgreSQL storage is obviously different from oracle, In other words,\n> session is detached from storage.\n> This is the reason for issue4 I think it is reasonable.\n>\n\nAlthough the implementation of GTT is different, I think TRUNCATE on\nPostgres (when it is really finalized) can remove the session metadata of the GTT\ntoo (and reduce the usage counter). It is not a critical feature, but I think\nit should not be hard to implement. 
From practical reason can be nice to\nhave a tool how to refresh GTT without a necessity to close session.\nTRUNCATE can be this tool.\n\nRegards\n\nPavel\n\n\n> All in all, I think the current implementation is sufficient for dba to\n> manage GTT.\n>\n> 2020年4月2日 下午4:45,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>\n> Hi All,\n>\n> I have noted down few behavioral difference in our GTT implementation in\n> PG as compared to Oracle DB:\n> As per my understanding, the behavior of DROP TABLE in case of \"Normal\n> table and GTT\" in Oracle DB are as below:\n>\n> 1. Any tables(Normal table / GTT) without having data in a session, we\n> will be able to DROP from another session.\n> 2. For a completed transaction on a normal table having data, we will\n> be able to DROP from another session. If the transaction is not yet\n> complete, and we are trying to drop the table from another session, then we\n> will get an error. (working as expected)\n> 3. For a completed transaction on GTT with(on commit delete rows)\n> (i.e. no data in GTT) in a session, we will be able to DROP from another\n> session.\n> 4. For a completed transaction on GTT with(on commit preserve rows)\n> with data in a session, we will not be able to DROP from any session(not\n> even from the session in which GTT is created), we need to truncate the\n> table data first from all the session(session1, session2) which is having\n> data.\n>\n> *1. 
Any tables(Normal table / GTT) without having data in a session, we\n> will be able to DROP from another session.*\n> *Session1:*\n> create table t1 (c1 integer);\n> create global temporary table gtt1 (c1 integer) on commit delete rows;\n> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>\n> *Session2:*\n> drop table t1;\n> drop table gtt1;\n> drop table gtt2;\n>\n> -- *Issue 1:* But we are able to drop a simple table and failed to drop\n> GTT as below.\n>\n> postgres=# drop table t1;\n> DROP TABLE\n> postgres=# drop table gtt1;\n> ERROR: can not drop relation gtt1 when other backend attached this global\n> temp table\n> postgres=# drop table gtt2;\n> ERROR: can not drop relation gtt2 when other backend attached this global\n> temp table\n>\n>\n> *3. For a completed transaction on GTT with(on commit delete rows) (i.e.\n> no data in GTT) in a session, we will be able to DROP from another session.*\n>\n> *Session1:*create global temporary table gtt1 (c1 integer) on commit\n> delete rows;\n>\n> *Session2:*\n> drop table gtt1;\n>\n> -- *Issue 2:* But we are getting error for GTT\n> with(on_commit_delete_rows) without data.\n>\n> postgres=# drop table gtt1;\n> ERROR: can not drop relation gtt1 when other backend attached this global\n> temp table\n>\n>\n> *4. 
For a completed transaction on GTT with(on commit preserve rows) with\n> data in any session, we will not be able to DROP from any session(not even\n> from the session in which GTT is created)*\n>\n> *Case1:*\n> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n> insert into gtt2 values(100);\n> drop table gtt2;\n>\n> SQL> drop table gtt2;\n> drop table gtt2\n> *\n> ERROR at line 1:\n> ORA-14452: attempt to create, alter or drop an index on temporary table\n> already in use\n>\n> -- *Issue 3:* But, we are able to drop the GTT(having data) which we have\n> created in the same session.\n>\n> postgres=# drop table gtt2;\n> DROP TABLE\n>\n>\n>\n>\n> *Case2: GTT with(on commit preserve rows) having data in both session1 and\n> session2Session1:*create global temporary table gtt2 (c1 integer) on\n> commit preserve rows;\n> insert into gtt2 values(100);\n>\n>\n> *Session2:*insert into gtt2 values(200);\n>\n> -- If we try to drop the table from any session we should get an error, it\n> is working fine.\n> drop table gtt2;\n>\n> SQL> drop table gtt2;\n> drop table gtt2\n> *\n> ERROR at line 1:\n> ORA-14452: attempt to create, alter or drop an index on temporary table\n> already in use\n>\n> postgres=# drop table gtt2 ;\n> ERROR: can not drop relation gtt2 when other backend attached this global\n> temp table\n>\n>\n> -- To drop the table gtt2 from any session1/session2, we need to truncate\n> the table data first from all the session(session1, session2) which is\n> having data.\n> *Session1:*\n> truncate table gtt2;\n> -- Session2:\n> truncate table gtt2;\n>\n> *Session 2:*\n> SQL> drop table gtt2;\n>\n> Table dropped.\n>\n> -- *Issue 4:* But we are not able to drop the GTT, even after TRUNCATE\n> the table in all the sessions.\n> -- truncate from all sessions where GTT have data.\n> postgres=# truncate gtt2 ;\n> TRUNCATE TABLE\n>\n> -- *try to DROP GTT still, we are getting error.*\n>\n> postgres=# drop table gtt2 ;\n> ERROR: can not drop relation 
gtt2 when other backend attached this global\n> temp table\n>\n>\n> To drop the GTT from any session, we need to exit from all other sessions.\n> postgres=# drop table gtt2 ;\n> DROP TABLE\n>\n> Kindly let me know if I am missing something.\n>\n>\n> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.sahu@enterprisedb.com>\n> wrote:\n>\n>> Hi Wenjing,\n>> I hope we need to change the below error message.\n>>\n>> postgres=# create global temporary table gtt(c1 int) on commit preserve\n>> rows;\n>> CREATE TABLE\n>>\n>> postgres=# create materialized view mvw as select * from gtt;\n>> ERROR: materialized views must not use global temporary tables* or views*\n>>\n>> Anyways we are not allowed to create a \"global temporary view\",\n>> so the above ERROR message should change(i.e. *\" or view\"* need to be\n>> removed from the error message) something like:\n>> *\"ERROR: materialized views must not use global temporary tables\"*\n>>\n>> --\n>>\n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>\n>\n> --\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n", "msg_date": "Fri, 3 Apr 2020 10:38:50 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\n\nPlease check the allowed values for boolean parameter\n\"on_commit_delete_rows\".\n\npostgres=# create global temp table gtt1(c1 int)\nwith(on_commit_delete_rows='true');\nCREATE TABLE\nSimilarly we can successfully create GTT by using the values as:\n'true','false', true, false, 'ON', 'OFF', ON, OFF, 1, 0 for boolean\nparameter \"on_commit_delete_rows\"\n\nBut we are getting error while using the boolean value as: '1', '0', 't',\n'f', 'yes', 'no', 'y', 'n' as below.\npostgres=# create global temp table gtt11(c1 int)\nwith(on_commit_delete_rows='1');\nERROR: on_commit_delete_rows requires a Boolean value\npostgres=# create global temp table gtt11(c1 int)\nwith(on_commit_delete_rows='0');\nERROR: on_commit_delete_rows requires a Boolean value\npostgres=# create global temp table gtt11(c1 int)\nwith(on_commit_delete_rows='t');\nERROR: on_commit_delete_rows requires a Boolean value\npostgres=# create global temp table gtt11(c1 int)\nwith(on_commit_delete_rows='f');\nERROR: on_commit_delete_rows
requires a Boolean value\npostgres=# create global temp table gtt11(c1 int)\nwith(on_commit_delete_rows='yes');\nERROR: on_commit_delete_rows requires a Boolean value\npostgres=# create global temp table gtt11(c1 int)\nwith(on_commit_delete_rows='no');\nERROR: on_commit_delete_rows requires a Boolean value\npostgres=# create global temp table gtt11(c1 int)\nwith(on_commit_delete_rows='y');\nERROR: on_commit_delete_rows requires a Boolean value\npostgres=# create global temp table gtt11(c1 int)\nwith(on_commit_delete_rows='n');\nERROR: on_commit_delete_rows requires a Boolean value\n\n-- As per the error message \"ERROR: on_commit_delete_rows requires a\nBoolean value\" either we should allow all the boolean values.\n\n*Example*: CREATE VIEW view1 WITH (security_barrier = 'true') as select 5;\nThe syntax of VIEW allows all the above possible boolean values for the\nboolean parameter \"security_barrier\"\n\n\n-- or else we should change the error message something like\n\"ERROR: on_commit_delete_rows requires 'true','false','ON','OFF',1,0 as\nBoolean value\".\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Fri, 3 Apr 2020 18:13:36 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月3日 下午8:43,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi Wenjing,\n> \n> Please check the allowed values for boolean parameter \"on_commit_delete_rows\".\n> \n> postgres=# create global temp table gtt1(c1 int) with(on_commit_delete_rows='true');\n> CREATE TABLE\n> Similarly we can successfully create GTT by using the values as: 'true','false', true, false, 'ON', 'OFF', ON, OFF, 1, 0 for boolean parameter \"on_commit_delete_rows\"\n> \n> But we are getting error while using the boolean value 
as: '1', '0', 't', 'f', 'yes', 'no', 'y', 'n' as below.\n> postgres=# create global temp table gtt11(c1 int) with(on_commit_delete_rows='1');\n> ERROR: on_commit_delete_rows requires a Boolean value\n> postgres=# create global temp table gtt11(c1 int) with(on_commit_delete_rows='0');\n> ERROR: on_commit_delete_rows requires a Boolean value\n> postgres=# create global temp table gtt11(c1 int) with(on_commit_delete_rows='t');\n> ERROR: on_commit_delete_rows requires a Boolean value\n> postgres=# create global temp table gtt11(c1 int) with(on_commit_delete_rows='f');\n> ERROR: on_commit_delete_rows requires a Boolean value\n> postgres=# create global temp table gtt11(c1 int) with(on_commit_delete_rows='yes');\n> ERROR: on_commit_delete_rows requires a Boolean value\n> postgres=# create global temp table gtt11(c1 int) with(on_commit_delete_rows='no');\n> ERROR: on_commit_delete_rows requires a Boolean value\n> postgres=# create global temp table gtt11(c1 int) with(on_commit_delete_rows='y');\n> ERROR: on_commit_delete_rows requires a Boolean value\n> postgres=# create global temp table gtt11(c1 int) with(on_commit_delete_rows='n');\n> ERROR: on_commit_delete_rows requires a Boolean value\nThanks for review.\nThis parameter should support all types of writing of the bool type like parameter autovacuum_enabled.\nSo I fixed in global_temporary_table_v24-pg13.patch.\n\n\nWenjing\n\n\n\n> \n> -- As per the error message \"ERROR: on_commit_delete_rows requires a Boolean value\" either we should allow all the boolean values.\n> Example: CREATE VIEW view1 WITH (security_barrier = 'true') as select 5;\n> The syntax of VIEW allows all the above possible boolean values for the boolean parameter \"security_barrier\"\n> \n> -- or else we should change the error message something like \n> \"ERROR: on_commit_delete_rows requires 'true','false','ON','OFF',1,0 as Boolean value\".\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com 
<http://www.enterprisedb.com/>", "msg_date": "Tue, 7 Apr 2020 16:51:01 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年2月15日 下午6:06,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n>> postgres=# insert into foo select generate_series(1,10000);\n>> INSERT 0 10000\n>> postgres=# \\dt+ foo\n>> List of relations\n>> ┌────────┬──────┬───────┬───────┬─────────────┬────────┬─────────────┐\n>> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\n>> ╞════════╪══════╪═══════╪═══════╪═════════════╪════════╪═════════════╡\n>> │ public │ foo │ table │ pavel │ session │ 384 kB │ │\n>> └────────┴──────┴───────┴───────┴─────────────┴────────┴─────────────┘\n>> (1 row)\n>> \n>> postgres=# truncate foo;\n>> TRUNCATE TABLE\n>> postgres=# \\dt+ foo\n>> List of relations\n>> ┌────────┬──────┬───────┬───────┬─────────────┬───────┬─────────────┐\n>> │ Schema │ Name │ Type │ Owner │ Persistence │ Size │ Description │\n>> ╞════════╪══════╪═══════╪═══════╪═════════════╪═══════╪═════════════╡\n>> │ public │ foo │ table │ pavel │ session │ 16 kB │ │\n>> └────────┴──────┴───────┴───────┴─────────────┴───────┴─────────────┘\n>> (1 row)\n>> \n>> I expect zero size after truncate.\n> Thanks for review.\n> \n> I can explain, I don't think it's a bug.\n> The current implementation of the truncated GTT retains two blocks of FSM pages.\n> The same is true for truncating regular tables in subtransactions.\n> This is an implementation that truncates the table without changing the relfilenode of the table.\n> \n> \n> This is not extra important feature - now this is little bit a surprise, because I was not under transaction.\n> \n> Changing relfilenode, I think, is necessary, minimally for future VACUUM FULL support.\nHI all\n\nVacuum full GTT, cluster GTT is already supported in global_temporary_table_v24-pg13.patch.\n\n\n\nWenjing\n\n\n\n> \n> 
Regards\n> \n> Pavel Stehule\n> \n> \n> Wenjing\n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> > \n>> > -- \n>> > Robert Haas\n>> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> > The Enterprise PostgreSQL Company\n>> \n>", "msg_date": "Tue, 7 Apr 2020 16:57:57 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": ">\n> Thanks for review.\n> This parameter should support all types of writing of the bool type like\n> parameter autovacuum_enabled.\n> So I fixed in global_temporary_table_v24-pg13.patch.\n>\n\nThank you Wenjing for the new patch with the fix and the \"VACUUM FULL GTT\"\nsupport.\nI have verified the above issue now its resolved.\n\nPlease check the below findings on VACUUM FULL.\n\npostgres=# create global temporary table gtt(c1 int) on commit preserve\nrows;\nCREATE TABLE\npostgres=# vacuum FULL ;\nWARNING: global temp table oldest FrozenXid is far in the past\nHINT: please truncate them or kill those sessions that use them.\nVACUUM\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Tue, 7 Apr 2020 15:52:12 +0530", "msg_from": "Prabhat Sahu 
<prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 2020-04-07 10:57, 曾文旌 wrote:\n\n> [global_temporary_table_v24-pg13.patch ]\n\nHi,\n\nWith gcc 9.3.0 (debian stretch), I see some low-key protests during the \nbuild:\n\nindex.c: In function ‘index_drop’:\nindex.c:2051:8: warning: variable ‘rel_persistence’ set but not used \n[-Wunused-but-set-variable]\n 2051 | char rel_persistence;\n | ^~~~~~~~~~~~~~~\nstorage_gtt.c: In function ‘gtt_force_enable_index’:\nstorage_gtt.c:1252:6: warning: unused variable ‘indexOid’ \n[-Wunused-variable]\n 1252 | Oid indexOid = RelationGetRelid(index);\n | ^~~~~~~~\ncluster.c: In function ‘copy_table_data’:\ncluster.c:780:2: warning: this ‘if’ clause does not guard... \n[-Wmisleading-indentation]\n 780 | if (RELATION_IS_GLOBAL_TEMP(OldHeap));\n | ^~\ncluster.c:781:3: note: ...this statement, but the latter is misleadingly \nindented as if it were guarded by the ‘if’\n 781 | is_gtt = true;\n | ^~~~~~\n\n\nErik\n\n\n", "msg_date": "Tue, 07 Apr 2020 12:40:33 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 4/7/20 2:27 PM, 曾文旌 wrote:\n> Vacuum full GTT, cluster GTT is already \n> supported in global_temporary_table_v24-pg13.patch.\n\nHere , it is skipping GTT\n\npostgres=# \\c foo\nYou are now connected to database \"foo\" as user \"tushar\".\nfoo=# create global temporary table  g123( c1 int) ;\nCREATE TABLE\nfoo=# \\q\n[tushar@localhost bin]$ ./vacuumdb --full  foo\nvacuumdb: vacuuming database \"foo\"\nWARNING:  skipping vacuum global temp table \"g123\" because storage is \nnot initialized for current session\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Tue, 7 Apr 2020 16:55:01 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, 
"msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月7日 下午6:22,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Thanks for review.\n> This parameter should support all types of writing of the bool type like parameter autovacuum_enabled.\n> So I fixed in global_temporary_table_v24-pg13.patch.\n> \n> Thank you Wenjing for the new patch with the fix and the \"VACUUM FULL GTT\" support.\n> I have verified the above issue now its resolved.\n> \n> Please check the below findings on VACUUM FULL.\n> \n> postgres=# create global temporary table gtt(c1 int) on commit preserve rows;\n> CREATE TABLE\n> postgres=# vacuum FULL ;\n> WARNING: global temp table oldest FrozenXid is far in the past\n> HINT: please truncate them or kill those sessions that use them.\n> VACUUM\n\nThis is expected,\nThis represents that the GTT FrozenXid is the oldest in the entire db, and dba should vacuum the GTT if he want to push the db datfrozenxid.\nAlso he can use function pg_list_gtt_relfrozenxids() to check which session has \"too old” data and truncate them or kill the sessions.\n\n\n\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Wed, 8 Apr 2020 16:18:27 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 4/7/20 2:27 PM, 曾文旌 wrote:\n> Vacuum full GTT, cluster GTT is already \n> supported in global_temporary_table_v24-pg13.patch.\nPlease refer this below scenario , where pg_upgrade is failing\n1)Server is up and running (./pg_ctl -D data status)\n2)Stop the server ( ./pg_ctl -D data stop)\n3)Connect to server using single user mode ( ./postgres --single -D data \npostgres) and create a global temp table\n[tushar@localhost bin]$ ./postgres --single -D data1233 postgres\n\nPostgreSQL stand-alone backend 13devel\nbackend> create global 
temp table t(n int);\n\n--Press Ctl+D to exit\n\n4)Perform initdb ( ./initdb -D data123)\n5.Run pg_upgrade\n[tushar@localhost bin]$ ./pg_upgrade -d data -D data123 -b . -B .\n--\n--\n--\nRestoring database schemas in the new cluster\n   postgres\n*failure*\nConsult the last few lines of \"pg_upgrade_dump_13592.log\" for\nthe probable cause of the failure.\nFailure, exiting\n\nlog file content  -\n\n[tushar@localhost bin]$ tail -20   pg_upgrade_dump_13592.log\npg_restore: error: could not execute query: ERROR:  pg_type array OID \nvalue not set when in binary upgrade mode\nCommand was:\n-- For binary upgrade, must preserve pg_type oid\nSELECT \npg_catalog.binary_upgrade_set_next_pg_type_oid('13594'::pg_catalog.oid);\n\n\n-- For binary upgrade, must preserve pg_class oids\nSELECT \npg_catalog.binary_upgrade_set_next_heap_pg_class_oid('13593'::pg_catalog.oid);\n\nCREATE GLOBAL TEMPORARY TABLE \"public\".\"t\" (\n     \"n\" integer\n)\nWITH (\"on_commit_delete_rows\"='false');\n\n-- For binary upgrade, set heap's relfrozenxid and relminmxid\nUPDATE pg_catalog.pg_class\nSET relfrozenxid = '0', relminmxid = '0'\nWHERE oid = '\"public\".\"t\"'::pg_catalog.regclass;\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 8 Apr 2020 16:04:20 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Apr 8, 2020 at 1:48 PM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n>\n> 2020年4月7日 下午6:22,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>\n> Thanks for review.\n>> This parameter should support all types of writing of the bool type like\n>> parameter autovacuum_enabled.\n>> So I fixed in global_temporary_table_v24-pg13.patch.\n>>\n>\n> Thank you Wenjing for the new patch with the fix and the \"VACUUM FULL GTT\"\n> support.\n> I have verified the above issue now its resolved.\n>\n> Please 
check the below findings on VACUUM FULL.\n>\n> postgres=# create global temporary table gtt(c1 int) on commit preserve\n> rows;\n> CREATE TABLE\n> postgres=# vacuum FULL ;\n> WARNING: global temp table oldest FrozenXid is far in the past\n> HINT: please truncate them or kill those sessions that use them.\n> VACUUM\n>\n>\n> This is expected,\n> This represents that the GTT FrozenXid is the oldest in the entire db, and\n> dba should vacuum the GTT if he want to push the db datfrozenxid.\n> Also he can use function pg_list_gtt_relfrozenxids() to check which\n> session has \"too old” data and truncate them or kill the sessions.\n>\n\nAgain as per the HINT given, as \"HINT: please truncate them or kill those\nsessions that use them.\"\nThere is only a single session.\nIf we try \"TRUNCATE\" and \"VACUUM FULL\" still the behavior is same as below.\n\npostgres=# truncate gtt ;\nTRUNCATE TABLE\npostgres=# vacuum full;\nWARNING: global temp table oldest FrozenXid is far in the past\nHINT: please truncate them or kill those sessions that use them.\nVACUUM\n\nI have one more finding related to \"CLUSTER table USING index\", Please\ncheck the below issue.\npostgres=# create global temporary table gtt(c1 int) on commit preserve\nrows;\nCREATE TABLE\npostgres=# create index idx1 ON gtt (c1);\nCREATE INDEX\n\n-- exit and re-connect the psql prompt\npostgres=# \\q\n[edb@localhost bin]$ ./psql postgres\npsql (13devel)\nType \"help\" for help.\n\npostgres=# cluster gtt using idx1;\nWARNING: relcache reference leak: relation \"gtt\" not closed\nCLUSTER\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 8 Apr 2020 16:25:12 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 4/7/20 2:27 PM, 曾文旌 wrote:\n> Vacuum full GTT, cluster GTT is already \n> supported in global_temporary_table_v24-pg13.patch.\n\nHi Wenjing,\n\nPlease refer 
to this scenario, where the reindex message does not appear the next\ntime (after reconnecting to the database) for a GTT\n\nA)\n--normal table\npostgres=# create table nt(n int primary key);\nCREATE TABLE\n--GTT table\npostgres=# create global temp table gtt(n int primary key);\nCREATE TABLE\nB)\n--Reindex  , normal table\npostgres=# REINDEX (VERBOSE) TABLE  nt;\nINFO:  index \"nt_pkey\" was reindexed\nDETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nREINDEX\n--reindex GTT table\npostgres=# REINDEX (VERBOSE) TABLE  gtt;\nINFO:  index \"gtt_pkey\" was reindexed\nDETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nREINDEX\nC)\n--Reconnect  to database\npostgres=# \\c\nYou are now connected to database \"postgres\" as user \"tushar\".\nD) again perform step B)\n\npostgres=# REINDEX (VERBOSE) TABLE  nt;\nINFO:  index \"nt_pkey\" was reindexed\nDETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nREINDEX\npostgres=# REINDEX (VERBOSE) TABLE  gtt;   <-- message not coming\nREINDEX\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Thu, 9 Apr 2020 17:16:53 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月8日 下午6:55,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> \n> \n> On Wed, Apr 8, 2020 at 1:48 PM 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> \n> \n>> 2020年4月7日 下午6:22,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>> \n>> Thanks for review.\n>> This parameter should support all types of writing of the bool type like parameter autovacuum_enabled.\n>> So I fixed in global_temporary_table_v24-pg13.patch.\n>> \n>> Thank you Wenjing for the new patch with the fix and the \"VACUUM FULL GTT\" support.\n>> I have verified the above issue now its resolved.\n>> \n>> Please check the below 
findings on VACUUM FULL.\n>> \n>> postgres=# create global temporary table gtt(c1 int) on commit preserve rows;\n>> CREATE TABLE\n>> postgres=# vacuum FULL ;\n>> WARNING: global temp table oldest FrozenXid is far in the past\n>> HINT: please truncate them or kill those sessions that use them.\n>> VACUUM\n> \n> This is expected,\n> This represents that the GTT FrozenXid is the oldest in the entire db, and dba should vacuum the GTT if he want to push the db datfrozenxid.\n> Also he can use function pg_list_gtt_relfrozenxids() to check which session has \"too old” data and truncate them or kill the sessions.\n> \n> Again as per the HINT given, as \"HINT: please truncate them or kill those sessions that use them.\"\n> There is only a single session.\n> If we try \"TRUNCATE\" and \"VACUUM FULL\" still the behavior is same as below.\n> \n> postgres=# truncate gtt ;\n> TRUNCATE TABLE\n> postgres=# vacuum full;\n> WARNING: global temp table oldest FrozenXid is far in the past\n> HINT: please truncate them or kill those sessions that use them.\n> VACUUM\n> \nEven if the GTT is vacuumed first in an all-table VACUUM, the warning message is still emitted,\nso I improved the warning message to make it more reasonable.\n\n\n> I have one more finding related to \"CLUSTER table USING index\", Please check the below issue.\n> postgres=# create global temporary table gtt(c1 int) on commit preserve rows;\n> CREATE TABLE\n> postgres=# create index idx1 ON gtt (c1);\n> CREATE INDEX\n> \n> -- exit and re-connect the psql prompt\n> postgres=# \\q\n> [edb@localhost bin]$ ./psql postgres\n> psql (13devel)\n> Type \"help\" for help.\n> \n> postgres=# cluster gtt using idx1;\n> WARNING: relcache reference leak: relation \"gtt\" not closed\n> CLUSTER\nIt is a bug; I fixed it, please check global_temporary_table_v25-pg13.patch.\n\n\nWenjing\n\n\n\n\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Thu, 9 Apr 2020 
20:55:04 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月7日 下午7:25,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 4/7/20 2:27 PM, 曾文旌 wrote:\n>> Vacuum full GTT, cluster GTT is already supported in global_temporary_table_v24-pg13.patch.\n> \n> Here , it is skipping GTT\n> \n> postgres=# \\c foo\n> You are now connected to database \"foo\" as user \"tushar\".\n> foo=# create global temporary table g123( c1 int) ;\n> CREATE TABLE\n> foo=# \\q\n> [tushar@localhost bin]$ ./vacuumdb --full foo\n> vacuumdb: vacuuming database \"foo\"\n> WARNING: skipping vacuum global temp table \"g123\" because storage is not initialized for current session\nThe message was inappropriate at some point, so I removed it.\n\n\nWenjing\n\n\n\n\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company", "msg_date": "Thu, 9 Apr 2020 20:56:17 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月9日 下午7:46,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 4/7/20 2:27 PM, 曾文旌 wrote:\n>> Vacuum full GTT, cluster GTT is already supported in global_temporary_table_v24-pg13.patch.\n> \n> Hi Wenjing,\n> \n> Please refer this scenario , where reindex message is not coming next time ( after reconnecting to database) for GTT\n> \n> A)\n> --normal table\n> postgres=# create table nt(n int primary key);\n> CREATE TABLE\n> --GTT table\n> postgres=# create global temp table gtt(n int primary key);\n> CREATE TABLE\n> B)\n> --Reindex , normal table\n> postgres=# REINDEX (VERBOSE) TABLE nt;\n> INFO: index \"nt_pkey\" was reindexed\n> DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n> REINDEX\n> --reindex GTT table\n> postgres=# REINDEX (VERBOSE) TABLE gtt;\n> INFO: index 
\"gtt_pkey\" was reindexed\n> DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n> REINDEX\n> C)\n> --Reconnect to database\n> postgres=# \\c\n> You are now connected to database \"postgres\" as user \"tushar\".\n> D) again perform step B)\n> \n> postgres=# REINDEX (VERBOSE) TABLE nt;\n> INFO: index \"nt_pkey\" was reindexed\n> DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n> REINDEX\n> postgres=# REINDEX (VERBOSE) TABLE gtt; <-- message not coming\n> REINDEX\nYes , Since the newly established connection is on the db, the GTT store file is not initialized, so there is no info message.\n\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company", "msg_date": "Thu, 9 Apr 2020 20:58:02 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月7日 下午6:40,Erik Rijkers <er@xs4all.nl> 写道:\n> \n> On 2020-04-07 10:57, 曾文旌 wrote:\n> \n>> [global_temporary_table_v24-pg13.patch ]\n> \n> Hi,\n> \n> With gcc 9.3.0 (debian stretch), I see some low-key protests during the build:\n> \n> index.c: In function ‘index_drop’:\n> index.c:2051:8: warning: variable ‘rel_persistence’ set but not used [-Wunused-but-set-variable]\n> 2051 | char rel_persistence;\n> | ^~~~~~~~~~~~~~~\n> storage_gtt.c: In function ‘gtt_force_enable_index’:\n> storage_gtt.c:1252:6: warning: unused variable ‘indexOid’ [-Wunused-variable]\n> 1252 | Oid indexOid = RelationGetRelid(index);\n> | ^~~~~~~~\n> cluster.c: In function ‘copy_table_data’:\n> cluster.c:780:2: warning: this ‘if’ clause does not guard... 
[-Wmisleading-indentation]\n> 780 | if (RELATION_IS_GLOBAL_TEMP(OldHeap));\n> | ^~\n> cluster.c:781:3: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘if’\n> 781 | is_gtt = true;\n> | ^~~~~~\n> \n\nPart of the problem is that some variables are only used by assert statements; I fixed those warnings.\nPlease provide your configure parameters, and I will verify it again.\n\n\nWenjing\n\n\n\n\n\n> \n> Erik\n> \n>", "msg_date": "Thu, 9 Apr 2020 21:28:25 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 2020-04-09 15:28, 曾文旌 wrote:\n> [global_temporary_table_v25-pg13.patch]\n\n> Part of the problem is that some variables are only used by assert \n> statements, and I fixed those warnings.\n> Please provide your configure parameters, and I will verify it again.\n\n\nHi,\n\nJust now I compiled the newer version of your patch (v25), and the \nwarnings/notes that I saw earlier are now gone. Thank you.\n\n\nIn case you still want it, here is the configure:\n\n-- [2020.04.09 15:06:45 global_temp_tables/1] ./configure \n--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.global_temp_tables \n--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.global_temp_tables/bin.fast \n--libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.global_temp_tables/lib.fast \n--with-pgport=6975 --quiet --enable-depend --with-openssl --with-perl \n--with-libxml --with-libxslt --with-zlib --enable-tap-tests \n--with-extra-version=_0409\n\n-- [2020.04.09 15:07:13 global_temp_tables/1] make core: make --quiet -j \n4\npartbounds.c: In function ‘partition_bounds_merge’:\npartbounds.c:1024:21: warning: unused variable ‘inner_binfo’ \n[-Wunused-variable]\n 1024 | PartitionBoundInfo inner_binfo = inner_rel->boundinfo;\n | ^~~~~~~~~~~\nAll of PostgreSQL successfully made. 
Ready to install.\n\n\nThanks,\n\nErik Rijkers\n\n\n\n\n\n\n\n", "msg_date": "Thu, 09 Apr 2020 15:38:16 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月8日 下午6:34,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 4/7/20 2:27 PM, 曾文旌 wrote:\n>> Vacuum full GTT, cluster GTT is already supported in global_temporary_table_v24-pg13.patch.\n> Please refer this below scenario , where pg_upgrade is failing\n> 1)Server is up and running (./pg_ctl -D data status)\n> 2)Stop the server ( ./pg_ctl -D data stop)\n> 3)Connect to server using single user mode ( ./postgres --single -D data postgres) and create a global temp table\n> [tushar@localhost bin]$ ./postgres --single -D data1233 postgres\n> \n> PostgreSQL stand-alone backend 13devel\n> backend> create global temp table t(n int);\n> \n> --Press Ctl+D to exit\n> \n> 4)Perform initdb ( ./initdb -D data123)\n> 5.Run pg_upgrade\n> [tushar@localhost bin]$ ./pg_upgrade -d data -D data123 -b . 
-B .\n> --\n> --\n> --\n> Restoring database schemas in the new cluster\n> postgres\n> *failure*\n> Consult the last few lines of \"pg_upgrade_dump_13592.log\" for\n> the probable cause of the failure.\n> Failure, exiting\n> \n> log file content -\n> \n> [tushar@localhost bin]$ tail -20 pg_upgrade_dump_13592.log\n> pg_restore: error: could not execute query: ERROR: pg_type array OID value not set when in binary upgrade mode\nI found that the regular table also has this problem, I am very unfamiliar with this part, so I opened another email to consult this problem.\n\n> Command was:\n> -- For binary upgrade, must preserve pg_type oid\n> SELECT pg_catalog.binary_upgrade_set_next_pg_type_oid('13594'::pg_catalog.oid);\n> \n> \n> -- For binary upgrade, must preserve pg_class oids\n> SELECT pg_catalog.binary_upgrade_set_next_heap_pg_class_oid('13593'::pg_catalog.oid);\n> \n> CREATE GLOBAL TEMPORARY TABLE \"public\".\"t\" (\n> \"n\" integer\n> )\n> WITH (\"on_commit_delete_rows\"='false');\n> \n> -- For binary upgrade, set heap's relfrozenxid and relminmxid\n> UPDATE pg_catalog.pg_class\n> SET relfrozenxid = '0', relminmxid = '0'\n> WHERE oid = '\"public\".\"t\"'::pg_catalog.regclass;\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company", "msg_date": "Mon, 13 Apr 2020 16:27:59 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 4/13/20 1:57 PM, 曾文旌 wrote:\n>> [tushar@localhost bin]$ tail -20 pg_upgrade_dump_13592.log\n>> pg_restore: error: could not execute query: ERROR: pg_type array OID value not set when in binary upgrade mode\n> I found that the regular table also has this problem, I am very unfamiliar with this part, so I opened another email to consult this problem.\n>\nohh. 
Thanks.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 13 Apr 2020 14:07:14 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 4/9/20 6:26 PM, 曾文旌 wrote:\n>> On 4/7/20 2:27 PM, 曾文旌 wrote:\n>>> Vacuum full GTT, cluster GTT is already supported in global_temporary_table_v24-pg13.patch.\n>> Here , it is skipping GTT\n>>\n>> postgres=# \\c foo\n>> You are now connected to database \"foo\" as user \"tushar\".\n>> foo=# create global temporary table g123( c1 int) ;\n>> CREATE TABLE\n>> foo=# \\q\n>> [tushar@localhost bin]$ ./vacuumdb --full foo\n>> vacuumdb: vacuuming database \"foo\"\n>> WARNING: skipping vacuum global temp table \"g123\" because storage is not initialized for current session\n> The message was inappropriate at some point, so I removed it.\n>\nThanks Wenjing. 
Please see -if this below behavior is correct\n\nX terminal -\n\npostgres=# create global temp table foo1(n int);\nCREATE TABLE\npostgres=# insert into foo1 values (generate_series(1,10));\nINSERT 0 10\npostgres=# vacuum full ;\nVACUUM\n\nY Terminal -\n\n[tushar@localhost bin]$ ./vacuumdb -f  postgres\nvacuumdb: vacuuming database \"postgres\"\nWARNING:  global temp table oldest relfrozenxid 3276 is the oldest in \nthe entire db\nDETAIL:  The oldest relfrozenxid in pg_class is 3277\nHINT:  If they differ greatly, please consider cleaning up the data in \nglobal temp table.\nWARNING:  global temp table oldest relfrozenxid 3276 is the oldest in \nthe entire db\nDETAIL:  The oldest relfrozenxid in pg_class is 3277\nHINT:  If they differ greatly, please consider cleaning up the data in \nglobal temp table.\n\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 13 Apr 2020 16:02:30 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月13日 下午6:32,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 4/9/20 6:26 PM, 曾文旌 wrote:\n>>> On 4/7/20 2:27 PM, 曾文旌 wrote:\n>>>> Vacuum full GTT, cluster GTT is already supported in global_temporary_table_v24-pg13.patch.\n>>> Here , it is skipping GTT\n>>> \n>>> postgres=# \\c foo\n>>> You are now connected to database \"foo\" as user \"tushar\".\n>>> foo=# create global temporary table g123( c1 int) ;\n>>> CREATE TABLE\n>>> foo=# \\q\n>>> [tushar@localhost bin]$ ./vacuumdb --full foo\n>>> vacuumdb: vacuuming database \"foo\"\n>>> WARNING: skipping vacuum global temp table \"g123\" because storage is not initialized for current session\n>> The message was inappropriate at some point, so I removed it.\n>> \n> Thanks Wenjing. 
Please see -if this below behavior is correct \n> \n> X terminal -\n> \n> postgres=# create global temp table foo1(n int);\n> CREATE TABLE\n> postgres=# insert into foo1 values (generate_series(1,10));\n> INSERT 0 10\n> postgres=# vacuum full ;\n> VACUUM\n> \n> Y Terminal - \n> \n> [tushar@localhost bin]$ ./vacuumdb -f postgres\n> vacuumdb: vacuuming database \"postgres\"\n> WARNING: global temp table oldest relfrozenxid 3276 is the oldest in the entire db\n> DETAIL: The oldest relfrozenxid in pg_class is 3277\n> HINT: If they differ greatly, please consider cleaning up the data in global temp table.\n> WARNING: global temp table oldest relfrozenxid 3276 is the oldest in the entire db\n> DETAIL: The oldest relfrozenxid in pg_class is 3277\n> HINT: If they differ greatly, please consider cleaning up the data in global temp table.\n> \nI improved the logic of the warning message so that when the relfrozenxid gap between the GTT and the rest of the database is small,\nthe warning message is no longer emitted.\n\n\n\nWenjing\n\n\n\n\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/ <https://www.enterprisedb.com/>\n> The Enterprise PostgreSQL Company", "msg_date": "Fri, 17 Apr 2020 17:14:44 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Fri, Apr 17, 2020 at 2:44 PM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n> I improved the logic of the warning message so that when the relfrozenxid gap between\n> the GTT and the rest of the database is small,\n> the warning message is no longer emitted.\n>\n\nHi Wenjing,\nThanks for the patch (v26), I have verified the previous related issues, and\nthey are working fine now.\nPlease check the below scenario: VACUUM from a non-superuser.\n\n-- Create user \"test_gtt\", connect to it, create gtt, VACUUM gtt and VACUUM /\nVACUUM FULL\npostgres=# CREATE USER test_gtt;\nCREATE ROLE\npostgres=# \\c postgres test_gtt\nYou are now connected to database \"postgres\" as user \"test_gtt\".\npostgres=> CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int);\nCREATE TABLE\n\n-- VACUUM gtt is working fine, whereas we are getting huge WARNING for\nVACUUM / VACUUM FULL as below:\npostgres=> VACUUM gtt1 ;\nVACUUM\npostgres=> VACUUM;\nWARNING: skipping \"pg_statistic\" --- only superuser or database owner can\nvacuum it\nWARNING: skipping \"pg_type\" --- only superuser or database owner can\nvacuum it\nWARNING: skipping \"pg_toast_2600\" --- only table or database owner can\nvacuum it\nWARNING: skipping \"pg_toast_2600_index\" --- only table or database owner\ncan vacuum it\n\n... ...\n... ...\n\nWARNING: skipping \"_pg_foreign_tables\" --- only table or database owner\ncan vacuum it\nWARNING: skipping \"foreign_table_options\" --- only table or database owner\ncan vacuum it\nWARNING: skipping \"user_mapping_options\" --- only table or database owner\ncan vacuum it\nWARNING: skipping \"user_mappings\" --- only table or database owner can\nvacuum it\nVACUUM\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 17 Apr 2020 16:56:55 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\n\nPlease check below scenario, we are getting a server crash with \"ALTER\nTABLE\" add column with default value as sequence:\n\n-- Create gtt, exit and re-connect the psql prompt, create sequence, alter\ntable add a column with sequence.\npostgres=# create global temporary table gtt1 (c1 int);\nCREATE TABLE\npostgres=# \\q\n[edb@localhost bin]$ ./psql postgres\npsql (13devel)\nType \"help\" for help.\n\npostgres=# create sequence seq;\nCREATE SEQUENCE\npostgres=# alter table gtt1 add c2 int default nextval('seq');\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n!?> \\q\n\n\n-- Stack trace:\n[edb@localhost bin]$ gdb -q -c data/core.70358 postgres\nReading symbols from\n/home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.\n[New LWP 70358]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: edb postgres [local] ALTER TABLE\n '.\nProgram terminated with signal 6, Aborted.\n#0 0x00007f150223b337 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install\nglibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\nkrb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64\nlibgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64\nopenssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64\nzlib-1.2.7-18.el7.x86_64\n(gdb) bt\n#0 0x00007f150223b337 in raise () from /lib64/libc.so.6\n#1 0x00007f150223ca28 in abort () from /lib64/libc.so.6\n#2 0x0000000000ab2cdd in ExceptionalCondition (conditionName=0xc03ab8\n\"OidIsValid(relfilenode1) && OidIsValid(relfilenode2)\",\n errorType=0xc0371f \"FailedAssertion\", fileName=0xc03492 \"cluster.c\",\nlineNumber=1637) at assert.c:67\n#3 0x000000000065e200 in gtt_swap_relation_files (r1=16384, r2=16390,\ntarget_is_pg_class=false, swap_toast_by_content=false, is_internal=true,\n frozenXid=490, cutoffMulti=1, mapped_tables=0x7ffd841f7ee0) at\ncluster.c:1637\n#4 0x000000000065dcd9 in finish_heap_swap (OIDOldHeap=16384,\nOIDNewHeap=16390, is_system_catalog=false, swap_toast_by_content=false,\n check_constraints=true, is_internal=true, frozenXid=490, cutoffMulti=1,\nnewrelpersistence=103 'g') at cluster.c:1395\n#5 0x00000000006bca18 in ATRewriteTables (parsetree=0x1deab80,\nwqueue=0x7ffd841f80c8, lockmode=8, context=0x7ffd841f8260) at\ntablecmds.c:4991\n#6 0x00000000006ba890 in ATController (parsetree=0x1deab80,\nrel=0x7f150378f330, cmds=0x1deab28, recurse=true, lockmode=8,\ncontext=0x7ffd841f8260)\n at tablecmds.c:3991\n#7 
0x00000000006ba4f8 in AlterTable (stmt=0x1deab80, lockmode=8,\ncontext=0x7ffd841f8260) at tablecmds.c:3644\n#8 0x000000000093b62a in ProcessUtilitySlow (pstate=0x1e0d6d0,\npstmt=0x1deac48,\n queryString=0x1de9b30 \"alter table gtt1 add c2 int default\nnextval('seq');\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\nqueryEnv=0x0, dest=0x1deaf28,\n qc=0x7ffd841f8830) at utility.c:1267\n#9 0x000000000093b141 in standard_ProcessUtility (pstmt=0x1deac48,\nqueryString=0x1de9b30 \"alter table gtt1 add c2 int default\nnextval('seq');\",\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x1deaf28, qc=0x7ffd841f8830) at utility.c:1067\n#10 0x000000000093a22b in ProcessUtility (pstmt=0x1deac48,\nqueryString=0x1de9b30 \"alter table gtt1 add c2 int default\nnextval('seq');\",\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x1deaf28, qc=0x7ffd841f8830) at utility.c:522\n#11 0x000000000093909d in PortalRunUtility (portal=0x1e4fba0,\npstmt=0x1deac48, isTopLevel=true, setHoldSnapshot=false, dest=0x1deaf28,\nqc=0x7ffd841f8830)\n at pquery.c:1157\n#12 0x00000000009392b3 in PortalRunMulti (portal=0x1e4fba0,\nisTopLevel=true, setHoldSnapshot=false, dest=0x1deaf28, altdest=0x1deaf28,\nqc=0x7ffd841f8830)\n at pquery.c:1303\n#13 0x00000000009387d1 in PortalRun (portal=0x1e4fba0,\ncount=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1deaf28,\naltdest=0x1deaf28,\n qc=0x7ffd841f8830) at pquery.c:779\n#14 0x000000000093298b in exec_simple_query (query_string=0x1de9b30 \"alter\ntable gtt1 add c2 int default nextval('seq');\") at postgres.c:1239\n#15 0x0000000000936997 in PostgresMain (argc=1, argv=0x1e13b80,\ndbname=0x1e13a78 \"postgres\", username=0x1e13a58 \"edb\") at postgres.c:4315\n#16 0x00000000008868b3 in BackendRun (port=0x1e0bb50) at postmaster.c:4510\n#17 0x00000000008860a8 in BackendStartup (port=0x1e0bb50) at\npostmaster.c:4202\n#18 0x0000000000882626 in ServerLoop () at postmaster.c:1727\n#19 0x0000000000881efd in PostmasterMain 
(argc=3, argv=0x1de4460) at\npostmaster.c:1400\n#20 0x0000000000789288 in main (argc=3, argv=0x1de4460) at main.c:210\n(gdb)\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 17 Apr 2020 18:29:38 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月17日 下午8:59,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi Wenjing,\n> \n> Please check below scenario, we are getting a server crash with \"ALTER TABLE\" add column with default value as sequence:\n> \n> -- Create gtt, exit and re-connect the psql prompt, create sequence, alter table add a column with sequence.\n> postgres=# create global temporary table gtt1 (c1 int);\n> CREATE TABLE\n> postgres=# \\q\n> [edb@localhost bin]$ ./psql postgres \n> psql (13devel)\n> Type \"help\" for help.\n> \n> postgres=# create sequence seq;\n> CREATE SEQUENCE\n> postgres=# alter table gtt1 add c2 int default nextval('seq');\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or 
while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !?> \\q\nFixed in global_temporary_table_v27-pg13.patch\n\n\nWenjing\n\n\n\n\n> \n> \n> -- Stack trace:\n> [edb@localhost bin]$ gdb -q -c data/core.70358 postgres \n> Reading symbols from /home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.\n> [New LWP 70358]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib64/libthread_db.so.1\".\n> Core was generated by `postgres: edb postgres [local] ALTER TABLE '.\n> Program terminated with signal 6, Aborted.\n> #0 0x00007f150223b337 in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: debuginfo-install glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64\n> (gdb) bt\n> #0 0x00007f150223b337 in raise () from /lib64/libc.so.6\n> #1 0x00007f150223ca28 in abort () from /lib64/libc.so.6\n> #2 0x0000000000ab2cdd in ExceptionalCondition (conditionName=0xc03ab8 \"OidIsValid(relfilenode1) && OidIsValid(relfilenode2)\", \n> errorType=0xc0371f \"FailedAssertion\", fileName=0xc03492 \"cluster.c\", lineNumber=1637) at assert.c:67\n> #3 0x000000000065e200 in gtt_swap_relation_files (r1=16384, r2=16390, target_is_pg_class=false, swap_toast_by_content=false, is_internal=true, \n> frozenXid=490, cutoffMulti=1, mapped_tables=0x7ffd841f7ee0) at cluster.c:1637\n> #4 0x000000000065dcd9 in finish_heap_swap (OIDOldHeap=16384, OIDNewHeap=16390, is_system_catalog=false, swap_toast_by_content=false, \n> check_constraints=true, is_internal=true, frozenXid=490, cutoffMulti=1, newrelpersistence=103 'g') at cluster.c:1395\n> #5 0x00000000006bca18 in ATRewriteTables (parsetree=0x1deab80, wqueue=0x7ffd841f80c8, lockmode=8, context=0x7ffd841f8260) at tablecmds.c:4991\n> #6 
0x00000000006ba890 in ATController (parsetree=0x1deab80, rel=0x7f150378f330, cmds=0x1deab28, recurse=true, lockmode=8, context=0x7ffd841f8260)\n> at tablecmds.c:3991\n> #7 0x00000000006ba4f8 in AlterTable (stmt=0x1deab80, lockmode=8, context=0x7ffd841f8260) at tablecmds.c:3644\n> #8 0x000000000093b62a in ProcessUtilitySlow (pstate=0x1e0d6d0, pstmt=0x1deac48, \n> queryString=0x1de9b30 \"alter table gtt1 add c2 int default nextval('seq');\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1deaf28, \n> qc=0x7ffd841f8830) at utility.c:1267\n> #9 0x000000000093b141 in standard_ProcessUtility (pstmt=0x1deac48, queryString=0x1de9b30 \"alter table gtt1 add c2 int default nextval('seq');\", \n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1deaf28, qc=0x7ffd841f8830) at utility.c:1067\n> #10 0x000000000093a22b in ProcessUtility (pstmt=0x1deac48, queryString=0x1de9b30 \"alter table gtt1 add c2 int default nextval('seq');\", \n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1deaf28, qc=0x7ffd841f8830) at utility.c:522\n> #11 0x000000000093909d in PortalRunUtility (portal=0x1e4fba0, pstmt=0x1deac48, isTopLevel=true, setHoldSnapshot=false, dest=0x1deaf28, qc=0x7ffd841f8830)\n> at pquery.c:1157\n> #12 0x00000000009392b3 in PortalRunMulti (portal=0x1e4fba0, isTopLevel=true, setHoldSnapshot=false, dest=0x1deaf28, altdest=0x1deaf28, qc=0x7ffd841f8830)\n> at pquery.c:1303\n> #13 0x00000000009387d1 in PortalRun (portal=0x1e4fba0, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1deaf28, altdest=0x1deaf28, \n> qc=0x7ffd841f8830) at pquery.c:779\n> #14 0x000000000093298b in exec_simple_query (query_string=0x1de9b30 \"alter table gtt1 add c2 int default nextval('seq');\") at postgres.c:1239\n> #15 0x0000000000936997 in PostgresMain (argc=1, argv=0x1e13b80, dbname=0x1e13a78 \"postgres\", username=0x1e13a58 \"edb\") at postgres.c:4315\n> #16 0x00000000008868b3 in BackendRun (port=0x1e0bb50) at 
postmaster.c:4510\n> #17 0x00000000008860a8 in BackendStartup (port=0x1e0bb50) at postmaster.c:4202\n> #18 0x0000000000882626 in ServerLoop () at postmaster.c:1727\n> #19 0x0000000000881efd in PostmasterMain (argc=3, argv=0x1de4460) at postmaster.c:1400\n> #20 0x0000000000789288 in main (argc=3, argv=0x1de4460) at main.c:210\n> (gdb) \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Mon, 20 Apr 2020 17:09:24 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月17日 下午8:59,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi Wenjing,\n> \n> Please check below scenario, we are getting a server crash with \"ALTER TABLE\" add column with default value as sequence:\n> \n> -- Create gtt, exit and re-connect the psql prompt, create sequence, alter table add a column with sequence.\n> postgres=# create global temporary table gtt1 (c1 int);\n> CREATE TABLE\n> postgres=# \\q\n> [edb@localhost bin]$ ./psql postgres \n> psql (13devel)\n> Type \"help\" for help.\n> \n> postgres=# create sequence seq;\n> CREATE SEQUENCE\n> postgres=# alter table gtt1 add c2 int default nextval('seq');\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. 
Attempting reset: Failed.\n> !?> \\q\nPlease check my new patch.\n\n\n\nWenjing\n\n\n\n\n> \n> \n> -- Stack trace:\n> [edb@localhost bin]$ gdb -q -c data/core.70358 postgres \n> Reading symbols from /home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.\n> [New LWP 70358]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib64/libthread_db.so.1\".\n> Core was generated by `postgres: edb postgres [local] ALTER TABLE '.\n> Program terminated with signal 6, Aborted.\n> #0 0x00007f150223b337 in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: debuginfo-install glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64\n> (gdb) bt\n> #0 0x00007f150223b337 in raise () from /lib64/libc.so.6\n> #1 0x00007f150223ca28 in abort () from /lib64/libc.so.6\n> #2 0x0000000000ab2cdd in ExceptionalCondition (conditionName=0xc03ab8 \"OidIsValid(relfilenode1) && OidIsValid(relfilenode2)\", \n> errorType=0xc0371f \"FailedAssertion\", fileName=0xc03492 \"cluster.c\", lineNumber=1637) at assert.c:67\n> #3 0x000000000065e200 in gtt_swap_relation_files (r1=16384, r2=16390, target_is_pg_class=false, swap_toast_by_content=false, is_internal=true, \n> frozenXid=490, cutoffMulti=1, mapped_tables=0x7ffd841f7ee0) at cluster.c:1637\n> #4 0x000000000065dcd9 in finish_heap_swap (OIDOldHeap=16384, OIDNewHeap=16390, is_system_catalog=false, swap_toast_by_content=false, \n> check_constraints=true, is_internal=true, frozenXid=490, cutoffMulti=1, newrelpersistence=103 'g') at cluster.c:1395\n> #5 0x00000000006bca18 in ATRewriteTables (parsetree=0x1deab80, wqueue=0x7ffd841f80c8, lockmode=8, context=0x7ffd841f8260) at tablecmds.c:4991\n> #6 0x00000000006ba890 in ATController (parsetree=0x1deab80, rel=0x7f150378f330, cmds=0x1deab28, 
recurse=true, lockmode=8, context=0x7ffd841f8260)\n> at tablecmds.c:3991\n> #7 0x00000000006ba4f8 in AlterTable (stmt=0x1deab80, lockmode=8, context=0x7ffd841f8260) at tablecmds.c:3644\n> #8 0x000000000093b62a in ProcessUtilitySlow (pstate=0x1e0d6d0, pstmt=0x1deac48, \n> queryString=0x1de9b30 \"alter table gtt1 add c2 int default nextval('seq');\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1deaf28, \n> qc=0x7ffd841f8830) at utility.c:1267\n> #9 0x000000000093b141 in standard_ProcessUtility (pstmt=0x1deac48, queryString=0x1de9b30 \"alter table gtt1 add c2 int default nextval('seq');\", \n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1deaf28, qc=0x7ffd841f8830) at utility.c:1067\n> #10 0x000000000093a22b in ProcessUtility (pstmt=0x1deac48, queryString=0x1de9b30 \"alter table gtt1 add c2 int default nextval('seq');\", \n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1deaf28, qc=0x7ffd841f8830) at utility.c:522\n> #11 0x000000000093909d in PortalRunUtility (portal=0x1e4fba0, pstmt=0x1deac48, isTopLevel=true, setHoldSnapshot=false, dest=0x1deaf28, qc=0x7ffd841f8830)\n> at pquery.c:1157\n> #12 0x00000000009392b3 in PortalRunMulti (portal=0x1e4fba0, isTopLevel=true, setHoldSnapshot=false, dest=0x1deaf28, altdest=0x1deaf28, qc=0x7ffd841f8830)\n> at pquery.c:1303\n> #13 0x00000000009387d1 in PortalRun (portal=0x1e4fba0, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1deaf28, altdest=0x1deaf28, \n> qc=0x7ffd841f8830) at pquery.c:779\n> #14 0x000000000093298b in exec_simple_query (query_string=0x1de9b30 \"alter table gtt1 add c2 int default nextval('seq');\") at postgres.c:1239\n> #15 0x0000000000936997 in PostgresMain (argc=1, argv=0x1e13b80, dbname=0x1e13a78 \"postgres\", username=0x1e13a58 \"edb\") at postgres.c:4315\n> #16 0x00000000008868b3 in BackendRun (port=0x1e0bb50) at postmaster.c:4510\n> #17 0x00000000008860a8 in BackendStartup (port=0x1e0bb50) at postmaster.c:4202\n> #18 
0x0000000000882626 in ServerLoop () at postmaster.c:1727\n> #19 0x0000000000881efd in PostmasterMain (argc=3, argv=0x1de4460) at postmaster.c:1400\n> #20 0x0000000000789288 in main (argc=3, argv=0x1de4460) at main.c:210\n> (gdb) \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Mon, 20 Apr 2020 17:29:11 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月17日 下午7:26,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> On Fri, Apr 17, 2020 at 2:44 PM 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> \n> I improved the logic of the warning message so that when the gap between relfrozenxid of GTT is small,\n> it will no longer be alarmed message.\n> \n> Hi Wenjing,\n> Thanks for the patch(v26), I have verified the previous related issues, and are working fine now.\n> Please check the below scenario VACUUM from a non-super user.\n> \n> -- Create user \"test_gtt\", connect it , create gtt, VACUUM gtt and VACUUM / VACUUM FULL\n> postgres=# CREATE USER test_gtt;\n> CREATE ROLE\n> postgres=# \\c postgres test_gtt\n> You are now connected to database \"postgres\" as user \"test_gtt\".\n> postgres=> CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int);\n> CREATE TABLE\n> \n> -- VACUUM gtt is working fine, whereas we are getting huge WARNING for VACUUM / VACUUM FULL as below:\n> postgres=> VACUUM gtt1 ;\n> VACUUM\n> postgres=> VACUUM;\n> WARNING: skipping \"pg_statistic\" --- only superuser or database owner can vacuum it\n> WARNING: skipping \"pg_type\" --- only superuser or database owner can vacuum it\n> WARNING: skipping \"pg_toast_2600\" --- only table or database owner can vacuum it\n> WARNING: skipping \"pg_toast_2600_index\" --- only table or database owner can vacuum it\n> \n> ... ... \n> ... ... 
\n> \n> WARNING: skipping \"_pg_foreign_tables\" --- only table or database owner can vacuum it\n> WARNING: skipping \"foreign_table_options\" --- only table or database owner can vacuum it\n> WARNING: skipping \"user_mapping_options\" --- only table or database owner can vacuum it\n> WARNING: skipping \"user_mappings\" --- only table or database owner can vacuum it\n> VACUUM \nI think this is expected, and user test_gtt does not have permission to vacuum the system table.\nThis has nothing to do with GTT.\n\n\nWenjing\n\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Mon, 20 Apr 2020 17:31:43 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> I think this is expected, and user test_gtt does not have permission to\n> vacuum the system table.\n> This has nothing to do with GTT.\n>\n> Hi Wenjing, Thanks for the explanation.\nThanks for the new patch. I have verified the crash, Now its resolved.\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Apr 2020 15:48:41 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 4/20/20 2:59 PM, 曾文旌 wrote:\n> Please check my new patch.\n\nThanks Wenjing. 
Please refer this below scenario , getting error -  \nERROR:  could not read block 0 in file \"base/16466/t4_16472\": read only \n0 of 8192 bytes\n\nSteps to reproduce\n\nConnect to psql terminal,create a table ( create global temp table t2 (n \nint primary key ) on commit delete rows;)\nexit from psql terminal and execute (./clusterdb -t t2 -d postgres -v)\nconnect to psql terminal and one by one execute these below sql statements\n(\ncluster verbose t2 using t2_pkey;\ncluster verbose t2 ;\nalter table t2 add column i int;\ncluster verbose t2 ;\ncluster verbose t2 using t2_pkey;\ncreate unique index ind on t2(n);\ncreate unique index concurrently  ind1 on t2(n);\nselect * from t2;\n)\nThis last SQL - will throw this error -  - ERROR:  could not read block \n0 in file \"base/16466/t4_16472\": read only 0 of 8192 bytes\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Mon, 20 Apr 2020 18:45:42 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月20日 下午9:15,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 4/20/20 2:59 PM, 曾文旌 wrote:\n>> Please check my new patch.\n> \n> Thanks Wenjing. 
Please refer this below scenario , getting error - ERROR: could not read block 0 in file \"base/16466/t4_16472\": read only 0 of 8192 bytes\n> \n> Steps to reproduce\n> \n> Connect to psql terminal,create a table ( create global temp table t2 (n int primary key ) on commit delete rows;)\n> exit from psql terminal and execute (./clusterdb -t t2 -d postgres -v)\n> connect to psql terminal and one by one execute these below sql statements\n> (\n> cluster verbose t2 using t2_pkey;\n> cluster verbose t2 ;\n> alter table t2 add column i int;\n> cluster verbose t2 ;\n> cluster verbose t2 using t2_pkey;\n> create unique index ind on t2(n);\n> create unique index concurrently ind1 on t2(n);\n> select * from t2;\n> )\n> This last SQL - will throw this error - - ERROR: could not read block 0 in file \"base/16466/t4_16472\": read only 0 of 8192 bytes\nFixed in global_temporary_table_v29-pg13.patch\nPlease check.\n\n\n\nWenjing\n\n\n\n\n\n\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company", "msg_date": "Tue, 21 Apr 2020 14:19:24 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月3日 下午4:38,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> pá 3. 4. 2020 v 9:52 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> In my opinion\n> 1 We are developing GTT according to the SQL standard, not Oracle.\n> \n> 2 The implementation differences you listed come from pg and oracle storage modules and DDL implementations.\n> \n> 2.1 issue 1 and issue 2\n> The creation of Normal table/GTT defines the catalog and initializes the data store file, in the case of the GTT, which initializes the store file for the current session. 
\n> But in oracle It just looks like only defines the catalog.\n> This causes other sessions can not drop the GTT in PostgreSQL.\n> This is the reason for issue 1 and issue 2, I think it is reasonable.\n> \n> 2.2 issue 3\n> I thinking the logic of drop GTT is\n> When only the current session is using the GTT, it is safe to drop the GTT. \n> because the GTT's definition and storage files can completely delete from db.\n> But, If multiple sessions are using this GTT, it is hard to drop GTT in session a, because remove the local buffer and data file of the GTT in other session is difficult.\n> I am not sure why oracle has this limitation.\n> So, issue 3 is reasonable.\n> \n> 2.3 TRUNCATE Normal table/GTT\n> TRUNCATE Normal table / GTT clean up the logical data but not unlink data store file. in the case of the GTT, which is the store file for the current session.\n> But in oracle, It just looks like data store file was cleaned up.\n> PostgreSQL storage is obviously different from oracle, In other words, session is detached from storage.\n> This is the reason for issue4 I think it is reasonable.\n> \n> Although the implementation of GTT is different, I think so TRUNCATE on Postgres (when it is really finalized) can remove session metadata of GTT too (and reduce usage's counter). It is not critical feature, but I think so it should not be hard to implement. From practical reason can be nice to have a tool how to refresh GTT without a necessity to close session. 
TRUNCATE can be this tool.\nYes, I think we need a way to delete the GTT local storage without closing the session.\n\nI provide the TRUNCATE tablename DROP to clear the data in the GTT and delete the storage files.\nThis feature requires the current transaction to commit immediately after it finishes truncate.\n\n\n\nWenjing\n\n\n\n> \n> Regards\n> \n> Pavel\n> \n> \n> All in all, I think the current implementation is sufficient for dba to manage GTT.\n> \n>> 2020年4月2日 下午4:45,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>> \n>> Hi All,\n>> \n>> I have noted down few behavioral difference in our GTT implementation in PG as compared to Oracle DB:\n>> As per my understanding, the behavior of DROP TABLE in case of \"Normal table and GTT\" in Oracle DB are as below:\n>> Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.\n>> For a completed transaction on a normal table having data, we will be able to DROP from another session. If the transaction is not yet complete, and we are trying to drop the table from another session, then we will get an error. (working as expected)\n>> For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.\n>> For a completed transaction on GTT with(on commit preserve rows) with data in a session, we will not be able to DROP from any session(not even from the session in which GTT is created), we need to truncate the table data first from all the session(session1, session2) which is having data.\n>> 1. 
Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.\n>> Session1:\n>> create table t1 (c1 integer);\n>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>> \n>> Session2:\n>> drop table t1;\n>> drop table gtt1;\n>> drop table gtt2;\n>> \n>> -- Issue 1: But we are able to drop a simple table and failed to drop GTT as below.\n>> postgres=# drop table t1;\n>> DROP TABLE\n>> postgres=# drop table gtt1;\n>> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n>> postgres=# drop table gtt2;\n>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>> \n>> 3. For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.\n>> Session1:\n>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>> \n>> Session2:\n>> drop table gtt1;\n>> \n>> -- Issue 2: But we are getting error for GTT with(on_commit_delete_rows) without data.\n>> postgres=# drop table gtt1;\n>> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n>> \n>> 4. 
For a completed transaction on GTT with(on commit preserve rows) with data in any session, we will not be able to DROP from any session(not even from the session in which GTT is created)\n>> \n>> Case1:\n>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>> insert into gtt2 values(100);\n>> drop table gtt2;\n>> \n>> SQL> drop table gtt2;\n>> drop table gtt2\n>> *\n>> ERROR at line 1:\n>> ORA-14452: attempt to create, alter or drop an index on temporary table already in use\n>> \n>> -- Issue 3: But, we are able to drop the GTT(having data) which we have created in the same session.\n>> postgres=# drop table gtt2;\n>> DROP TABLE\n>> \n>> Case2: GTT with(on commit preserve rows) having data in both session1 and session2\n>> Session1:\n>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>> insert into gtt2 values(100);\n>> \n>> Session2:\n>> insert into gtt2 values(200);\n>> \n>> -- If we try to drop the table from any session we should get an error, it is working fine.\n>> drop table gtt2;\n>> SQL> drop table gtt2;\n>> drop table gtt2\n>> *\n>> ERROR at line 1:\n>> ORA-14452: attempt to create, alter or drop an index on temporary table already in use\n>> \n>> postgres=# drop table gtt2 ;\n>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>> \n>> -- To drop the table gtt2 from any session1/session2, we need to truncate the table data first from all the session(session1, session2) which is having data.\n>> Session1:\n>> truncate table gtt2;\n>> -- Session2:\n>> truncate table gtt2;\n>> \n>> Session 2:\n>> SQL> drop table gtt2;\n>> \n>> Table dropped.\n>> \n>> -- Issue 4: But we are not able to drop the GTT, even after TRUNCATE the table in all the sessions.\n>> -- truncate from all sessions where GTT have data.\n>> postgres=# truncate gtt2 ;\n>> TRUNCATE TABLE\n>> \n>> -- try to DROP GTT still, we are getting error.\n>> postgres=# drop table gtt2 ;\n>> ERROR: can not drop relation 
gtt2 when other backend attached this global temp table\n>> \n>> To drop the GTT from any session, we need to exit from all other sessions.\n>> postgres=# drop table gtt2 ;\n>> DROP TABLE\n>> \n>> Kindly let me know if I am missing something.\n>> \n>> \n>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> wrote:\n>> Hi Wenjing,\n>> I hope we need to change the below error message.\n>> \n>> postgres=# create global temporary table gtt(c1 int) on commit preserve rows;\n>> CREATE TABLE\n>> \n>> postgres=# create materialized view mvw as select * from gtt;\n>> ERROR: materialized views must not use global temporary tables or views\n>> \n>> Anyways we are not allowed to create a \"global temporary view\", \n>> so the above ERROR message should change(i.e. \" or view\" need to be removed from the error message) something like:\n>> \"ERROR: materialized views must not use global temporary tables\"\n>> \n>> -- \n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> \n>> \n>> -- \n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>", "msg_date": "Wed, 22 Apr 2020 17:19:04 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Apr 22, 2020 at 2:49 PM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n> Although the implementation of GTT is different, I think so TRUNCATE on\n> Postgres (when it is really finalized) can remove session metadata of GTT\n> too (and reduce usage's counter). It is not critical feature, but I think\n> so it should not be hard to implement. 
From practical reason can be nice to\n> have a tool how to refresh GTT without a necessity to close session.\n> TRUNCATE can be this tool.\n>\n> Yes, I think we need a way to delete the GTT local storage without closing\n> the session.\n>\n> I provide the TRUNCATE tablename DROP to clear the data in the GTT and\n> delete the storage files.\n> This feature requires the current transaction to commit immediately after\n> it finishes truncate.\n>\n\nHi Wenjing,\nThanks for the patch(v30) for the new syntax support for (TRUNCATE\ntable_name DROP) for deleting storage files after TRUNCATE on GTT.\n\nPlease check below scenarios:\n\n\n*Case1:*-- session1:\npostgres=# create global temporary table gtt2 (c1 integer) on commit\npreserve rows;\nCREATE TABLE\npostgres=# create index idx1 on gtt2 (c1);\nCREATE INDEX\npostgres=# create index idx2 on gtt2 (c1) where c1%2 =0;\nCREATE INDEX\npostgres=#\npostgres=# CLUSTER gtt2 USING idx1;\nCLUSTER\npostgres=# CLUSTER gtt2 USING idx2;\nERROR: cannot cluster on partial index \"idx2\"\n\n\n*Case2:*-- Session2:\npostgres=# CLUSTER gtt2 USING idx1;\nCLUSTER\npostgres=# CLUSTER gtt2 USING idx2;\nCLUSTER\n\npostgres=# insert into gtt2 values(1);\nINSERT 0 1\npostgres=# CLUSTER gtt2 USING idx1;\nCLUSTER\npostgres=# CLUSTER gtt2 USING idx2;\nERROR: cannot cluster on partial index \"idx2\"\n\n\n*Case3:*-- Session2:\npostgres=# TRUNCATE gtt2 DROP;\nTRUNCATE TABLE\npostgres=# CLUSTER gtt2 USING idx1;\nCLUSTER\npostgres=# CLUSTER gtt2 USING idx2;\nCLUSTER\n\nIn Case2, Case3 we can observe, with the absence of data in GTT, we are\nable to \"CLUSTER gtt2 USING idx2;\" (having partial index)\nBut why does the same query fail for Case1 (absence of data)?\n\nThanks,\nPrabhat Sahu\n\n\n\n>\n>\n> Wenjing\n>\n>\n>\n> Regards\n>\n> Pavel\n>\n>\n>> All in all, I think the current implementation is sufficient for dba to\n>> manage GTT.\n>>\n>> 2020年4月2日 下午4:45,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>>\n>> Hi All,\n>>\n>> I have noted down few 
behavioral difference in our GTT implementation in\n>> PG as compared to Oracle DB:\n>> As per my understanding, the behavior of DROP TABLE in case of \"Normal\n>> table and GTT\" in Oracle DB are as below:\n>>\n>> 1. Any tables(Normal table / GTT) without having data in a session,\n>> we will be able to DROP from another session.\n>> 2. For a completed transaction on a normal table having data, we will\n>> be able to DROP from another session. If the transaction is not yet\n>> complete, and we are trying to drop the table from another session, then we\n>> will get an error. (working as expected)\n>> 3. For a completed transaction on GTT with(on commit delete rows)\n>> (i.e. no data in GTT) in a session, we will be able to DROP from another\n>> session.\n>> 4. For a completed transaction on GTT with(on commit preserve rows)\n>> with data in a session, we will not be able to DROP from any session(not\n>> even from the session in which GTT is created), we need to truncate the\n>> table data first from all the session(session1, session2) which is having\n>> data.\n>>\n>> *1. Any tables(Normal table / GTT) without having data in a session, we\n>> will be able to DROP from another session.*\n>> *Session1:*\n>> create table t1 (c1 integer);\n>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>\n>> *Session2:*\n>> drop table t1;\n>> drop table gtt1;\n>> drop table gtt2;\n>>\n>> -- *Issue 1:* But we are able to drop a simple table and failed to drop\n>> GTT as below.\n>>\n>> postgres=# drop table t1;\n>> DROP TABLE\n>> postgres=# drop table gtt1;\n>> ERROR: can not drop relation gtt1 when other backend attached this\n>> global temp table\n>> postgres=# drop table gtt2;\n>> ERROR: can not drop relation gtt2 when other backend attached this\n>> global temp table\n>>\n>>\n>> *3. 
For a completed transaction on GTT with(on commit delete rows) (i.e.\n>> no data in GTT) in a session, we will be able to DROP from another session.*\n>>\n>> *Session1:*create global temporary table gtt1 (c1 integer) on commit\n>> delete rows;\n>>\n>> *Session2:*\n>> drop table gtt1;\n>>\n>> -- *Issue 2:* But we are getting error for GTT\n>> with(on_commit_delete_rows) without data.\n>>\n>> postgres=# drop table gtt1;\n>> ERROR: can not drop relation gtt1 when other backend attached this\n>> global temp table\n>>\n>>\n>> *4. For a completed transaction on GTT with(on commit preserve rows) with\n>> data in any session, we will not be able to DROP from any session(not even\n>> from the session in which GTT is created)*\n>>\n>> *Case1:*\n>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>> insert into gtt2 values(100);\n>> drop table gtt2;\n>>\n>> SQL> drop table gtt2;\n>> drop table gtt2\n>> *\n>> ERROR at line 1:\n>> ORA-14452: attempt to create, alter or drop an index on temporary table\n>> already in use\n>>\n>> -- *Issue 3:* But, we are able to drop the GTT(having data) which we\n>> have created in the same session.\n>>\n>> postgres=# drop table gtt2;\n>> DROP TABLE\n>>\n>>\n>>\n>>\n>> *Case2: GTT with(on commit preserve rows) having data in both session1\n>> and session2Session1:*create global temporary table gtt2 (c1 integer) on\n>> commit preserve rows;\n>> insert into gtt2 values(100);\n>>\n>>\n>> *Session2:*insert into gtt2 values(200);\n>>\n>> -- If we try to drop the table from any session we should get an error,\n>> it is working fine.\n>> drop table gtt2;\n>>\n>> SQL> drop table gtt2;\n>> drop table gtt2\n>> *\n>> ERROR at line 1:\n>> ORA-14452: attempt to create, alter or drop an index on temporary table\n>> already in use\n>>\n>> postgres=# drop table gtt2 ;\n>> ERROR: can not drop relation gtt2 when other backend attached this\n>> global temp table\n>>\n>>\n>> -- To drop the table gtt2 from any session1/session2, we need to 
truncate\n>> the table data first from all the session(session1, session2) which is\n>> having data.\n>> *Session1:*\n>> truncate table gtt2;\n>> -- Session2:\n>> truncate table gtt2;\n>>\n>> *Session 2:*\n>> SQL> drop table gtt2;\n>>\n>> Table dropped.\n>>\n>> -- *Issue 4:* But we are not able to drop the GTT, even after TRUNCATE\n>> the table in all the sessions.\n>> -- truncate from all sessions where GTT have data.\n>> postgres=# truncate gtt2 ;\n>> TRUNCATE TABLE\n>>\n>> -- *try to DROP GTT still, we are getting error.*\n>>\n>> postgres=# drop table gtt2 ;\n>> ERROR: can not drop relation gtt2 when other backend attached this\n>> global temp table\n>>\n>>\n>> To drop the GTT from any session, we need to exit from all other sessions.\n>> postgres=# drop table gtt2 ;\n>> DROP TABLE\n>>\n>> Kindly let me know if I am missing something.\n>>\n>>\n>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <\n>> prabhat.sahu@enterprisedb.com> wrote:\n>>\n>>> Hi Wenjing,\n>>> I hope we need to change the below error message.\n>>>\n>>> postgres=# create global temporary table gtt(c1 int) on commit preserve\n>>> rows;\n>>> CREATE TABLE\n>>>\n>>> postgres=# create materialized view mvw as select * from gtt;\n>>> ERROR: materialized views must not use global temporary tables* or\n>>> views*\n>>>\n>>> Anyways we are not allowed to create a \"global temporary view\",\n>>> so the above ERROR message should change(i.e. 
*\" or view\"* need to be\n>>> removed from the error message) something like:\n>>> *\"ERROR: materialized views must not use global temporary tables\"*\n>>>\n>>> --\n>>>\n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>>\n>>\n>>\n>> --\n>>\n
>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>>\n>>\n>\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 22 Apr 2020 20:08:11 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "st 22. 4. 2020 v 16:38 odesílatel Prabhat Sahu <\nprabhat.sahu@enterprisedb.com> napsal:\n\n>\n>\n
> On Wed, Apr 22, 2020 at 2:49 PM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n>\n>>\n>> Although the implementation of GTT is different, I think so TRUNCATE on\n>> Postgres (when it is really finalized) can remove session metadata of GTT\n>> too (and reduce usage's counter). It is not critical feature, but I think\n>> so it should not be hard to implement. 
From practical reason can be nice to\n>> have a tool how to refresh GTT without a necessity to close session.\n>> TRUNCATE can be this tool.\n>>\n>> Yes, I think we need a way to delete the GTT local storage without\n>> closing the session.\n>>\n>> I provide the TRUNCATE tablename DROP to clear the data in the GTT and\n>> delete the storage files.\n>> This feature requires the current transaction to commit immediately after\n>> it finishes truncate.\n>>\n>\n> Hi Wenjing,\n> Thanks for the patch(v30) for the new syntax support for (TRUNCATE\n> table_name DROP) for deleting storage files after TRUNCATE on GTT.\n>\n\n
This syntax looks strange, and I don't think it solves anything in\npractical life, because without a lock the table will be in use again\nwithin a few seconds by other sessions.\n\nThis is the same topic we talked about for ALTER - when and where the\nchanges should be applied.\n\nThe CLUSTER command works only on session-private data, so it should not\nneed any special lock or special cleanup beforehand.\n\nRegards\n\nPavel\n\n\n>\n
> Please check below scenarios:\n>\n>\n> *Case1:*-- session1:\n> postgres=# create global temporary table gtt2 (c1 integer) on commit\n> preserve rows;\n> CREATE TABLE\n> postgres=# create index idx1 on gtt2 (c1);\n> CREATE INDEX\n> postgres=# create index idx2 on gtt2 (c1) where c1%2 =0;\n> CREATE INDEX\n> postgres=#\n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 USING idx2;\n> ERROR: cannot cluster on partial index \"idx2\"\n>\n>\n
> *Case2:*-- Session2:\n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 USING idx2;\n> CLUSTER\n>\n> postgres=# insert into gtt2 values(1);\n> INSERT 0 1\n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 
USING idx2;\n> CLUSTER\n>\n> In Case2, Case3 we can observe, with the absence of data in GTT, we are\n> able to \"CLUSTER gtt2 USING idx2;\" (having partial index)\n> But why does the same query fail for Case1 (absence of data)?\n>\n> Thanks,\n> Prabhat Sahu\n>\n>\n>\n>>\n>>\n>> Wenjing\n>>\n>>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>> All in all, I think the current implementation is sufficient for dba to\n>>> manage GTT.\n>>>\n>>> 2020年4月2日 下午4:45,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>>>\n>>> Hi All,\n>>>\n>>> I have noted down few behavioral difference in our GTT implementation in\n>>> PG as compared to Oracle DB:\n>>> As per my understanding, the behavior of DROP TABLE in case of \"Normal\n>>> table and GTT\" in Oracle DB are as below:\n>>>\n>>> 1. Any tables(Normal table / GTT) without having data in a session,\n>>> we will be able to DROP from another session.\n>>> 2. For a completed transaction on a normal table having data, we\n>>> will be able to DROP from another session. If the transaction is not yet\n>>> complete, and we are trying to drop the table from another session, then we\n>>> will get an error. (working as expected)\n>>> 3. For a completed transaction on GTT with(on commit delete rows)\n>>> (i.e. no data in GTT) in a session, we will be able to DROP from another\n>>> session.\n>>> 4. For a completed transaction on GTT with(on commit preserve rows)\n>>> with data in a session, we will not be able to DROP from any session(not\n>>> even from the session in which GTT is created), we need to truncate the\n>>> table data first from all the session(session1, session2) which is having\n>>> data.\n>>>\n>>> *1. 
Any tables(Normal table / GTT) without having data in a session, we\n>>> will be able to DROP from another session.*\n>>> *Session1:*\n>>> create table t1 (c1 integer);\n>>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>>\n>>> *Session2:*\n>>> drop table t1;\n>>> drop table gtt1;\n>>> drop table gtt2;\n>>>\n>>> -- *Issue 1:* But we are able to drop a simple table and failed to drop\n>>> GTT as below.\n>>>\n>>> postgres=# drop table t1;\n>>> DROP TABLE\n>>> postgres=# drop table gtt1;\n>>> ERROR: can not drop relation gtt1 when other backend attached this\n>>> global temp table\n>>> postgres=# drop table gtt2;\n>>> ERROR: can not drop relation gtt2 when other backend attached this\n>>> global temp table\n>>>\n>>>\n>>> *3. For a completed transaction on GTT with(on commit delete rows) (i.e.\n>>> no data in GTT) in a session, we will be able to DROP from another session.*\n>>>\n>>> *Session1:*create global temporary table gtt1 (c1 integer) on commit\n>>> delete rows;\n>>>\n>>> *Session2:*\n>>> drop table gtt1;\n>>>\n>>> -- *Issue 2:* But we are getting error for GTT\n>>> with(on_commit_delete_rows) without data.\n>>>\n>>> postgres=# drop table gtt1;\n>>> ERROR: can not drop relation gtt1 when other backend attached this\n>>> global temp table\n>>>\n>>>\n>>> *4. 
For a completed transaction on GTT with(on commit preserve\n>>> rows) with data in any session, we will not be able to DROP from any\n>>> session(not even from the session in which GTT is created)*\n>>>\n>>> *Case1:*\n>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>> insert into gtt2 values(100);\n>>> drop table gtt2;\n>>>\n>>> SQL> drop table gtt2;\n>>> drop table gtt2\n>>> *\n>>> ERROR at line 1:\n>>> ORA-14452: attempt to create, alter or drop an index on temporary table\n>>> already in use\n>>>\n>>> -- *Issue 3:* But, we are able to drop the GTT(having data) which we\n>>> have created in the same session.\n>>>\n>>> postgres=# drop table gtt2;\n>>> DROP TABLE\n>>>\n>>>\n>>>\n>>>\n>>> *Case2: GTT with(on commit preserve rows) having data in both session1\n>>> and session2Session1:*create global temporary table gtt2 (c1 integer)\n>>> on commit preserve rows;\n>>> insert into gtt2 values(100);\n>>>\n>>>\n>>> *Session2:*insert into gtt2 values(200);\n>>>\n>>> -- If we try to drop the table from any session we should get an error,\n>>> it is working fine.\n>>> drop table gtt2;\n>>>\n>>> SQL> drop table gtt2;\n>>> drop table gtt2\n>>> *\n>>> ERROR at line 1:\n>>> ORA-14452: attempt to create, alter or drop an index on temporary table\n>>> already in use\n>>>\n>>> postgres=# drop table gtt2 ;\n>>> ERROR: can not drop relation gtt2 when other backend attached this\n>>> global temp table\n>>>\n>>>\n>>> -- To drop the table gtt2 from any session1/session2, we need to\n>>> truncate the table data first from all the session(session1, session2)\n>>> which is having data.\n>>> *Session1:*\n>>> truncate table gtt2;\n>>> -- Session2:\n>>> truncate table gtt2;\n>>>\n>>> *Session 2:*\n>>> SQL> drop table gtt2;\n>>>\n>>> Table dropped.\n>>>\n>>> -- *Issue 4:* But we are not able to drop the GTT, even after TRUNCATE\n>>> the table in all the sessions.\n>>> -- truncate from all sessions where GTT have data.\n>>> postgres=# truncate gtt2 ;\n>>> 
TRUNCATE TABLE\n>>>\n>>> -- *try to DROP GTT still, we are getting error.*\n>>>\n>>> postgres=# drop table gtt2 ;\n>>> ERROR: can not drop relation gtt2 when other backend attached this\n>>> global temp table\n>>>\n>>>\n>>> To drop the GTT from any session, we need to exit from all other\n>>> sessions.\n>>> postgres=# drop table gtt2 ;\n>>> DROP TABLE\n>>>\n>>> Kindly let me know if I am missing something.\n>>>\n>>>\n
>>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <\n>>> prabhat.sahu@enterprisedb.com> wrote:\n>>>\n>>>> Hi Wenjing,\n>>>> I hope we need to change the below error message.\n>>>>\n>>>> postgres=# create global temporary table gtt(c1 int) on commit preserve\n>>>> rows;\n>>>> CREATE TABLE\n>>>>\n>>>> postgres=# create materialized view mvw as select * from gtt;\n>>>> ERROR: materialized views must not use global temporary tables* or\n>>>> views*\n>>>>\n
>>>> Anyways we are not allowed to create a \"global temporary view\",\n>>>> so the above ERROR message should change(i.e. *\" or view\"* need to be\n>>>> removed from the error message) something like:\n>>>> *\"ERROR: materialized views must not use global temporary tables\"*\n>>>>\n>>>> --\n>>>>\n>>>> With Regards,\n>>>> Prabhat Kumar Sahu\n>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>>\n>>>\n>>>\n>>> --\n>>>\n
>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>>\n>>>\n>>>\n>>\n>\n> --\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Wed, 22 Apr 2020 16:50:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月22日 下午10:38,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> \n> \n
> On Wed, Apr 22, 2020 at 2:49 PM 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>> \n>> Although the implementation of GTT is different, I think so TRUNCATE on Postgres (when it is really finalized) can remove session metadata of GTT too (and reduce usage's counter). It is not critical feature, but I think so it should not be hard to implement. 
From practical reason can be nice to have a tool how to refresh GTT without a necessity to close session. TRUNCATE can be this tool.\n> Yes, I think we need a way to delete the GTT local storage without closing the session.\n> \n> I provide the TRUNCATE tablename DROP to clear the data in the GTT and delete the storage files.\n> This feature requires the current transaction to commit immediately after it finishes truncate.\n> \n> Hi Wenjing,\n> Thanks for the patch(v30) for the new syntax support for (TRUNCATE table_name DROP) for deleting storage files after TRUNCATE on GTT.\n> \n
> Please check below scenarios:\n> \n> Case1:\n> -- session1:\n> postgres=# create global temporary table gtt2 (c1 integer) on commit preserve rows;\n> CREATE TABLE\n> postgres=# create index idx1 on gtt2 (c1);\n> CREATE INDEX\n> postgres=# create index idx2 on gtt2 (c1) where c1%2 =0;\n> CREATE INDEX\n> postgres=# \n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 USING idx2;\n> ERROR: cannot cluster on partial index \"idx2\" \n> \n
> Case2:\n> -- Session2:\n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 USING idx2;\n> CLUSTER\n> \n> postgres=# insert into gtt2 values(1);\n> INSERT 0 1\n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 USING idx2;\n> ERROR: cannot cluster on partial index \"idx2\"\n> \n> Case3:\n> -- Session2:\n> postgres=# TRUNCATE gtt2 DROP;\n> TRUNCATE TABLE\n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 USING idx2;\n> CLUSTER\n> \n
> In Case2, Case3 we can observe, with the absence of data in GTT, we are able to \"CLUSTER gtt2 USING idx2;\" (having partial index)\n> But why does the same query fail for Case1 (absence of data)?\nThis is expected. Because TRUNCATE gtt2 DROP deleted the local storage file, CLUSTER found that there were no local files and ended the process.\n\n\nWenjing\n\n\n> \n> Thanks,\n> Prabhat Sahu\n> \n> \n> \n> \n> Wenjing\n> 
\n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> All in all, I think the current implementation is sufficient for dba to manage GTT.\n>> \n>>> 2020年4月2日 下午4:45,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>> \n>>> Hi All,\n>>> \n>>> I have noted down few behavioral difference in our GTT implementation in PG as compared to Oracle DB:\n>>> As per my understanding, the behavior of DROP TABLE in case of \"Normal table and GTT\" in Oracle DB are as below:\n>>> Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.\n>>> For a completed transaction on a normal table having data, we will be able to DROP from another session. If the transaction is not yet complete, and we are trying to drop the table from another session, then we will get an error. (working as expected)\n>>> For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.\n>>> For a completed transaction on GTT with(on commit preserve rows) with data in a session, we will not be able to DROP from any session(not even from the session in which GTT is created), we need to truncate the table data first from all the session(session1, session2) which is having data.\n>>> 1. 
Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.\n>>> Session1:\n>>> create table t1 (c1 integer);\n>>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>> \n>>> Session2:\n>>> drop table t1;\n>>> drop table gtt1;\n>>> drop table gtt2;\n>>> \n>>> -- Issue 1: But we are able to drop a simple table and failed to drop GTT as below.\n>>> postgres=# drop table t1;\n>>> DROP TABLE\n>>> postgres=# drop table gtt1;\n>>> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n>>> postgres=# drop table gtt2;\n>>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>>> \n>>> 3. For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.\n>>> Session1:\n>>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>>> \n>>> Session2:\n>>> drop table gtt1;\n>>> \n>>> -- Issue 2: But we are getting error for GTT with(on_commit_delete_rows) without data.\n>>> postgres=# drop table gtt1;\n>>> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n>>> \n>>> 4. 
For a completed transaction on GTT with(on commit preserve rows) with data in any session, we will not be able to DROP from any session(not even from the session in which GTT is created)\n>>> \n>>> Case1:\n>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>> insert into gtt2 values(100);\n>>> drop table gtt2;\n>>> \n>>> SQL> drop table gtt2;\n>>> drop table gtt2\n>>> *\n>>> ERROR at line 1:\n>>> ORA-14452: attempt to create, alter or drop an index on temporary table already in use\n>>> \n>>> -- Issue 3: But, we are able to drop the GTT(having data) which we have created in the same session.\n>>> postgres=# drop table gtt2;\n>>> DROP TABLE\n>>> \n>>> Case2: GTT with(on commit preserve rows) having data in both session1 and session2\n>>> Session1:\n>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>> insert into gtt2 values(100);\n>>> \n>>> Session2:\n>>> insert into gtt2 values(200);\n>>> \n>>> -- If we try to drop the table from any session we should get an error, it is working fine.\n>>> drop table gtt2;\n>>> SQL> drop table gtt2;\n>>> drop table gtt2\n>>> *\n>>> ERROR at line 1:\n>>> ORA-14452: attempt to create, alter or drop an index on temporary table already in use\n>>> \n>>> postgres=# drop table gtt2 ;\n>>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>>> \n>>> -- To drop the table gtt2 from any session1/session2, we need to truncate the table data first from all the session(session1, session2) which is having data.\n>>> Session1:\n>>> truncate table gtt2;\n>>> -- Session2:\n>>> truncate table gtt2;\n>>> \n>>> Session 2:\n>>> SQL> drop table gtt2;\n>>> \n>>> Table dropped.\n>>> \n>>> -- Issue 4: But we are not able to drop the GTT, even after TRUNCATE the table in all the sessions.\n>>> -- truncate from all sessions where GTT have data.\n>>> postgres=# truncate gtt2 ;\n>>> TRUNCATE TABLE\n>>> \n>>> -- try to DROP GTT still, we are getting error.\n>>> 
postgres=# drop table gtt2 ;\n>>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>>> \n>>> To drop the GTT from any session, we need to exit from all other sessions.\n>>> postgres=# drop table gtt2 ;\n>>> DROP TABLE\n>>> \n>>> Kindly let me know if I am missing something.\n>>> \n>>> \n>>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> wrote:\n>>> Hi Wenjing,\n>>> I hope we need to change the below error message.\n>>> \n>>> postgres=# create global temporary table gtt(c1 int) on commit preserve rows;\n>>> CREATE TABLE\n>>> \n>>> postgres=# create materialized view mvw as select * from gtt;\n>>> ERROR: materialized views must not use global temporary tables or views\n>>> \n>>> Anyways we are not allowed to create a \"global temporary view\", \n>>> so the above ERROR message should change(i.e. \" or view\" need to be removed from the error message) something like:\n>>> \"ERROR: materialized views must not use global temporary tables\"\n>>> \n>>> -- \n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>>> \n>>> \n>>> -- \n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> \n> \n> \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Thu, 23 Apr 2020 14:21:43 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月22日 下午10:50,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> st 22. 4. 
2020 v 16:38 odesílatel Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> napsal:\n> \n> \n> On Wed, Apr 22, 2020 at 2:49 PM 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>> \n>> Although the implementation of GTT is different, I think so TRUNCATE on Postgres (when it is really finalized) can remove session metadata of GTT too (and reduce usage's counter). It is not critical feature, but I think so it should not be hard to implement. From practical reason can be nice to have a tool how to refresh GTT without a necessity to close session. TRUNCATE can be this tool.\nSorry, I don't quite understand what you mean, could you describe it in detail? \nIn my opinion the TRUNCATE GTT cannot clean up data in other sessions, especially clean up local buffers in other sessions.\n\n\n> Yes, I think we need a way to delete the GTT local storage without closing the session.\n> \n> I provide the TRUNCATE tablename DROP to clear the data in the GTT and delete the storage files.\n> This feature requires the current transaction to commit immediately after it finishes truncate.\n> \n> Hi Wenjing,\n> Thanks for the patch(v30) for the new syntax support for (TRUNCATE table_name DROP) for deleting storage files after TRUNCATE on GTT.\n> \n> This syntax looks strange, and I don't think so it solve anything in practical life, because without lock the table will be used in few seconds by other sessions.\n\nIf a dba wants to delete or modify a GTT, he can use locks to help him make the change.\n\npostgres=# begin;\nBEGIN\npostgres=*# LOCK TABLE gtt2 IN ACCESS EXCLUSIVE MODE;\npostgres=*# select * from pg_gtt_attached_pids ;\n\nKill session or let session do TRUNCATE tablename DROP\n\npostgres=*# drop table gtt2;\nDROP TABLE\npostgres=*# commit;\nCOMMIT\n\n> \n> This is same topic when we talked about ALTER - when and where the changes should be applied. 
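[Editor's note] The attach-and-refuse-DROP behavior behind the errors quoted in this thread can be sketched as a small toy model (Python here for brevity; the actual patch is C inside PostgreSQL, and every class and method name below is illustrative, not the patch's API):

```python
# Illustrative sketch of the GTT semantics discussed in this thread:
# storage is per-session, plain TRUNCATE clears only the calling
# session's rows but leaves it attached, and DROP is refused while any
# OTHER backend is still attached. Toy model only, not the patch's code.

class GlobalTempTable:
    def __init__(self, name):
        self.name = name
        self.storage = {}                  # session id -> local rows

    def insert(self, session, row):
        # First touch from a session attaches it (creates local storage).
        self.storage.setdefault(session, []).append(row)

    def truncate(self, session):
        # Plain TRUNCATE: empties the local rows but stays attached.
        self.storage[session] = []

    def truncate_drop(self, session):
        # The proposed 'TRUNCATE tablename DROP': also detaches,
        # i.e. deletes the session's local storage file.
        self.storage.pop(session, None)

    def drop(self, session):
        # DROP succeeds only when no other backend is attached.
        if set(self.storage) - {session}:
            raise RuntimeError(
                'can not drop relation %s when other backend attached '
                'this global temp table' % self.name)
        self.storage.clear()
```

Under this model, a plain TRUNCATE in another session keeps that backend attached, so DROP still fails (matching "Issue 4" quoted above) until the session exits or runs the TRUNCATE ... DROP variant.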
\n> \n> The CLUSTER commands works only on session private data, so it should not to need some special lock or some special cleaning before.\n> \n> Regards\n> \n> Pavel\n> \n> \n> Please check below scenarios:\n> \n> Case1:\n> -- session1:\n> postgres=# create global temporary table gtt2 (c1 integer) on commit preserve rows;\n> CREATE TABLE\n> postgres=# create index idx1 on gtt2 (c1);\n> CREATE INDEX\n> postgres=# create index idx2 on gtt2 (c1) where c1%2 =0;\n> CREATE INDEX\n> postgres=# \n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 USING idx2;\n> ERROR: cannot cluster on partial index \"idx2\" \n> \n> Case2:\n> -- Session2:\n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 USING idx2;\n> CLUSTER\n> \n> postgres=# insert into gtt2 values(1);\n> INSERT 0 1\n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 USING idx2;\n> ERROR: cannot cluster on partial index \"idx2\"\n> \n> Case3:\n> -- Session2:\n> postgres=# TRUNCATE gtt2 DROP;\n> TRUNCATE TABLE\n> postgres=# CLUSTER gtt2 USING idx1;\n> CLUSTER\n> postgres=# CLUSTER gtt2 USING idx2;\n> CLUSTER\n> \n> In Case2, Case3 we can observe, with the absence of data in GTT, we are able to \"CLUSTER gtt2 USING idx2;\" (having partial index)\n> But why does the same query fail for Case1 (absence of data)?\n> \n> Thanks,\n> Prabhat Sahu\n> \n> \n> \n> \n> Wenjing\n> \n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> All in all, I think the current implementation is sufficient for dba to manage GTT.\n>> \n>>> 2020年4月2日 下午4:45,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>> \n>>> Hi All,\n>>> \n>>> I have noted down few behavioral difference in our GTT implementation in PG as compared to Oracle DB:\n>>> As per my understanding, the behavior of DROP TABLE in case of \"Normal table and GTT\" in Oracle DB are as below:\n>>> Any tables(Normal table / GTT) without having data in a session, we will be 
able to DROP from another session.\n>>> For a completed transaction on a normal table having data, we will be able to DROP from another session. If the transaction is not yet complete, and we are trying to drop the table from another session, then we will get an error. (working as expected)\n>>> For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.\n>>> For a completed transaction on GTT with(on commit preserve rows) with data in a session, we will not be able to DROP from any session(not even from the session in which GTT is created), we need to truncate the table data first from all the session(session1, session2) which is having data.\n>>> 1. Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.\n>>> Session1:\n>>> create table t1 (c1 integer);\n>>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>> \n>>> Session2:\n>>> drop table t1;\n>>> drop table gtt1;\n>>> drop table gtt2;\n>>> \n>>> -- Issue 1: But we are able to drop a simple table and failed to drop GTT as below.\n>>> postgres=# drop table t1;\n>>> DROP TABLE\n>>> postgres=# drop table gtt1;\n>>> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n>>> postgres=# drop table gtt2;\n>>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>>> \n>>> 3. For a completed transaction on GTT with(on commit delete rows) (i.e. 
no data in GTT) in a session, we will be able to DROP from another session.\n>>> Session1:\n>>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>>> \n>>> Session2:\n>>> drop table gtt1;\n>>> \n>>> -- Issue 2: But we are getting error for GTT with(on_commit_delete_rows) without data.\n>>> postgres=# drop table gtt1;\n>>> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n>>> \n>>> 4. For a completed transaction on GTT with(on commit preserve rows) with data in any session, we will not be able to DROP from any session(not even from the session in which GTT is created)\n>>> \n>>> Case1:\n>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>> insert into gtt2 values(100);\n>>> drop table gtt2;\n>>> \n>>> SQL> drop table gtt2;\n>>> drop table gtt2\n>>> *\n>>> ERROR at line 1:\n>>> ORA-14452: attempt to create, alter or drop an index on temporary table already in use\n>>> \n>>> -- Issue 3: But, we are able to drop the GTT(having data) which we have created in the same session.\n>>> postgres=# drop table gtt2;\n>>> DROP TABLE\n>>> \n>>> Case2: GTT with(on commit preserve rows) having data in both session1 and session2\n>>> Session1:\n>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>> insert into gtt2 values(100);\n>>> \n>>> Session2:\n>>> insert into gtt2 values(200);\n>>> \n>>> -- If we try to drop the table from any session we should get an error, it is working fine.\n>>> drop table gtt2;\n>>> SQL> drop table gtt2;\n>>> drop table gtt2\n>>> *\n>>> ERROR at line 1:\n>>> ORA-14452: attempt to create, alter or drop an index on temporary table already in use\n>>> \n>>> postgres=# drop table gtt2 ;\n>>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>>> \n>>> -- To drop the table gtt2 from any session1/session2, we need to truncate the table data first from all the session(session1, session2) which is having data.\n>>> 
Session1:\n>>> truncate table gtt2;\n>>> -- Session2:\n>>> truncate table gtt2;\n>>> \n>>> Session 2:\n>>> SQL> drop table gtt2;\n>>> \n>>> Table dropped.\n>>> \n>>> -- Issue 4: But we are not able to drop the GTT, even after TRUNCATE the table in all the sessions.\n>>> -- truncate from all sessions where GTT have data.\n>>> postgres=# truncate gtt2 ;\n>>> TRUNCATE TABLE\n>>> \n>>> -- try to DROP GTT still, we are getting error.\n>>> postgres=# drop table gtt2 ;\n>>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>>> \n>>> To drop the GTT from any session, we need to exit from all other sessions.\n>>> postgres=# drop table gtt2 ;\n>>> DROP TABLE\n>>> \n>>> Kindly let me know if I am missing something.\n>>> \n>>> \n>>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> wrote:\n>>> Hi Wenjing,\n>>> I hope we need to change the below error message.\n>>> \n>>> postgres=# create global temporary table gtt(c1 int) on commit preserve rows;\n>>> CREATE TABLE\n>>> \n>>> postgres=# create materialized view mvw as select * from gtt;\n>>> ERROR: materialized views must not use global temporary tables or views\n>>> \n>>> Anyways we are not allowed to create a \"global temporary view\", \n>>> so the above ERROR message should change(i.e. 
\" or view\" need to be removed from the error message) something like:\n>>> \"ERROR: materialized views must not use global temporary tables\"\n>>> \n>>> -- \n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>>> \n>>> \n>>> -- \n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> \n> \n> \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Thu, 23 Apr 2020 15:10:31 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "čt 23. 4. 2020 v 9:10 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:\n\n>\n>\n> 2020年4月22日 下午10:50,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n>\n>\n> st 22. 4. 2020 v 16:38 odesílatel Prabhat Sahu <\n> prabhat.sahu@enterprisedb.com> napsal:\n>\n>>\n>>\n>> On Wed, Apr 22, 2020 at 2:49 PM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n>>\n>>>\n>>> Although the implementation of GTT is different, I think so TRUNCATE on\n>>> Postgres (when it is really finalized) can remove session metadata of GTT\n>>> too (and reduce usage's counter). It is not critical feature, but I think\n>>> so it should not be hard to implement. 
From practical reason can be nice to\n>>> have a tool how to refresh GTT without a necessity to close session.\n>>> TRUNCATE can be this tool.\n>>>\n>>> Sorry, I don't quite understand what you mean, could you describe it in\n> detail?\n> In my opinion the TRUNCATE GTT cannot clean up data in other sessions,\n> especially clean up local buffers in other sessions.\n>\n\nIt is about a possibility to force reset GTT to default empty state for all\nsessions.\n\nMaybe it is some what does your TRUNCATE DROP, but I don't think so this\ndesign (TRUNCATE DROP) is good, because then user have to know\nimplementation detail.\n\nI prefer some like TRUNCATE tab WITH OPTION (GLOBAL, FORCE) - \"GLOBAL\" ..\napply on all sessions, FORCE try to do without waiting on some global lock,\ntry to do immediately with possibility to cancel some statements and\nrollback some session.\n\ninstead GLOBAL maybe we can use \"ALLSESSION\", or \"ALL SESSION\" or some else\n\nBut I like possible terminology LOCAL x GLOBAL for GTT. What I mean? Some\nstatements like \"TRUNCATE\" can works (by default) in \"local\" mode .. it\nhas impact to current session only. 
But sometimes can be executed in\n\"global\" mode with effect on all sessions.\n\n\n>\n> Yes, I think we need a way to delete the GTT local storage without closing\n>>> the session.\n>>>\n>>> I provide the TRUNCATE tablename DROP to clear the data in the GTT and\n>>> delete the storage files.\n>>> This feature requires the current transaction to commit immediately\n>>> after it finishes truncate.\n>>>\n>>\n>> Hi Wenjing,\n>> Thanks for the patch(v30) for the new syntax support for (TRUNCATE\n>> table_name DROP) for deleting storage files after TRUNCATE on GTT.\n>>\n>\n> This syntax looks strange, and I don't think so it solve anything in\n> practical life, because without lock the table will be used in few seconds\n> by other sessions.\n>\n>\n> If a dba wants to delete or modify a GTT, he can use locks to help him\n> make the change.\n>\n\n> postgres=# begin;\n> BEGIN\n> postgres=*# LOCK TABLE gtt2 IN ACCESS EXCLUSIVE MODE;\n> postgres=*# select * from pg_gtt_attached_pids ;\n>\n> Kill session or let session do TRUNCATE tablename DROP\n>\n> postgres=*# drop table gtt2;\n> DROP TABLE\n> postgres=*# commit;\n> COMMIT\n>\n\nyes, user can lock a tables. But I think so it is user friendly design. 
I\ndon't remember any statement in Postgres, where I have to use table locks\nexplicitly.\n\nFor builtin commands it should be done transparently (for user).\n\nRegards\n\nPavel\n\n\n>\n>\n> This is same topic when we talked about ALTER - when and where the changes\n> should be applied.\n>\n> The CLUSTER commands works only on session private data, so it should not\n> to need some special lock or some special cleaning before.\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Please check below scenarios:\n>>\n>>\n>> *Case1:*-- session1:\n>> postgres=# create global temporary table gtt2 (c1 integer) on commit\n>> preserve rows;\n>> CREATE TABLE\n>> postgres=# create index idx1 on gtt2 (c1);\n>> CREATE INDEX\n>> postgres=# create index idx2 on gtt2 (c1) where c1%2 =0;\n>> CREATE INDEX\n>> postgres=#\n>> postgres=# CLUSTER gtt2 USING idx1;\n>> CLUSTER\n>> postgres=# CLUSTER gtt2 USING idx2;\n>> ERROR: cannot cluster on partial index \"idx2\"\n>>\n>>\n>> *Case2:*-- Session2:\n>> postgres=# CLUSTER gtt2 USING idx1;\n>> CLUSTER\n>> postgres=# CLUSTER gtt2 USING idx2;\n>> CLUSTER\n>>\n>> postgres=# insert into gtt2 values(1);\n>> INSERT 0 1\n>> postgres=# CLUSTER gtt2 USING idx1;\n>> CLUSTER\n>> postgres=# CLUSTER gtt2 USING idx2;\n>> ERROR: cannot cluster on partial index \"idx2\"\n>>\n>>\n>> *Case3:*-- Session2:\n>> postgres=# TRUNCATE gtt2 DROP;\n>> TRUNCATE TABLE\n>> postgres=# CLUSTER gtt2 USING idx1;\n>> CLUSTER\n>> postgres=# CLUSTER gtt2 USING idx2;\n>> CLUSTER\n>>\n>> In Case2, Case3 we can observe, with the absence of data in GTT, we are\n>> able to \"CLUSTER gtt2 USING idx2;\" (having partial index)\n>> But why does the same query fail for Case1 (absence of data)?\n>>\n>> Thanks,\n>> Prabhat Sahu\n>>\n>>\n>>\n>>>\n>>>\n>>> Wenjing\n>>>\n>>>\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>>> All in all, I think the current implementation is sufficient for dba to\n>>>> manage GTT.\n>>>>\n>>>> 2020年4月2日 下午4:45,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>>>>\n>>>> Hi 
All,\n>>>>\n>>>> I have noted down few behavioral difference in our GTT implementation\n>>>> in PG as compared to Oracle DB:\n>>>> As per my understanding, the behavior of DROP TABLE in case of \"Normal\n>>>> table and GTT\" in Oracle DB are as below:\n>>>>\n>>>> 1. Any tables(Normal table / GTT) without having data in a session,\n>>>> we will be able to DROP from another session.\n>>>> 2. For a completed transaction on a normal table having data, we\n>>>> will be able to DROP from another session. If the transaction is not yet\n>>>> complete, and we are trying to drop the table from another session, then we\n>>>> will get an error. (working as expected)\n>>>> 3. For a completed transaction on GTT with(on commit delete rows)\n>>>> (i.e. no data in GTT) in a session, we will be able to DROP from another\n>>>> session.\n>>>> 4. For a completed transaction on GTT with(on commit preserve rows)\n>>>> with data in a session, we will not be able to DROP from any session(not\n>>>> even from the session in which GTT is created), we need to truncate the\n>>>> table data first from all the session(session1, session2) which is having\n>>>> data.\n>>>>\n>>>> *1. 
Any tables(Normal table / GTT) without having data in a session, we\n>>>> will be able to DROP from another session.*\n>>>> *Session1:*\n>>>> create table t1 (c1 integer);\n>>>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>>>\n>>>> *Session2:*\n>>>> drop table t1;\n>>>> drop table gtt1;\n>>>> drop table gtt2;\n>>>>\n>>>> -- *Issue 1:* But we are able to drop a simple table and failed to\n>>>> drop GTT as below.\n>>>>\n>>>> postgres=# drop table t1;\n>>>> DROP TABLE\n>>>> postgres=# drop table gtt1;\n>>>> ERROR: can not drop relation gtt1 when other backend attached this\n>>>> global temp table\n>>>> postgres=# drop table gtt2;\n>>>> ERROR: can not drop relation gtt2 when other backend attached this\n>>>> global temp table\n>>>>\n>>>>\n>>>> *3. For a completed transaction on GTT with(on commit delete rows)\n>>>> (i.e. no data in GTT) in a session, we will be able to DROP from another\n>>>> session.*\n>>>>\n>>>> *Session1:*create global temporary table gtt1 (c1 integer) on commit\n>>>> delete rows;\n>>>>\n>>>> *Session2:*\n>>>> drop table gtt1;\n>>>>\n>>>> -- *Issue 2:* But we are getting error for GTT\n>>>> with(on_commit_delete_rows) without data.\n>>>>\n>>>> postgres=# drop table gtt1;\n>>>> ERROR: can not drop relation gtt1 when other backend attached this\n>>>> global temp table\n>>>>\n>>>>\n>>>> *4. 
For a completed transaction on GTT with(on commit preserve\n>>>> rows) with data in any session, we will not be able to DROP from any\n>>>> session(not even from the session in which GTT is created)*\n>>>>\n>>>> *Case1:*\n>>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>>> insert into gtt2 values(100);\n>>>> drop table gtt2;\n>>>>\n>>>> SQL> drop table gtt2;\n>>>> drop table gtt2\n>>>> *\n>>>> ERROR at line 1:\n>>>> ORA-14452: attempt to create, alter or drop an index on temporary table\n>>>> already in use\n>>>>\n>>>> -- *Issue 3:* But, we are able to drop the GTT(having data) which we\n>>>> have created in the same session.\n>>>>\n>>>> postgres=# drop table gtt2;\n>>>> DROP TABLE\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> *Case2: GTT with(on commit preserve rows) having data in both session1\n>>>> and session2Session1:*create global temporary table gtt2 (c1 integer)\n>>>> on commit preserve rows;\n>>>> insert into gtt2 values(100);\n>>>>\n>>>>\n>>>> *Session2:*insert into gtt2 values(200);\n>>>>\n>>>> -- If we try to drop the table from any session we should get an error,\n>>>> it is working fine.\n>>>> drop table gtt2;\n>>>>\n>>>> SQL> drop table gtt2;\n>>>> drop table gtt2\n>>>> *\n>>>> ERROR at line 1:\n>>>> ORA-14452: attempt to create, alter or drop an index on temporary table\n>>>> already in use\n>>>>\n>>>> postgres=# drop table gtt2 ;\n>>>> ERROR: can not drop relation gtt2 when other backend attached this\n>>>> global temp table\n>>>>\n>>>>\n>>>> -- To drop the table gtt2 from any session1/session2, we need to\n>>>> truncate the table data first from all the session(session1, session2)\n>>>> which is having data.\n>>>> *Session1:*\n>>>> truncate table gtt2;\n>>>> -- Session2:\n>>>> truncate table gtt2;\n>>>>\n>>>> *Session 2:*\n>>>> SQL> drop table gtt2;\n>>>>\n>>>> Table dropped.\n>>>>\n>>>> -- *Issue 4:* But we are not able to drop the GTT, even after TRUNCATE\n>>>> the table in all the sessions.\n>>>> -- truncate from all sessions 
where GTT have data.\n>>>> postgres=# truncate gtt2 ;\n>>>> TRUNCATE TABLE\n>>>>\n>>>> -- *try to DROP GTT still, we are getting error.*\n>>>>\n>>>> postgres=# drop table gtt2 ;\n>>>> ERROR: can not drop relation gtt2 when other backend attached this\n>>>> global temp table\n>>>>\n>>>>\n>>>> To drop the GTT from any session, we need to exit from all other\n>>>> sessions.\n>>>> postgres=# drop table gtt2 ;\n>>>> DROP TABLE\n>>>>\n>>>> Kindly let me know if I am missing something.\n>>>>\n>>>>\n>>>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <\n>>>> prabhat.sahu@enterprisedb.com> wrote:\n>>>>\n>>>>> Hi Wenjing,\n>>>>> I hope we need to change the below error message.\n>>>>>\n>>>>> postgres=# create global temporary table gtt(c1 int) on commit\n>>>>> preserve rows;\n>>>>> CREATE TABLE\n>>>>>\n>>>>> postgres=# create materialized view mvw as select * from gtt;\n>>>>> ERROR: materialized views must not use global temporary tables* or\n>>>>> views*\n>>>>>\n>>>>> Anyways we are not allowed to create a \"global temporary view\",\n>>>>> so the above ERROR message should change(i.e. *\" or view\"* need to be\n>>>>> removed from the error message) something like:\n>>>>> *\"ERROR: materialized views must not use global temporary tables\"*\n>>>>>\n>>>>> --\n>>>>>\n>>>>> With Regards,\n>>>>> Prabhat Kumar Sahu\n>>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>>>\n>>>>\n>>>>\n>>>> --\n>>>>\n>>>> With Regards,\n>>>> Prabhat Kumar Sahu\n>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>>\n>>>>\n>>>>\n>>>\n>>\n>> --\n>>\n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>\n>\n", "msg_date": "Thu, 23 Apr 2020 09:43:53 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\n\nPlease check, the server getting crash with the below scenario(CLUSTER gtt\nusing INDEX).\n\n*-- Session1:*\npostgres=# create global temporary table gtt (c1 integer) on commit\npreserve rows;\nCREATE TABLE\npostgres=# create index idx1 on gtt (c1);\nCREATE INDEX\n\n*-- Session2:*\npostgres=# create index idx2 on gtt (c1);\nCREATE INDEX\n\n*-- Session1:*\npostgres=# cluster gtt using idx1;\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n!?>\n\n*-- Below is the stacktrace:*\n[edb@localhost bin]$ gdb -q -c data/core.95690 postgres\nReading symbols from\n/home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.\n[New LWP 95690]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: edb postgres [local] CLUSTER\n '.\nProgram terminated with signal 6, Aborted.\n#0 0x00007f9c574ee337 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install\nglibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\nkrb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64\nlibgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64\nopenssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64\nzlib-1.2.7-18.el7.x86_64\n(gdb) bt\n#0 0x00007f9c574ee337 in raise () from /lib64/libc.so.6\n#1 0x00007f9c574efa28 in abort () from /lib64/libc.so.6\n#2 0x0000000000ab3a3c in ExceptionalCondition (conditionName=0xb5e2e8\n\"!ReindexIsProcessingIndex(indexOid)\", errorType=0xb5d365\n\"FailedAssertion\",\n fileName=0xb5d4e9 \"index.c\", lineNumber=3825) at assert.c:67\n#3 0x00000000005b0412 in reindex_relation (relid=16384, flags=2,\noptions=0) at index.c:3825\n#4 0x000000000065e36d in finish_heap_swap (OIDOldHeap=16384,\nOIDNewHeap=16389, is_system_catalog=false, swap_toast_by_content=false,\n check_constraints=false, is_internal=true, frozenXid=491,\ncutoffMulti=1, newrelpersistence=103 'g') at cluster.c:1448\n#5 0x000000000065ccef in rebuild_relation (OldHeap=0x7f9c589adef0,\nindexOid=16387, verbose=false) at cluster.c:602\n#6 0x000000000065c757 in cluster_rel (tableOid=16384, indexOid=16387,\noptions=0) at cluster.c:418\n#7 0x000000000065c2cf in cluster (stmt=0x2cd1600, isTopLevel=true) at\ncluster.c:180\n#8 0x000000000093b213 in standard_ProcessUtility (pstmt=0x2cd16c8,\nqueryString=0x2cd0b30 \"cluster gtt using idx1;\",\ncontext=PROCESS_UTILITY_TOPLEVEL,\n params=0x0, 
queryEnv=0x0, dest=0x2cd19a8, dest=0x2cd19a8, qc=0x7ffcd32604b0) at\nutility.c:819\n#9 0x000000000093aa50 in ProcessUtility (pstmt=0x2cd16c8,\nqueryString=0x2cd0b30 \"cluster gtt using idx1;\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at utility.c:522\n#10 0x00000000009398c2 in PortalRunUtility (portal=0x2d36ba0,\npstmt=0x2cd16c8, isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8,\nqc=0x7ffcd32604b0)\n at pquery.c:1157\n#11 0x0000000000939ad8 in PortalRunMulti (portal=0x2d36ba0,\nisTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8, altdest=0x2cd19a8,\nqc=0x7ffcd32604b0)\n at pquery.c:1303\n#12 0x0000000000938ff6 in PortalRun (portal=0x2d36ba0,\ncount=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2cd19a8,\naltdest=0x2cd19a8,\n qc=0x7ffcd32604b0) at pquery.c:779\n#13 0x00000000009331b0 in exec_simple_query (query_string=0x2cd0b30\n\"cluster gtt using idx1;\") at postgres.c:1239\n#14 0x00000000009371bc in PostgresMain (argc=1, argv=0x2cfab80,\ndbname=0x2cfaa78 \"postgres\", username=0x2cfaa58 \"edb\") at postgres.c:4315\n#15 0x00000000008872a9 in BackendRun (port=0x2cf2b50) at postmaster.c:4510\n#16 0x0000000000886a9e in BackendStartup (port=0x2cf2b50) at\npostmaster.c:4202\n#17 0x000000000088301c in ServerLoop () at postmaster.c:1727\n#18 0x00000000008828f3 in PostmasterMain (argc=3, argv=0x2ccb460) at\npostmaster.c:1400\n#19 0x0000000000789c54 in main (argc=3, argv=0x2ccb460) at main.c:210\n(gdb)\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 24 Apr 2020 10:25:41 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\n\nWith the new patch(v30) as you mentioned the new syntax support for\n\"TRUNCATE TABLE gtt DROP\", but we also observe the syntax \"DROP TABLE gtt\nDROP\" is working 
as below:\n\npostgres=# create global temporary table gtt(c1 int) on commit preserve\nrows;\nCREATE TABLE\npostgres=# DROP TABLE gtt DROP;\nDROP TABLE\n\nDoes this syntax intensional? If not, we should get a syntax error.\n\nOn Fri, Apr 24, 2020 at 10:25 AM Prabhat Sahu <prabhat.sahu@enterprisedb.com>\nwrote:\n\n> Hi Wenjing,\n>\n> Please check, the server getting crash with the below scenario(CLUSTER gtt\n> using INDEX).\n>\n> *-- Session1:*\n> postgres=# create global temporary table gtt (c1 integer) on commit\n> preserve rows;\n> CREATE TABLE\n> postgres=# create index idx1 on gtt (c1);\n> CREATE INDEX\n>\n> *-- Session2:*\n> postgres=# create index idx2 on gtt (c1);\n> CREATE INDEX\n>\n> *-- Session1:*\n> postgres=# cluster gtt using idx1;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> !?>\n>\n> *-- Below is the stacktrace:*\n> [edb@localhost bin]$ gdb -q -c data/core.95690 postgres\n> Reading symbols from\n> /home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.\n> [New LWP 95690]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib64/libthread_db.so.1\".\n> Core was generated by `postgres: edb postgres [local] CLUSTER\n> '.\n> Program terminated with signal 6, Aborted.\n> #0 0x00007f9c574ee337 in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: debuginfo-install\n> glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\n> krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64\n> libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64\n> openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64\n> zlib-1.2.7-18.el7.x86_64\n> (gdb) bt\n> #0 0x00007f9c574ee337 in raise () from /lib64/libc.so.6\n> #1 0x00007f9c574efa28 in abort () from /lib64/libc.so.6\n> #2 0x0000000000ab3a3c in ExceptionalCondition 
(conditionName=0xb5e2e8\n> \"!ReindexIsProcessingIndex(indexOid)\", errorType=0xb5d365\n> \"FailedAssertion\",\n> fileName=0xb5d4e9 \"index.c\", lineNumber=3825) at assert.c:67\n> #3 0x00000000005b0412 in reindex_relation (relid=16384, flags=2,\n> options=0) at index.c:3825\n> #4 0x000000000065e36d in finish_heap_swap (OIDOldHeap=16384,\n> OIDNewHeap=16389, is_system_catalog=false, swap_toast_by_content=false,\n> check_constraints=false, is_internal=true, frozenXid=491,\n> cutoffMulti=1, newrelpersistence=103 'g') at cluster.c:1448\n> #5 0x000000000065ccef in rebuild_relation (OldHeap=0x7f9c589adef0,\n> indexOid=16387, verbose=false) at cluster.c:602\n> #6 0x000000000065c757 in cluster_rel (tableOid=16384, indexOid=16387,\n> options=0) at cluster.c:418\n> #7 0x000000000065c2cf in cluster (stmt=0x2cd1600, isTopLevel=true) at\n> cluster.c:180\n> #8 0x000000000093b213 in standard_ProcessUtility (pstmt=0x2cd16c8,\n> queryString=0x2cd0b30 \"cluster gtt using idx1;\",\n> context=PROCESS_UTILITY_TOPLEVEL,\n> params=0x0, queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at\n> utility.c:819\n> #9 0x000000000093aa50 in ProcessUtility (pstmt=0x2cd16c8,\n> queryString=0x2cd0b30 \"cluster gtt using idx1;\",\n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n> queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at utility.c:522\n> #10 0x00000000009398c2 in PortalRunUtility (portal=0x2d36ba0,\n> pstmt=0x2cd16c8, isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8,\n> qc=0x7ffcd32604b0)\n> at pquery.c:1157\n> #11 0x0000000000939ad8 in PortalRunMulti (portal=0x2d36ba0,\n> isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8, altdest=0x2cd19a8,\n> qc=0x7ffcd32604b0)\n> at pquery.c:1303\n> #12 0x0000000000938ff6 in PortalRun (portal=0x2d36ba0,\n> count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2cd19a8,\n> altdest=0x2cd19a8,\n> qc=0x7ffcd32604b0) at pquery.c:779\n> #13 0x00000000009331b0 in exec_simple_query (query_string=0x2cd0b30\n> \"cluster gtt using idx1;\") at 
postgres.c:1239\n> #14 0x00000000009371bc in PostgresMain (argc=1, argv=0x2cfab80,\n> dbname=0x2cfaa78 \"postgres\", username=0x2cfaa58 \"edb\") at postgres.c:4315\n> #15 0x00000000008872a9 in BackendRun (port=0x2cf2b50) at postmaster.c:4510\n> #16 0x0000000000886a9e in BackendStartup (port=0x2cf2b50) at\n> postmaster.c:4202\n> #17 0x000000000088301c in ServerLoop () at postmaster.c:1727\n> #18 0x00000000008828f3 in PostmasterMain (argc=3, argv=0x2ccb460) at\n> postmaster.c:1400\n> #19 0x0000000000789c54 in main (argc=3, argv=0x2ccb460) at main.c:210\n> (gdb)\n>\n> --\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 24 Apr 2020 12:58:46 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 4/22/20 2:49 PM, 曾文旌 wrote:\n>\n> I provide the TRUNCATE tablename DROP to clear the data in the GTT and \n> delete the storage files.\n> This feature requires the current transaction to commit immediately 
\n> after it finishes truncate.\n>\nThanks Wenjing , Please refer this scenario\n\npostgres=# create global temp table testing (a int);\nCREATE TABLE\npostgres=# begin;\nBEGIN\npostgres=*# truncate testing;      -- working   [1]\nTRUNCATE TABLE\npostgres=*# truncate testing drop;\nERROR:  Truncate global temporary table cannot run inside a transaction \nblock    --that is throwing an error claiming something which i did  \nsuccessfully [1]\npostgres=!#\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Fri, 24 Apr 2020 18:33:03 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月23日 下午3:43,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> čt 23. 4. 2020 v 9:10 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n>> 2020年4月22日 下午10:50,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> \n>> \n>> st 22. 4. 2020 v 16:38 odesílatel Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> napsal:\n>> \n>> \n>> On Wed, Apr 22, 2020 at 2:49 PM 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>>> \n>>> Although the implementation of GTT is different, I think so TRUNCATE on Postgres (when it is really finalized) can remove session metadata of GTT too (and reduce usage's counter). It is not critical feature, but I think so it should not be hard to implement. From practical reason can be nice to have a tool how to refresh GTT without a necessity to close session. TRUNCATE can be this tool.\n> Sorry, I don't quite understand what you mean, could you describe it in detail? 
\n> In my opinion the TRUNCATE GTT cannot clean up data in other sessions, especially clean up local buffers in other sessions.\n> \n> It is about a possibility to force reset GTT to default empty state for all sessions.\n> \n> Maybe it is some what does your TRUNCATE DROP, but I don't think so this design (TRUNCATE DROP) is good, because then user have to know implementation detail.\n> \n> I prefer some like TRUNCATE tab WITH OPTION (GLOBAL, FORCE) - \"GLOBAL\" .. apply on all sessions, FORCE try to do without waiting on some global lock, try to do immediately with possibility to cancel some statements and rollback some session.\n> \n> instead GLOBAL maybe we can use \"ALLSESSION\", or \"ALL SESSION\" or some else\n> \n> But I like possible terminology LOCAL x GLOBAL for GTT. What I mean? Some statements like \"TRUNCATE\" can works (by default) in \"local\" mode .. it has impact to current session only. But sometimes can be executed in \"global\" mode with effect on all sessions.\nThe TRUNCATE GTT GLOBAL like DROP GTT FORCE you mentioned that before.\nI think this requires identifying sessions that have initialized the stored file and no actual data.\nAnd Handling local buffers on other session and locks is also difficult.\nIt may be harder than dropping the GTT force, which can kill other sessions, but TRUNCATE GTT would prefer not to.\nThis doesn't seem to complete the basic conditions, it's not easy.\nSo, I want to put this feature in next releases, along with DROP GTT FORCE.\nAlso, in view of your comments, I roll back the feature of TRUNCATE GTT DROP.\n\n\n\nWenjing\n\n> \n> \n> \n>> Yes, I think we need a way to delete the GTT local storage without closing the session.\n>> \n>> I provide the TRUNCATE tablename DROP to clear the data in the GTT and delete the storage files.\n>> This feature requires the current transaction to commit immediately after it finishes truncate.\n>> \n>> Hi Wenjing,\n>> Thanks for the patch(v30) for the new syntax support for 
(TRUNCATE table_name DROP) for deleting storage files after TRUNCATE on GTT.\n>> \n>> This syntax looks strange, and I don't think so it solve anything in practical life, because without lock the table will be used in few seconds by other sessions.\n> \n> If a dba wants to delete or modify a GTT, he can use locks to help him make the change. \n> \n> postgres=# begin;\n> BEGIN\n> postgres=*# LOCK TABLE gtt2 IN ACCESS EXCLUSIVE MODE;\n> postgres=*# select * from pg_gtt_attached_pids ;\n> \n> Kill session or let session do TRUNCATE tablename DROP\n> \n> postgres=*# drop table gtt2;\n> DROP TABLE\n> postgres=*# commit;\n> COMMIT\n> \n> yes, user can lock a tables. But I think so it is user friendly design. I don't remember any statement in Postgres, where I have to use table locks explicitly.\n> \n> For builtin commands it should be done transparently (for user).\nIt can be improved ,like DROP GTT FORCE.\n\n> \n> Regards\n> \n> Pavel\n> \n> \n>> \n>> This is same topic when we talked about ALTER - when and where the changes should be applied. 
\n>> \n>> The CLUSTER commands works only on session private data, so it should not to need some special lock or some special cleaning before.\n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> Please check below scenarios:\n>> \n>> Case1:\n>> -- session1:\n>> postgres=# create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>> CREATE TABLE\n>> postgres=# create index idx1 on gtt2 (c1);\n>> CREATE INDEX\n>> postgres=# create index idx2 on gtt2 (c1) where c1%2 =0;\n>> CREATE INDEX\n>> postgres=# \n>> postgres=# CLUSTER gtt2 USING idx1;\n>> CLUSTER\n>> postgres=# CLUSTER gtt2 USING idx2;\n>> ERROR: cannot cluster on partial index \"idx2\" \n>> \n>> Case2:\n>> -- Session2:\n>> postgres=# CLUSTER gtt2 USING idx1;\n>> CLUSTER\n>> postgres=# CLUSTER gtt2 USING idx2;\n>> CLUSTER\n>> \n>> postgres=# insert into gtt2 values(1);\n>> INSERT 0 1\n>> postgres=# CLUSTER gtt2 USING idx1;\n>> CLUSTER\n>> postgres=# CLUSTER gtt2 USING idx2;\n>> ERROR: cannot cluster on partial index \"idx2\"\n>> \n>> Case3:\n>> -- Session2:\n>> postgres=# TRUNCATE gtt2 DROP;\n>> TRUNCATE TABLE\n>> postgres=# CLUSTER gtt2 USING idx1;\n>> CLUSTER\n>> postgres=# CLUSTER gtt2 USING idx2;\n>> CLUSTER\n>> \n>> In Case2, Case3 we can observe, with the absence of data in GTT, we are able to \"CLUSTER gtt2 USING idx2;\" (having partial index)\n>> But why does the same query fail for Case1 (absence of data)?\n>> \n>> Thanks,\n>> Prabhat Sahu\n>> \n>> \n>> \n>> \n>> Wenjing\n>> \n>> \n>>> \n>>> Regards\n>>> \n>>> Pavel\n>>> \n>>> \n>>> All in all, I think the current implementation is sufficient for dba to manage GTT.\n>>> \n>>>> 2020年4月2日 下午4:45,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>>> \n>>>> Hi All,\n>>>> \n>>>> I have noted down few behavioral difference in our GTT implementation in PG as compared to Oracle DB:\n>>>> As per my understanding, the behavior of DROP TABLE in case of \"Normal table and GTT\" in Oracle DB are as below:\n>>>> Any 
tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.\n>>>> For a completed transaction on a normal table having data, we will be able to DROP from another session. If the transaction is not yet complete, and we are trying to drop the table from another session, then we will get an error. (working as expected)\n>>>> For a completed transaction on GTT with(on commit delete rows) (i.e. no data in GTT) in a session, we will be able to DROP from another session.\n>>>> For a completed transaction on GTT with(on commit preserve rows) with data in a session, we will not be able to DROP from any session(not even from the session in which GTT is created), we need to truncate the table data first from all the session(session1, session2) which is having data.\n>>>> 1. Any tables(Normal table / GTT) without having data in a session, we will be able to DROP from another session.\n>>>> Session1:\n>>>> create table t1 (c1 integer);\n>>>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>>> \n>>>> Session2:\n>>>> drop table t1;\n>>>> drop table gtt1;\n>>>> drop table gtt2;\n>>>> \n>>>> -- Issue 1: But we are able to drop a simple table and failed to drop GTT as below.\n>>>> postgres=# drop table t1;\n>>>> DROP TABLE\n>>>> postgres=# drop table gtt1;\n>>>> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n>>>> postgres=# drop table gtt2;\n>>>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>>>> \n>>>> 3. For a completed transaction on GTT with(on commit delete rows) (i.e. 
no data in GTT) in a session, we will be able to DROP from another session.\n>>>> Session1:\n>>>> create global temporary table gtt1 (c1 integer) on commit delete rows;\n>>>> \n>>>> Session2:\n>>>> drop table gtt1;\n>>>> \n>>>> -- Issue 2: But we are getting error for GTT with(on_commit_delete_rows) without data.\n>>>> postgres=# drop table gtt1;\n>>>> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n>>>> \n>>>> 4. For a completed transaction on GTT with(on commit preserve rows) with data in any session, we will not be able to DROP from any session(not even from the session in which GTT is created)\n>>>> \n>>>> Case1:\n>>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>>> insert into gtt2 values(100);\n>>>> drop table gtt2;\n>>>> \n>>>> SQL> drop table gtt2;\n>>>> drop table gtt2\n>>>> *\n>>>> ERROR at line 1:\n>>>> ORA-14452: attempt to create, alter or drop an index on temporary table already in use\n>>>> \n>>>> -- Issue 3: But, we are able to drop the GTT(having data) which we have created in the same session.\n>>>> postgres=# drop table gtt2;\n>>>> DROP TABLE\n>>>> \n>>>> Case2: GTT with(on commit preserve rows) having data in both session1 and session2\n>>>> Session1:\n>>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;\n>>>> insert into gtt2 values(100);\n>>>> \n>>>> Session2:\n>>>> insert into gtt2 values(200);\n>>>> \n>>>> -- If we try to drop the table from any session we should get an error, it is working fine.\n>>>> drop table gtt2;\n>>>> SQL> drop table gtt2;\n>>>> drop table gtt2\n>>>> *\n>>>> ERROR at line 1:\n>>>> ORA-14452: attempt to create, alter or drop an index on temporary table already in use\n>>>> \n>>>> postgres=# drop table gtt2 ;\n>>>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>>>> \n>>>> -- To drop the table gtt2 from any session1/session2, we need to truncate the table data first from all the 
session(session1, session2) which is having data.\n>>>> Session1:\n>>>> truncate table gtt2;\n>>>> -- Session2:\n>>>> truncate table gtt2;\n>>>> \n>>>> Session 2:\n>>>> SQL> drop table gtt2;\n>>>> \n>>>> Table dropped.\n>>>> \n>>>> -- Issue 4: But we are not able to drop the GTT, even after TRUNCATE the table in all the sessions.\n>>>> -- truncate from all sessions where GTT have data.\n>>>> postgres=# truncate gtt2 ;\n>>>> TRUNCATE TABLE\n>>>> \n>>>> -- try to DROP GTT still, we are getting error.\n>>>> postgres=# drop table gtt2 ;\n>>>> ERROR: can not drop relation gtt2 when other backend attached this global temp table\n>>>> \n>>>> To drop the GTT from any session, we need to exit from all other sessions.\n>>>> postgres=# drop table gtt2 ;\n>>>> DROP TABLE\n>>>> \n>>>> Kindly let me know if I am missing something.\n>>>> \n>>>> \n>>>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> wrote:\n>>>> Hi Wenjing,\n>>>> I hope we need to change the below error message.\n>>>> \n>>>> postgres=# create global temporary table gtt(c1 int) on commit preserve rows;\n>>>> CREATE TABLE\n>>>> \n>>>> postgres=# create materialized view mvw as select * from gtt;\n>>>> ERROR: materialized views must not use global temporary tables or views\n>>>> \n>>>> Anyways we are not allowed to create a \"global temporary view\", \n>>>> so the above ERROR message should change(i.e. 
\" or view\" need to be removed from the error message) something like:\n>>>> \"ERROR: materialized views must not use global temporary tables\"\n>>>> \n>>>> -- \n>>>> With Regards,\n>>>> Prabhat Kumar Sahu\n>>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>>>> \n>>>> \n>>>> -- \n>>>> With Regards,\n>>>> Prabhat Kumar Sahu\n>>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>>> \n>> \n>> \n>> \n>> -- \n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>", "msg_date": "Sun, 26 Apr 2020 16:09:02 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\n\nPlease check the below scenario shows different error message with \"DROP\nTABLE gtt;\" for gtt with and without index.\n\n*-- Session1:*postgres=# create global temporary table gtt1 (c1 int);\nCREATE TABLE\npostgres=# create global temporary table gtt2 (c1 int);\nCREATE TABLE\npostgres=# create index idx2 on gtt2(c1);\nCREATE INDEX\n\n\n*-- Session2:*postgres=# drop table gtt1;\nERROR: can not drop relation gtt1 when other backend attached this global\ntemp table\npostgres=# drop table gtt2;\nERROR: can not drop index gtt2 when other backend attached this global\ntemp table.\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\nHi Wenjing,Please check the below scenario shows different error message with \"DROP TABLE gtt;\" for gtt with and without index.-- Session1:postgres=# create global temporary table gtt1 (c1 int);CREATE TABLEpostgres=# create global temporary table gtt2 (c1 int);CREATE TABLEpostgres=# create index idx2 on gtt2(c1);CREATE INDEX-- Session2:postgres=# drop table gtt1;ERROR:  can not drop relation gtt1 when other backend attached this global temp tablepostgres=# drop table gtt2;ERROR:  can not drop index 
gtt2 when other backend attached this global temp table.-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Apr 2020 14:56:42 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月24日 下午12:55,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi Wenjing,\n> \n> Please check, the server getting crash with the below scenario(CLUSTER gtt using INDEX).\n> \n> -- Session1:\n> postgres=# create global temporary table gtt (c1 integer) on commit preserve rows;\n> CREATE TABLE\n> postgres=# create index idx1 on gtt (c1);\n> CREATE INDEX\n> \n> -- Session2:\n> postgres=# create index idx2 on gtt (c1);\n> CREATE INDEX\n> \n> -- Session1:\n> postgres=# cluster gtt using idx1;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. 
Attempting reset: Failed.\nThanks for review, I fixed In v31.\n\n\nWenjing\n\n\n\n> !?>\n> \n> -- Below is the stacktrace:\n> [edb@localhost bin]$ gdb -q -c data/core.95690 postgres \n> Reading symbols from /home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.\n> [New LWP 95690]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib64/libthread_db.so.1\".\n> Core was generated by `postgres: edb postgres [local] CLUSTER '.\n> Program terminated with signal 6, Aborted.\n> #0 0x00007f9c574ee337 in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: debuginfo-install glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64\n> (gdb) bt\n> #0 0x00007f9c574ee337 in raise () from /lib64/libc.so.6\n> #1 0x00007f9c574efa28 in abort () from /lib64/libc.so.6\n> #2 0x0000000000ab3a3c in ExceptionalCondition (conditionName=0xb5e2e8 \"!ReindexIsProcessingIndex(indexOid)\", errorType=0xb5d365 \"FailedAssertion\", \n> fileName=0xb5d4e9 \"index.c\", lineNumber=3825) at assert.c:67\n> #3 0x00000000005b0412 in reindex_relation (relid=16384, flags=2, options=0) at index.c:3825\n> #4 0x000000000065e36d in finish_heap_swap (OIDOldHeap=16384, OIDNewHeap=16389, is_system_catalog=false, swap_toast_by_content=false, \n> check_constraints=false, is_internal=true, frozenXid=491, cutoffMulti=1, newrelpersistence=103 'g') at cluster.c:1448\n> #5 0x000000000065ccef in rebuild_relation (OldHeap=0x7f9c589adef0, indexOid=16387, verbose=false) at cluster.c:602\n> #6 0x000000000065c757 in cluster_rel (tableOid=16384, indexOid=16387, options=0) at cluster.c:418\n> #7 0x000000000065c2cf in cluster (stmt=0x2cd1600, isTopLevel=true) at cluster.c:180\n> #8 0x000000000093b213 in standard_ProcessUtility (pstmt=0x2cd16c8, queryString=0x2cd0b30 
\"cluster gtt using idx1;\", context=PROCESS_UTILITY_TOPLEVEL, \n> params=0x0, queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at utility.c:819\n> #9 0x000000000093aa50 in ProcessUtility (pstmt=0x2cd16c8, queryString=0x2cd0b30 \"cluster gtt using idx1;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, \n> queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at utility.c:522\n> #10 0x00000000009398c2 in PortalRunUtility (portal=0x2d36ba0, pstmt=0x2cd16c8, isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8, qc=0x7ffcd32604b0)\n> at pquery.c:1157\n> #11 0x0000000000939ad8 in PortalRunMulti (portal=0x2d36ba0, isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8, altdest=0x2cd19a8, qc=0x7ffcd32604b0)\n> at pquery.c:1303\n> #12 0x0000000000938ff6 in PortalRun (portal=0x2d36ba0, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2cd19a8, altdest=0x2cd19a8, \n> qc=0x7ffcd32604b0) at pquery.c:779\n> #13 0x00000000009331b0 in exec_simple_query (query_string=0x2cd0b30 \"cluster gtt using idx1;\") at postgres.c:1239\n> #14 0x00000000009371bc in PostgresMain (argc=1, argv=0x2cfab80, dbname=0x2cfaa78 \"postgres\", username=0x2cfaa58 \"edb\") at postgres.c:4315\n> #15 0x00000000008872a9 in BackendRun (port=0x2cf2b50) at postmaster.c:4510\n> #16 0x0000000000886a9e in BackendStartup (port=0x2cf2b50) at postmaster.c:4202\n> #17 0x000000000088301c in ServerLoop () at postmaster.c:1727\n> #18 0x00000000008828f3 in PostmasterMain (argc=3, argv=0x2ccb460) at postmaster.c:1400\n> #19 0x0000000000789c54 in main (argc=3, argv=0x2ccb460) at main.c:210\n> (gdb) \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Mon, 27 Apr 2020 17:45:28 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月24日 下午3:28,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> 
Hi Wenjing, \n> \n> With the new patch(v30) as you mentioned the new syntax support for \"TRUNCATE TABLE gtt DROP\", but we also observe the syntax \"DROP TABLE gtt DROP\" is working as below:\n> \n> postgres=# create global temporary table gtt(c1 int) on commit preserve rows;\n> CREATE TABLE\n> postgres=# DROP TABLE gtt DROP;\n> DROP TABLE\nFixed in v31.\nThe truncate GTT drop was also removed.\n\n\nWenjing\n\n\n\n> \n> Does this syntax intensional? If not, we should get a syntax error.\n> \n> On Fri, Apr 24, 2020 at 10:25 AM Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> wrote:\n> Hi Wenjing,\n> \n> Please check, the server getting crash with the below scenario(CLUSTER gtt using INDEX).\n> \n> -- Session1:\n> postgres=# create global temporary table gtt (c1 integer) on commit preserve rows;\n> CREATE TABLE\n> postgres=# create index idx1 on gtt (c1);\n> CREATE INDEX\n> \n> -- Session2:\n> postgres=# create index idx2 on gtt (c1);\n> CREATE INDEX\n> \n> -- Session1:\n> postgres=# cluster gtt using idx1;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. 
Attempting reset: Failed.\n> !?>\n> \n> -- Below is the stacktrace:\n> [edb@localhost bin]$ gdb -q -c data/core.95690 postgres \n> Reading symbols from /home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.\n> [New LWP 95690]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib64/libthread_db.so.1\".\n> Core was generated by `postgres: edb postgres [local] CLUSTER '.\n> Program terminated with signal 6, Aborted.\n> #0 0x00007f9c574ee337 in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: debuginfo-install glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64\n> (gdb) bt\n> #0 0x00007f9c574ee337 in raise () from /lib64/libc.so.6\n> #1 0x00007f9c574efa28 in abort () from /lib64/libc.so.6\n> #2 0x0000000000ab3a3c in ExceptionalCondition (conditionName=0xb5e2e8 \"!ReindexIsProcessingIndex(indexOid)\", errorType=0xb5d365 \"FailedAssertion\", \n> fileName=0xb5d4e9 \"index.c\", lineNumber=3825) at assert.c:67\n> #3 0x00000000005b0412 in reindex_relation (relid=16384, flags=2, options=0) at index.c:3825\n> #4 0x000000000065e36d in finish_heap_swap (OIDOldHeap=16384, OIDNewHeap=16389, is_system_catalog=false, swap_toast_by_content=false, \n> check_constraints=false, is_internal=true, frozenXid=491, cutoffMulti=1, newrelpersistence=103 'g') at cluster.c:1448\n> #5 0x000000000065ccef in rebuild_relation (OldHeap=0x7f9c589adef0, indexOid=16387, verbose=false) at cluster.c:602\n> #6 0x000000000065c757 in cluster_rel (tableOid=16384, indexOid=16387, options=0) at cluster.c:418\n> #7 0x000000000065c2cf in cluster (stmt=0x2cd1600, isTopLevel=true) at cluster.c:180\n> #8 0x000000000093b213 in standard_ProcessUtility (pstmt=0x2cd16c8, queryString=0x2cd0b30 \"cluster gtt using idx1;\", 
context=PROCESS_UTILITY_TOPLEVEL, \n> params=0x0, queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at utility.c:819\n> #9 0x000000000093aa50 in ProcessUtility (pstmt=0x2cd16c8, queryString=0x2cd0b30 \"cluster gtt using idx1;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, \n> queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at utility.c:522\n> #10 0x00000000009398c2 in PortalRunUtility (portal=0x2d36ba0, pstmt=0x2cd16c8, isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8, qc=0x7ffcd32604b0)\n> at pquery.c:1157\n> #11 0x0000000000939ad8 in PortalRunMulti (portal=0x2d36ba0, isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8, altdest=0x2cd19a8, qc=0x7ffcd32604b0)\n> at pquery.c:1303\n> #12 0x0000000000938ff6 in PortalRun (portal=0x2d36ba0, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2cd19a8, altdest=0x2cd19a8, \n> qc=0x7ffcd32604b0) at pquery.c:779\n> #13 0x00000000009331b0 in exec_simple_query (query_string=0x2cd0b30 \"cluster gtt using idx1;\") at postgres.c:1239\n> #14 0x00000000009371bc in PostgresMain (argc=1, argv=0x2cfab80, dbname=0x2cfaa78 \"postgres\", username=0x2cfaa58 \"edb\") at postgres.c:4315\n> #15 0x00000000008872a9 in BackendRun (port=0x2cf2b50) at postmaster.c:4510\n> #16 0x0000000000886a9e in BackendStartup (port=0x2cf2b50) at postmaster.c:4202\n> #17 0x000000000088301c in ServerLoop () at postmaster.c:1727\n> #18 0x00000000008828f3 in PostmasterMain (argc=3, argv=0x2ccb460) at postmaster.c:1400\n> #19 0x0000000000789c54 in main (argc=3, argv=0x2ccb460) at main.c:210\n> (gdb) \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Mon, 27 Apr 2020 17:46:47 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary 
tables" }, { "msg_contents": "> 2020年4月24日 下午9:03,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 4/22/20 2:49 PM, 曾文旌 wrote:\n>> \n>> I provide the TRUNCATE tablename DROP to clear the data in the GTT and delete the storage files.\n>> This feature requires the current transaction to commit immediately after it finishes truncate.\n>> \n> Thanks Wenjing , Please refer this scenario\n> \n> postgres=# create global temp table testing (a int);\n> CREATE TABLE\n> postgres=# begin;\n> BEGIN\n> postgres=*# truncate testing; -- working [1]\n> TRUNCATE TABLE\n> postgres=*# truncate testing drop;\n> ERROR: Truncate global temporary table cannot run inside a transaction block --that is throwing an error claiming something which i did successfully [1]\nThe truncate GTT drop was removed.\nSo the problem goes away.\n\n\nWenjing\n\n\n> postgres=!#\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL Company", "msg_date": "Mon, 27 Apr 2020 17:49:10 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月27日 下午5:26,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Hi Wenjing,\n> \n> Please check the below scenario shows different error message with \"DROP TABLE gtt;\" for gtt with and without index.\n> -- Session1:\n> postgres=# create global temporary table gtt1 (c1 int);\n> CREATE TABLE\n> postgres=# create global temporary table gtt2 (c1 int);\n> CREATE TABLE\n> postgres=# create index idx2 on gtt2(c1);\n> CREATE INDEX\n> \n> -- Session2:\n> postgres=# drop table gtt1;\n> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n> postgres=# drop table gtt2;\n> ERROR: can not drop index gtt2 when other backend attached this global temp table.\nFor DROP GTT, we need to drop the index on the table first. 
\nSo the indexes on the GTT are checked first.\nBut the error message needs to be fixed.\nFixed in v32\n\n\nwenjing\n\n\n\n\n\n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Mon, 27 Apr 2020 20:04:30 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Thanks Wenjing, for the fix patch for previous issues.\nI have verified the issues, now those fix look good to me.\nBut the below error message is confusing(for gtt2).\n\npostgres=# drop table gtt1;\nERROR: cannot drop global temp table gtt1 when other backend attached it.\n\npostgres=# drop table gtt2;\nERROR: cannot drop index idx2 on global temp table gtt2 when other backend\nattached it.\n\nI feel the above error message shown for \"DROP TABLE gtt2;\" is a bit\nconfusing(looks similar to DROP INDEX gtt2;).\nIf possible, can we keep the error message simple as \"ERROR: cannot drop\nglobal temp table gtt2 when other backend attached it.\"?\nI mean, without giving extra information for the index attached to that GTT.\n\nOn Mon, Apr 27, 2020 at 5:34 PM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n\n>\n>\n> 2020年4月27日 下午5:26,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>\n> Hi Wenjing,\n>\n> Please check the below scenario shows different error message with \"DROP\n> TABLE gtt;\" for gtt with and without index.\n>\n> *-- Session1:*postgres=# create global temporary table gtt1 (c1 int);\n> CREATE TABLE\n> postgres=# create global temporary table gtt2 (c1 int);\n> CREATE TABLE\n> postgres=# create index idx2 on gtt2(c1);\n> CREATE INDEX\n>\n>\n> *-- Session2:*postgres=# drop table gtt1;\n> ERROR: can not drop relation gtt1 when other backend attached this global\n> temp table\n> postgres=# drop table gtt2;\n> ERROR: can not drop index gtt2 when other backend attached this global\n> temp table.\n>\n> For 
DROP GTT, we need to drop the index on the table first.\n> So the indexes on the GTT are checked first.\n> But the error message needs to be fixed.\n> Fixed in v32\n>\n>\n> wenjing\n>\n>\n>\n>\n>\n> --\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\n
", "msg_date": "Mon, 27 Apr 2020 19:18:04 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月27日 下午9:48,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> Thanks Wenjing, for the fix patch for previous issues.\n> I have verified the issues, now those fix look good to me.\n> But the below error message is confusing(for gtt2).\n> \n> postgres=# drop table gtt1;\n> ERROR: cannot drop global temp table gtt1 when other backend attached it.\n> \n> postgres=# drop table gtt2;\n> ERROR: cannot drop index idx2 on global temp table gtt2 when other backend attached it.\n> \n> I feel the above error message shown for \"DROP TABLE gtt2;\" is a bit confusing(looks similar to DROP INDEX gtt2;).\n> If possible, can we keep the error message simple as \"ERROR: cannot drop global temp table gtt2 when other backend attached it.\"?\n> I mean, without giving extra information for the index attached to that GTT.\nFixed the error message to make the expression more accurate. 
In v33.\n\n\nWenjing\n\n\n\n> \n> On Mon, Apr 27, 2020 at 5:34 PM 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n> \n> \n>> 2020年4月27日 下午5:26,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>> \n>> Hi Wenjing,\n>> \n>> Please check the below scenario shows different error message with \"DROP TABLE gtt;\" for gtt with and without index.\n>> -- Session1:\n>> postgres=# create global temporary table gtt1 (c1 int);\n>> CREATE TABLE\n>> postgres=# create global temporary table gtt2 (c1 int);\n>> CREATE TABLE\n>> postgres=# create index idx2 on gtt2(c1);\n>> CREATE INDEX\n>> \n>> -- Session2:\n>> postgres=# drop table gtt1;\n>> ERROR: can not drop relation gtt1 when other backend attached this global temp table\n>> postgres=# drop table gtt2;\n>> ERROR: can not drop index gtt2 when other backend attached this global temp table.\n> For DROP GTT, we need to drop the index on the table first. \n> So the indexes on the GTT are checked first.\n> But the error message needs to be fixed.\n> Fixed in v32\n> \n> \n> wenjing\n> \n> \n> \n> \n>> \n>> -- \n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> \n> \n> \n> -- \n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Wed, 29 Apr 2020 11:22:42 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 4/29/20 8:52 AM, 曾文旌 wrote:\n> Fixed the error message to make the expression more accurate. 
In v33.\n\nThanks wenjing\n\nPlease refer this scenario  , where getting an error while performing \ncluster o/p\n\n1)\n\nX terminal -\n\npostgres=# create global temp table f(n int);\nCREATE TABLE\n\nY Terminal -\n\npostgres=# create index index12 on f(n);\nCREATE INDEX\npostgres=# \\q\n\nX terminal -\n\npostgres=# reindex index  index12;\nREINDEX\npostgres=#  cluster f using index12;\nERROR:  cannot cluster on invalid index \"index12\"\npostgres=# drop index index12;\nDROP INDEX\n\nif this is an expected  , could we try  to make the error message more \nsimpler, if possible.\n\nAnother issue  -\n\nX terminal -\n\npostgres=# create global temp table f11(n int);\nCREATE TABLE\npostgres=# create index ind1 on f11(n);\nCREATE INDEX\npostgres=# create index ind2 on f11(n);\nCREATE INDEX\npostgres=#\n\nY terminal -\n\npostgres=# drop table f11;\nERROR:  cannot drop index ind2 or global temporary table f11\nHINT:  Because the index is created on the global temporary table and \nother backend attached it.\npostgres=#\n\nit is only mentioning about ind2 index but what about ind1 and what if  \n- they have lots of indexes ?\ni  think - we should not mix index information while dropping the table \nand vice versa.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Wed, 29 Apr 2020 17:16:40 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年4月29日 下午7:46,tushar <tushar.ahuja@enterprisedb.com> 写道:\n> \n> On 4/29/20 8:52 AM, 曾文旌 wrote:\n>> Fixed the error message to make the expression more accurate. 
In v33.\n> \n> Thanks wenjing\n> \n> Please refer this scenario , where getting an error while performing cluster o/p\n> \n> 1)\n> \n> X terminal -\n> \n> postgres=# create global temp table f(n int);\n> CREATE TABLE\n> \n> Y Terminal -\n> \n> postgres=# create index index12 on f(n);\n> CREATE INDEX\n> postgres=# \\q\n> \n> X terminal -\n> \n> postgres=# reindex index index12;\n> REINDEX\n> postgres=# cluster f using index12;\n> ERROR: cannot cluster on invalid index \"index12\"\n> postgres=# drop index index12;\n> DROP INDEX\n> \n> if this is an expected , could we try to make the error message more simpler, if possible.\n> \n> Another issue -\n> \n> X terminal -\n> \n> postgres=# create global temp table f11(n int);\n> CREATE TABLE\n> postgres=# create index ind1 on f11(n);\n> CREATE INDEX\n> postgres=# create index ind2 on f11(n);\n> CREATE INDEX\n> postgres=#\n> \n> Y terminal -\n> \n> postgres=# drop table f11;\n> ERROR: cannot drop index ind2 or global temporary table f11\n> HINT: Because the index is created on the global temporary table and other backend attached it.\n> postgres=#\n> \n> it is only mentioning about ind2 index but what about ind1 and what if - they have lots of indexes ?\n> i think - we should not mix index information while dropping the table and vice versa.\npostgres=# drop index index12;\nERROR: cannot drop index index12 or global temporary table f\nHINT: Because the index is created on the global temporary table and other backend attached it.\n\npostgres=# drop table f;\nERROR: cannot drop index index12 or global temporary table f\nHINT: Because the index is created on the global temporary table and other backend attached it.\npostgres=#\n\nDropping an index on a GTT and dropping a GTT with an index can both trigger this message, so the message looks like this, and it feels like there's no better way to do it.\n\n\n\nWenjing\n\n\n\n> \n> -- \n> regards,tushar\n> EnterpriseDB https://www.enterprisedb.com/\n> The Enterprise PostgreSQL 
Company", "msg_date": "Thu, 7 May 2020 19:12:24 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Apr 29, 2020 at 8:52 AM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n\n> 2020年4月27日 下午9:48,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>\n> Thanks Wenjing, for the fix patch for previous issues.\n> I have verified the issues, now those fix look good to me.\n> But the below error message is confusing(for gtt2).\n>\n> postgres=# drop table gtt1;\n> ERROR: cannot drop global temp table gtt1 when other backend attached it.\n>\n> postgres=# drop table gtt2;\n> ERROR: cannot drop index idx2 on global temp table gtt2 when other\n> backend attached it.\n>\n> I feel the above error message shown for \"DROP TABLE gtt2;\" is a bit\n> confusing(looks similar to DROP INDEX gtt2;).\n> If possible, can we keep the error message simple as \"ERROR: cannot drop\n> global temp table gtt2 when other backend attached it.\"?\n> I mean, without giving extra information for the index attached to that\n> GTT.\n>\n> Fixed the error message to make the expression more accurate. In v33.\n>\n\nThanks Wenjing. 
We verified your latest patch(gtt_v33) focusing on all\nreported issues and they work fine.\nThanks.\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\n
", "msg_date": "Tue, 9 Jun 2020 17:45:34 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年6月9日 下午8:15,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n> \n> \n> \n> On Wed, Apr 29, 2020 at 8:52 AM 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>> 2020年4月27日 下午9:48,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>> \n>> Thanks Wenjing, for the fix patch for previous issues.\n>> I have verified the issues, now those fix look good to me.\n>> But the below error message is confusing(for gtt2).\n>> \n>> postgres=# drop table gtt1;\n>> ERROR: cannot drop global temp table gtt1 when other backend attached it.\n>> \n>> postgres=# drop table gtt2;\n>> ERROR: cannot drop index idx2 on global temp table gtt2 when other backend attached it.\n>> \n>> I feel the above error message shown for \"DROP TABLE gtt2;\" is a bit confusing(looks similar to DROP INDEX gtt2;).\n>> If possible, can we keep the error message simple as \"ERROR: cannot drop global temp table gtt2 when other backend attached it.\"?\n>> I mean, without giving extra information for the index attached to that GTT.\n> Fixed the error message to make the expression more accurate. In v33.\n> \n> Thanks Wenjing. 
\n> Thanks.\n> -- \n\nI'm very glad to hear such good news.\nI am especially grateful for your professional work on GTT.\nPlease feel free to let me know if there is anything you think could be improved.\n\n\nThanks.\n\n\nWenjing\n\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>", "msg_date": "Thu, 11 Jun 2020 10:13:08 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi\n\nčt 11. 6. 2020 v 4:13 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:\n\n>\n> 2020年6月9日 下午8:15,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>\n>\n>\n> On Wed, Apr 29, 2020 at 8:52 AM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n>\n>> 2020年4月27日 下午9:48,Prabhat Sahu <prabhat.sahu@enterprisedb.com> 写道:\n>>\n>> Thanks Wenjing, for the fix patch for previous issues.\n>> I have verified the issues, now those fix look good to me.\n>> But the below error message is confusing(for gtt2).\n>>\n>> postgres=# drop table gtt1;\n>> ERROR: cannot drop global temp table gtt1 when other backend attached it.\n>>\n>> postgres=# drop table gtt2;\n>> ERROR: cannot drop index idx2 on global temp table gtt2 when other\n>> backend attached it.\n>>\n>> I feel the above error message shown for \"DROP TABLE gtt2;\" is a bit\n>> confusing(looks similar to DROP INDEX gtt2;).\n>> If possible, can we keep the error message simple as \"ERROR: cannot\n>> drop global temp table gtt2 when other backend attached it.\"?\n>> I mean, without giving extra information for the index attached to that\n>> GTT.\n>>\n>> Fixed the error message to make the expression more accurate. In v33.\n>>\n>\n> Thanks Wenjing. 
We verified your latest patch(gtt_v33) focusing on all\n> reported issues and they work fine.\n> Thanks.\n> --\n>\n>\n> I'm very glad to hear such good news.\n> I am especially grateful for your professional work on GTT.\n> Please feel free to let me know if there is anything you think could be\n> improved.\n>\n>\n> Thanks.\n>\n>\n> Wenjing\n>\n\nthis patch needs rebase\n\nRegards\n\nPavel\n\n\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n
", "msg_date": "Mon, 6 Jul 2020 17:31:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年7月6日 下午11:31,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> Hi\n> \n> čt 11. 6. 2020 v 4:13 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n>> 2020年6月9日 下午8:15,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>> \n>> \n>> \n>> On Wed, Apr 29, 2020 at 8:52 AM 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> wrote:\n>>> 2020年4月27日 下午9:48,Prabhat Sahu <prabhat.sahu@enterprisedb.com <mailto:prabhat.sahu@enterprisedb.com>> 写道:\n>>> \n>>> Thanks Wenjing, for the fix patch for previous issues.\n>>> I have verified the issues, now those fix look good to me.\n>>> But the below error message is confusing(for gtt2).\n>>> \n>>> postgres=# drop table gtt1;\n>>> ERROR: cannot drop global temp table gtt1 when other backend attached it.\n>>> \n>>> postgres=# drop table gtt2;\n>>> ERROR: cannot drop index idx2 on global temp table gtt2 when other backend attached it.\n>>> \n>>> I feel the above error message shown for \"DROP TABLE gtt2;\" is a bit confusing(looks similar to DROP INDEX gtt2;).\n>>> If possible, can we keep the error message simple as \"ERROR: cannot drop global temp table gtt2 when other backend attached it.\"?\n>>> I mean, without giving extra information for the index attached to that GTT.\n>> Fixed the error message to make the expression more accurate. In v33.\n>> \n>> Thanks Wenjing. 
We verified your latest patch(gtt_v33) focusing on all reported issues and they work fine. \n>> Thanks.\n>> -- \n> \n> I'm very glad to hear such good news.\n> I am especially grateful for your professional work on GTT.\n> Please feel free to let me know if there is anything you think could be improved.\n> \n> \n> Thanks.\n> \n> \n> Wenjing\n> \n> this patch needs rebase\n\nGTT Merge the latest PGMaster and resolves conflicts.\n\n\nWenjing\n\n\n\n\n\n\n\n\n\n\n\n> \n> Regards\n> \n> Pavel\n> \n> \n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>", "msg_date": "Tue, 7 Jul 2020 11:47:16 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi\n\n\n> GTT Merge the latest PGMaster and resolves conflicts.\n>\n>\n>\nI tested it and it looks fine. I think it is very usable in current form,\nbut still there are some issues:\n\npostgres=# create global temp table foo(a int);\nCREATE TABLE\npostgres=# insert into foo values(10);\nINSERT 0 1\npostgres=# alter table foo add column x int;\nALTER TABLE\npostgres=# analyze foo;\nWARNING: reloid 16400 not support update attstat after add colunm\nWARNING: reloid 16400 not support update attstat after add colunm\nANALYZE\n\nPlease, can you summarize what is done, what limits are there, what can be\nimplemented hard, what can be implemented easily?\n\n\n\nI found one open question - how can be implemented table locks - because\ndata is physically separated, then we don't need table locks as protection\nagainst race conditions.\n\nNow, table locks are implemented on a global level. So exclusive lock on\nGTT in one session block insertion on the second session. Is it expected\nbehaviour? 
It is safe, but maybe it is too strict. \n\nWe should define what table lock is meaning on GTT.\n\nRegards\n\nPavel\n\n\n> Pavel\n>\n>\n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>>\n>>\n>\n", "msg_date": "Tue, 7 Jul 2020 12:05:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "HI all\n\nI started using my personal email to respond to community issue.\n\n\n\n> 2020年7月7日 下午6:05,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> Hi\n> \n> GTT Merge the latest PGMaster and resolves conflicts.\n> \n> \n> \n> I tested it and it looks fine. 
I think it is very usable in current form, but still there are some issues:\n> \n> postgres=# create global temp table foo(a int);\n> CREATE TABLE\n> postgres=# insert into foo values(10);\n> INSERT 0 1\n> postgres=# alter table foo add column x int;\n> ALTER TABLE\n> postgres=# analyze foo;\n> WARNING: reloid 16400 not support update attstat after add colunm\n> WARNING: reloid 16400 not support update attstat after add colunm\n> ANALYZE\nThis is a limitation that we can completely eliminate.\n\n> \n> Please, can you summarize what is done, what limits are there, what can be implemented hard, what can be implemented easily?\nSure.\n\nThe current version of the GTT implementation supports all regular table operations.\n1 what is done\n1.1 insert/update/delete on GTT.\n1.2 GTT supports all types of indexes, and queries can use GTT indexes to speed up reading data in the GTT.\n1.3 GTT statistics are kept as a session-local copy, which is provided to the optimizer to choose the best query plan.\n1.4 analyze and vacuum on GTT.\n1.5 truncate and cluster on GTT.\n1.6 all DDL on GTT.\n1.7 a GTT can use either a GTT sequence or a regular sequence.\n1.8 support for creating views on GTT.\n1.9 support for creating views on foreign key.\n1.10 support for global temp partitions.\n\nI believe this covers all the necessary GTT requirements.\n\nFor cluster on GTT, I think it's complicated.\nI'm not sure the current implementation is quite reasonable. Maybe you can help review it.\n\n\n> \n> \n> \n> I found one open question - how can be implemented table locks - because data is physically separated, then we don't need table locks as protection against race conditions. 
\nYes, but GTT’s DML DDL still requires table locking.\n1 The DML requires table locks (RowExclusiveLock) to ensure that \ndefinitions do not change during run time (the DDL may modify or delete them).\nThis part of the implementation does not actually change the code,\nbecause the DML on GTT does not block each other between sessions.\n\n2 For truncate/analyze/vacuum reinidex cluster GTT is now like DML, \nthey only modify local data and do not modify the GTT definition.\nSo I lowered the table lock level held by the GTT, only need RowExclusiveLock.\n\n3 For DDLs that need to be modified the GTT table definition(Drop GTT Alter GTT), \nan exclusive level of table locking is required(AccessExclusiveLock), \nas is the case for regular table.\nThis part of the implementation also does not actually change the code.\n\nSummary: What I have done is to adjust the GTT lock levels in different types of statements based on the above thinking.\nFor example, truncate GTT, I'm reducing the GTT holding table lock level to RowExclusiveLock,\nSo We can truncate data in the same GTT between different sessions at the same time.\n\nWhat do you think about table locks on GTT?\n\n\nWenjing\n\n\n> \n> Now, table locks are implemented on a global level. So exclusive lock on GTT in one session block insertion on the second session. Is it expected behaviour? It is safe, but maybe it is too strict. \n> \n> We should define what table lock is meaning on GTT.\n> \n> Regards\n> \n> Pavel\n> \n> Pavel\n> \n> \n>> With Regards,\n>> Prabhat Kumar Sahu\n>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> \n> \n\n\nHI allI started using my personal email to respond to community issue.2020年7月7日 下午6:05,Pavel Stehule <pavel.stehule@gmail.com> 写道:Hi GTT Merge the latest PGMaster and resolves conflicts.I tested it and it looks fine. 
We should define what table lock is meaning on GTT.RegardsPavel PavelWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 10 Jul 2020 17:03:56 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年7月10日 下午5:03,wenjing zeng <wjzeng2012@gmail.com> 写道:\n> \n> HI all\n> \n> I started using my personal email to respond to community issue.\n> \n> \n> \n>> 2020年7月7日 下午6:05,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> Hi\n>> \n>> GTT Merge the latest PGMaster and resolves conflicts.\n>> \n>> \n>> \n>> I tested it and it looks fine. I think it is very usable in current form, but still there are some issues:\n>> \n>> postgres=# create global temp table foo(a int);\n>> CREATE TABLE\n>> postgres=# insert into foo values(10);\n>> INSERT 0 1\n>> postgres=# alter table foo add column x int;\n>> ALTER TABLE\n>> postgres=# analyze foo;\n>> WARNING: reloid 16400 not support update attstat after add colunm\n>> WARNING: reloid 16400 not support update attstat after add colunm\n>> ANALYZE\n> This is a limitation that we can completely eliminate.\n> \n>> \n>> Please, can you summarize what is done, what limits are there, what can be implemented hard, what can be implemented easily?\n> Sure.\n> \n> The current version of the GTT implementation supports all regular table operations.\n> 1 what is done\n> 1.1 insert/update/delete on GTT.\n> 1.2 The GTT supports all types of indexes, and the query statement supports the use of GTT indexes to speed up the reading of data in the GTT.\n> 1.3 GTT statistics keep a copy of THE GTT local statistics, which are provided to the optimizer to choose the best query plan.\n> 1.4 analyze vacuum GTT.\n> 1.5 truncate cluster GTT.\n> 1.6 all DDL on GTT.\n> 1.7 GTT table can use GTT sequence or Regular sequence.\n> 1.8 Support for creating views on GTT.\n> 1.9 Support 
for creating views on foreign key.\n> 1.10 support global temp partition.\n> \n> I feel like I cover all the necessary GTT requirements.\n> \n> For cluster GTT,I think it's complicated.\n> I'm not sure the current implementation is quite reasonable. Maybe you can help review it.\n> \n> \n>> \n>> \n>> \n>> I found one open question - how can be implemented table locks - because data is physically separated, then we don't need table locks as protection against race conditions. \n> Yes, but GTT’s DML DDL still requires table locking.\n> 1 The DML requires table locks (RowExclusiveLock) to ensure that \n> definitions do not change during run time (the DDL may modify or delete them).\n> This part of the implementation does not actually change the code,\n> because the DML on GTT does not block each other between sessions.\nAs a side note: since the same row of GTT data cannot be modified by different sessions,\nI don't see the need to maintain the GTT's pg_class.relminmxid.\nWhat do you think?\n\n\nWenjing\n\n\n> \n> 2 For truncate/analyze/vacuum reinidex cluster GTT is now like DML, \n> they only modify local data and do not modify the GTT definition.\n> So I lowered the table lock level held by the GTT, only need RowExclusiveLock.\n> \n> 3 For DDLs that need to be modified the GTT table definition(Drop GTT Alter GTT), \n> an exclusive level of table locking is required(AccessExclusiveLock), \n> as is the case for regular table.\n> This part of the implementation also does not actually change the code.\n> \n> Summary: What I have done is to adjust the GTT lock levels in different types of statements based on the above thinking.\n> For example, truncate GTT, I'm reducing the GTT holding table lock level to RowExclusiveLock,\n> So We can truncate data in the same GTT between different sessions at the same time.\n> \n> What do you think about table locks on GTT?\n> \n> \n> Wenjing\n> \n> \n>> \n>> Now, table locks are implemented on a global level. 
So exclusive lock on GTT in one session block insertion on the second session. Is it expected behaviour? It is safe, but maybe it is too strict. \n>> \n>> We should define what table lock is meaning on GTT.\n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> Pavel\n>> \n>> \n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> \n>> \n> \n\n\n2020年7月10日 下午5:03,wenjing zeng <wjzeng2012@gmail.com> 写道:HI allI started using my personal email to respond to community issue.2020年7月7日 下午6:05,Pavel Stehule <pavel.stehule@gmail.com> 写道:Hi GTT Merge the latest PGMaster and resolves conflicts.I tested it and it looks fine. I think it is very usable in current form, but still there are some issues:postgres=# create global temp table foo(a int);CREATE TABLEpostgres=# insert into foo values(10);INSERT 0 1postgres=# alter table foo add column x int;ALTER TABLEpostgres=# analyze foo;WARNING:  reloid 16400 not support update attstat after add colunmWARNING:  reloid 16400 not support update attstat after add colunmANALYZEThis is a limitation that we can completely eliminate.Please, can you summarize what is done, what limits are there, what can be implemented hard, what can be implemented easily?Sure.The current version of the GTT implementation supports all regular table operations.1 what is done1.1 insert/update/delete on GTT.1.2 The GTT supports all types of indexes, and the query statement supports the use of GTT indexes to speed up the reading of data in the GTT.1.3 GTT statistics keep a copy of THE GTT local statistics, which are provided to the optimizer to choose the best query plan.1.4 analyze vacuum GTT.1.5 truncate cluster GTT.1.6 all DDL on GTT.1.7 GTT table can use  GTT sequence  or Regular sequence.1.8 Support for creating views on GTT.1.9 Support for creating views on foreign key.1.10 support global temp partition.I feel like I cover all the necessary GTT requirements.For cluster GTT,I think it's 
We should define what table lock is meaning on GTT.RegardsPavel PavelWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Jul 2020 19:59:24 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "po 13. 7. 2020 v 13:59 odesílatel wenjing zeng <wjzeng2012@gmail.com>\nnapsal:\n\n>\n>\n> 2020年7月10日 下午5:03,wenjing zeng <wjzeng2012@gmail.com> 写道:\n>\n> HI all\n>\n> I started using my personal email to respond to community issue.\n>\n>\n>\n> 2020年7月7日 下午6:05,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n> Hi\n>\n>\n>> GTT Merge the latest PGMaster and resolves conflicts.\n>>\n>>\n>>\n> I tested it and it looks fine. I think it is very usable in current form,\n> but still there are some issues:\n>\n> postgres=# create global temp table foo(a int);\n> CREATE TABLE\n> postgres=# insert into foo values(10);\n> INSERT 0 1\n> postgres=# alter table foo add column x int;\n> ALTER TABLE\n> postgres=# analyze foo;\n> WARNING: reloid 16400 not support update attstat after add colunm\n> WARNING: reloid 16400 not support update attstat after add colunm\n> ANALYZE\n>\n> This is a limitation that we can completely eliminate.\n>\n>\n> Please, can you summarize what is done, what limits are there, what can be\n> implemented hard, what can be implemented easily?\n>\n> Sure.\n>\n> The current version of the GTT implementation supports all regular table\n> operations.\n> 1 what is done\n> 1.1 insert/update/delete on GTT.\n> 1.2 The GTT supports all types of indexes, and the query statement\n> supports the use of GTT indexes to speed up the reading of data in the GTT.\n> 1.3 GTT statistics keep a copy of THE GTT local statistics, which are\n> provided to the optimizer to choose the best query plan.\n> 1.4 analyze vacuum GTT.\n> 1.5 truncate cluster GTT.\n> 1.6 all DDL on GTT.\n> 1.7 GTT table can use GTT sequence or Regular sequence.\n> 1.8 
Support for creating views on GTT.\n> 1.9 Support for creating views on foreign key.\n> 1.10 support global temp partition.\n>\n> I feel like I cover all the necessary GTT requirements.\n>\n> For cluster GTT,I think it's complicated.\n> I'm not sure the current implementation is quite reasonable. Maybe you can\n> help review it.\n>\n>\n>\n>\n>\n> I found one open question - how can be implemented table locks - because\n> data is physically separated, then we don't need table locks as protection\n> against race conditions.\n>\n> Yes, but GTT’s DML DDL still requires table locking.\n> 1 The DML requires table locks (RowExclusiveLock) to ensure that\n> definitions do not change during run time (the DDL may modify or delete\n> them).\n> This part of the implementation does not actually change the code,\n> because the DML on GTT does not block each other between sessions.\n>\n> As a side note, since the same row of GTT data can not modified by\n> different sessions,\n> So, I don't see the need to care the GTT's PG_class.relminmxID.\n> What do you think?\n>\n\nyes, probably it is not necessary\n\nRegards\n\nPavel\n\n>\n>\n> Wenjing\n>\n>\n>\n> 2 For truncate/analyze/vacuum reinidex cluster GTT is now like DML,\n> they only modify local data and do not modify the GTT definition.\n> So I lowered the table lock level held by the GTT, only need\n> RowExclusiveLock.\n>\n> 3 For DDLs that need to be modified the GTT table definition(Drop\n> GTT Alter GTT),\n> an exclusive level of table locking is required(AccessExclusiveLock),\n> as is the case for regular table.\n> This part of the implementation also does not actually change the code.\n>\n> Summary: What I have done is to adjust the GTT lock levels in different\n> types of statements based on the above thinking.\n> For example, truncate GTT, I'm reducing the GTT holding table lock level\n> to RowExclusiveLock,\n> So We can truncate data in the same GTT between different sessions at the\n> same time.\n>\n> What do you think 
about table locks on GTT?\n>\n>\n> Wenjing\n>\n>\n>\n> Now, table locks are implemented on a global level. So exclusive lock on\n> GTT in one session block insertion on the second session. Is it expected\n> behaviour? It is safe, but maybe it is too strict.\n>\n> We should define what table lock is meaning on GTT.\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Pavel\n>>\n>>\n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>>\n>>>\n>>>\n>>\n>\n>\n\npo 13. 7. 2020 v 13:59 odesílatel wenjing zeng <wjzeng2012@gmail.com> napsal:2020年7月10日 下午5:03,wenjing zeng <wjzeng2012@gmail.com> 写道:HI allI started using my personal email to respond to community issue.2020年7月7日 下午6:05,Pavel Stehule <pavel.stehule@gmail.com> 写道:Hi GTT Merge the latest PGMaster and resolves conflicts.I tested it and it looks fine. I think it is very usable in current form, but still there are some issues:postgres=# create global temp table foo(a int);CREATE TABLEpostgres=# insert into foo values(10);INSERT 0 1postgres=# alter table foo add column x int;ALTER TABLEpostgres=# analyze foo;WARNING:  reloid 16400 not support update attstat after add colunmWARNING:  reloid 16400 not support update attstat after add colunmANALYZEThis is a limitation that we can completely eliminate.Please, can you summarize what is done, what limits are there, what can be implemented hard, what can be implemented easily?Sure.The current version of the GTT implementation supports all regular table operations.1 what is done1.1 insert/update/delete on GTT.1.2 The GTT supports all types of indexes, and the query statement supports the use of GTT indexes to speed up the reading of data in the GTT.1.3 GTT statistics keep a copy of THE GTT local statistics, which are provided to the optimizer to choose the best query plan.1.4 analyze vacuum GTT.1.5 truncate cluster GTT.1.6 all DDL on GTT.1.7 GTT table can use  GTT sequence  or Regular sequence.1.8 Support for creating views on GTT.1.9 Support for 
We should define what table lock is meaning on GTT.RegardsPavel PavelWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Jul 2020 16:11:36 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "pá 10. 7. 2020 v 11:04 odesílatel wenjing zeng <wjzeng2012@gmail.com>\nnapsal:\n\n> HI all\n>\n> I started using my personal email to respond to community issue.\n>\n>\n>\n> 2020年7月7日 下午6:05,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n> Hi\n>\n>\n>> GTT Merge the latest PGMaster and resolves conflicts.\n>>\n>>\n>>\n> I tested it and it looks fine. I think it is very usable in current form,\n> but still there are some issues:\n>\n> postgres=# create global temp table foo(a int);\n> CREATE TABLE\n> postgres=# insert into foo values(10);\n> INSERT 0 1\n> postgres=# alter table foo add column x int;\n> ALTER TABLE\n> postgres=# analyze foo;\n> WARNING: reloid 16400 not support update attstat after add colunm\n> WARNING: reloid 16400 not support update attstat after add colunm\n> ANALYZE\n>\n> This is a limitation that we can completely eliminate.\n>\n>\n> Please, can you summarize what is done, what limits are there, what can be\n> implemented hard, what can be implemented easily?\n>\n> Sure.\n>\n> The current version of the GTT implementation supports all regular table\n> operations.\n> 1 what is done\n> 1.1 insert/update/delete on GTT.\n> 1.2 The GTT supports all types of indexes, and the query statement\n> supports the use of GTT indexes to speed up the reading of data in the GTT.\n> 1.3 GTT statistics keep a copy of THE GTT local statistics, which are\n> provided to the optimizer to choose the best query plan.\n> 1.4 analyze vacuum GTT.\n> 1.5 truncate cluster GTT.\n> 1.6 all DDL on GTT.\n> 1.7 GTT table can use GTT sequence or Regular sequence.\n> 1.8 Support for creating views on GTT.\n> 1.9 Support for creating views 
on foreign key.\n> 1.10 support global temp partition.\n>\n> I feel like I cover all the necessary GTT requirements.\n>\n> For cluster GTT,I think it's complicated.\n> I'm not sure the current implementation is quite reasonable. Maybe you can\n> help review it.\n>\n>\n>\n>\n>\n> I found one open question - how can be implemented table locks - because\n> data is physically separated, then we don't need table locks as protection\n> against race conditions.\n>\n> Yes, but GTT’s DML DDL still requires table locking.\n> 1 The DML requires table locks (RowExclusiveLock) to ensure that\n> definitions do not change during run time (the DDL may modify or delete\n> them).\n> This part of the implementation does not actually change the code,\n> because the DML on GTT does not block each other between sessions.\n>\n> 2 For truncate/analyze/vacuum reinidex cluster GTT is now like DML,\n> they only modify local data and do not modify the GTT definition.\n> So I lowered the table lock level held by the GTT, only need\n> RowExclusiveLock.\n>\n> 3 For DDLs that need to be modified the GTT table definition(Drop\n> GTT Alter GTT),\n> an exclusive level of table locking is required(AccessExclusiveLock),\n> as is the case for regular table.\n> This part of the implementation also does not actually change the code.\n>\n> Summary: What I have done is to adjust the GTT lock levels in different\n> types of statements based on the above thinking.\n> For example, truncate GTT, I'm reducing the GTT holding table lock level\n> to RowExclusiveLock,\n> So We can truncate data in the same GTT between different sessions at the\n> same time.\n>\n> What do you think about table locks on GTT?\n>\n\nI am thinking about explicit LOCK statements. Some applications use\nexplicit locking from some reasons - typically as protection against race\nconditions.\n\nBut on GTT race conditions are not possible. 
So my question is - does the\nexclusive lock on GTT protection other sessions do insert into their own\ninstances of the same GTT?\n\nWhat is a level where table locks are active? shared part of GTT or session\ninstance part of GTT?\n\n\n\n\n>\n> Wenjing\n>\n>\n>\n> Now, table locks are implemented on a global level. So exclusive lock on\n> GTT in one session block insertion on the second session. Is it expected\n> behaviour? It is safe, but maybe it is too strict.\n>\n> We should define what table lock is meaning on GTT.\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Pavel\n>>\n>>\n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>>\n>>>\n>>>\n>>\n>\n\npá 10. 7. 2020 v 11:04 odesílatel wenjing zeng <wjzeng2012@gmail.com> napsal:HI allI started using my personal email to respond to community issue.2020年7月7日 下午6:05,Pavel Stehule <pavel.stehule@gmail.com> 写道:Hi GTT Merge the latest PGMaster and resolves conflicts.I tested it and it looks fine. I think it is very usable in current form, but still there are some issues:postgres=# create global temp table foo(a int);CREATE TABLEpostgres=# insert into foo values(10);INSERT 0 1postgres=# alter table foo add column x int;ALTER TABLEpostgres=# analyze foo;WARNING:  reloid 16400 not support update attstat after add colunmWARNING:  reloid 16400 not support update attstat after add colunmANALYZEThis is a limitation that we can completely eliminate.Please, can you summarize what is done, what limits are there, what can be implemented hard, what can be implemented easily?Sure.The current version of the GTT implementation supports all regular table operations.1 what is done1.1 insert/update/delete on GTT.1.2 The GTT supports all types of indexes, and the query statement supports the use of GTT indexes to speed up the reading of data in the GTT.1.3 GTT statistics keep a copy of THE GTT local statistics, which are provided to the optimizer to choose the best query plan.1.4 analyze vacuum GTT.1.5 
So my question is - does the exclusive lock on GTT  protection other sessions do insert into their own instances of the same GTT?What is a level where table locks are active? shared part of GTT or session instance part of GTT?WenjingNow, table locks are implemented on a global level. So exclusive lock on GTT in one session block insertion on the second session. Is it expected behaviour? It is safe, but maybe it is too strict. We should define what table lock is meaning on GTT.RegardsPavel PavelWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Jul 2020 16:28:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年7月14日 下午10:28,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> pá 10. 7. 2020 v 11:04 odesílatel wenjing zeng <wjzeng2012@gmail.com <mailto:wjzeng2012@gmail.com>> napsal:\n> HI all\n> \n> I started using my personal email to respond to community issue.\n> \n> \n> \n>> 2020年7月7日 下午6:05,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> Hi\n>> \n>> GTT Merge the latest PGMaster and resolves conflicts.\n>> \n>> \n>> \n>> I tested it and it looks fine. 
I think it is very usable in current form, but still there are some issues:\n>> \n>> postgres=# create global temp table foo(a int);\n>> CREATE TABLE\n>> postgres=# insert into foo values(10);\n>> INSERT 0 1\n>> postgres=# alter table foo add column x int;\n>> ALTER TABLE\n>> postgres=# analyze foo;\n>> WARNING: reloid 16400 not support update attstat after add colunm\n>> WARNING: reloid 16400 not support update attstat after add colunm\n>> ANALYZE\n> This is a limitation that we can completely eliminate.\n> \n>> \n>> Please, can you summarize what is done, what limits are there, what can be implemented hard, what can be implemented easily?\n> Sure.\n> \n> The current version of the GTT implementation supports all regular table operations.\n> 1 what is done\n> 1.1 insert/update/delete on GTT.\n> 1.2 The GTT supports all types of indexes, and the query statement supports the use of GTT indexes to speed up the reading of data in the GTT.\n> 1.3 GTT statistics keep a copy of THE GTT local statistics, which are provided to the optimizer to choose the best query plan.\n> 1.4 analyze vacuum GTT.\n> 1.5 truncate cluster GTT.\n> 1.6 all DDL on GTT.\n> 1.7 GTT table can use GTT sequence or Regular sequence.\n> 1.8 Support for creating views on GTT.\n> 1.9 Support for creating views on foreign key.\n> 1.10 support global temp partition.\n> \n> I feel like I cover all the necessary GTT requirements.\n> \n> For cluster GTT,I think it's complicated.\n> I'm not sure the current implementation is quite reasonable. Maybe you can help review it.\n> \n> \n>> \n>> \n>> \n>> I found one open question - how can be implemented table locks - because data is physically separated, then we don't need table locks as protection against race conditions. 
\n> Yes, but GTT’s DML DDL still requires table locking.\n> 1 The DML requires table locks (RowExclusiveLock) to ensure that \n> definitions do not change during run time (the DDL may modify or delete them).\n> This part of the implementation does not actually change the code,\n> because the DML on GTT does not block each other between sessions.\n> \n> 2 For truncate/analyze/vacuum reinidex cluster GTT is now like DML, \n> they only modify local data and do not modify the GTT definition.\n> So I lowered the table lock level held by the GTT, only need RowExclusiveLock.\n> \n> 3 For DDLs that need to be modified the GTT table definition(Drop GTT Alter GTT), \n> an exclusive level of table locking is required(AccessExclusiveLock), \n> as is the case for regular table.\n> This part of the implementation also does not actually change the code.\n> \n> Summary: What I have done is to adjust the GTT lock levels in different types of statements based on the above thinking.\n> For example, truncate GTT, I'm reducing the GTT holding table lock level to RowExclusiveLock,\n> So We can truncate data in the same GTT between different sessions at the same time.\n> \n> What do you think about table locks on GTT?\n> \n> I am thinking about explicit LOCK statements. Some applications use explicit locking from some reasons - typically as protection against race conditions. \n> \n> But on GTT race conditions are not possible. So my question is - does the exclusive lock on GTT protection other sessions do insert into their own instances of the same GTT?\nIn my opinion, with a GTT, always work on the private data of the session, \nthere is no need to do anything by holding the lock, so the lock statement should do nothing (The same is true for ORACLE GTT)\n\nWhat do you think?\n\n> \n> What is a level where table locks are active? 
shared part of GTT or session instance part of GTT?\nI don't quite understand what you mean, could you explain it a little bit?\n\n\n\nWenjing\n\n\n\n> \n> \n> \n> \n> \n> Wenjing\n> \n> \n>> \n>> Now, table locks are implemented on a global level. So exclusive lock on GTT in one session block insertion on the second session. Is it expected behaviour? It is safe, but maybe it is too strict. \n>> \n>> We should define what table lock is meaning on GTT.\n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> Pavel\n>> \n>> \n>>> With Regards,\n>>> Prabhat Kumar Sahu\n>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> \n>> \n> \n\n\n2020年7月14日 下午10:28,Pavel Stehule <pavel.stehule@gmail.com> 写道:pá 10. 7. 2020 v 11:04 odesílatel wenjing zeng <wjzeng2012@gmail.com> napsal:HI allI started using my personal email to respond to community issue.2020年7月7日 下午6:05,Pavel Stehule <pavel.stehule@gmail.com> 写道:Hi GTT Merge the latest PGMaster and resolves conflicts.I tested it and it looks fine. 
Is it expected behaviour? It is safe, but maybe it is too strict. We should define what table lock is meaning on GTT.RegardsPavel PavelWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 22 Jul 2020 19:53:25 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> I am thinking about explicit LOCK statements. Some applications use\n> explicit locking from some reasons - typically as protection against race\n> conditions.\n>\n> But on GTT race conditions are not possible. So my question is - does the\n> exclusive lock on GTT protection other sessions do insert into their own\n> instances of the same GTT?\n>\n> In my opinion, with a GTT, always work on the private data of the session,\n> there is no need to do anything by holding the lock, so the lock statement\n> should do nothing (The same is true for ORACLE GTT)\n>\n> What do you think?\n>\n>\n> What is a level where table locks are active? shared part of GTT or\n> session instance part of GTT?\n>\n> I don't quite understand what you mean, could you explain it a little bit?\n>\n\nIt is about perspective, how we should see GTT tables. GTT table is a mix\nof two concepts - session private (data), and session shared (catalog). And\nhypothetically we can place locks to the private part (no effect) or shared\npart (usual effect how we know it). But can has sense, and both have an\nadvantage and disadvantage. I afraid little bit about behaviour of stupid\nORM systems - but the most important part of table are data - and then I\nprefer empty lock implementation for GTT.\n\nRegards\n\nPavel\n\n\n\n>\n>\n> Wenjing\n>\n>\n>\n>\n>\n>\n>\n>>\n>> Wenjing\n>>\n>>\n>>\n>> Now, table locks are implemented on a global level. So exclusive lock on\n>> GTT in one session block insertion on the second session. Is it expected\n>> behaviour? 
It is safe, but maybe it is too strict.\n>>\n>> We should define what table lock is meaning on GTT.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>> Pavel\n>>>\n>>>\n>>>> With Regards,\n>>>> Prabhat Kumar Sahu\n>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>>\n>>>>\n>>>>\n>>>\n>>\n>\n\nI am thinking about explicit LOCK statements. Some applications use explicit locking from some reasons - typically as protection against race conditions. But on GTT race conditions are not possible. So my question is - does the exclusive lock on GTT  protection other sessions do insert into their own instances of the same GTT?In my opinion, with a GTT, always work on the private data of the session, there is no need to do anything by holding the lock, so the lock statement should do nothing (The same is true for ORACLE GTT)What do you think?What is a level where table locks are active? shared part of GTT or session instance part of GTT?I don't quite understand what you mean, could you explain it a little bit?It is about perspective, how we should see GTT tables. GTT table is a mix of two concepts - session private (data), and session shared (catalog). And hypothetically we can place locks to the private part (no effect) or shared part (usual effect how we know it). But can has sense, and both have an advantage and disadvantage. I afraid little bit about behaviour of stupid ORM systems - but the most important part of table are data - and then I prefer empty lock implementation for GTT.RegardsPavel WenjingWenjingNow, table locks are implemented on a global level. So exclusive lock on GTT in one session block insertion on the second session. Is it expected behaviour? It is safe, but maybe it is too strict. 
We should define what table lock is meaning on GTT.RegardsPavel PavelWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 23 Jul 2020 08:54:25 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年7月23日 下午2:54,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n>> I am thinking about explicit LOCK statements. Some applications use explicit locking from some reasons - typically as protection against race conditions. \n>> \n>> But on GTT race conditions are not possible. So my question is - does the exclusive lock on GTT protection other sessions do insert into their own instances of the same GTT?\n> In my opinion, with a GTT, always work on the private data of the session, \n> there is no need to do anything by holding the lock, so the lock statement should do nothing (The same is true for ORACLE GTT)\n> \n> What do you think?\n> \n>> \n>> What is a level where table locks are active? shared part of GTT or session instance part of GTT?\n> I don't quite understand what you mean, could you explain it a little bit?\n> \n> It is about perspective, how we should see GTT tables. GTT table is a mix of two concepts - session private (data), and session shared (catalog). And hypothetically we can place locks to the private part (no effect) or shared part (usual effect how we know it). But can has sense, and both have an advantage and disadvantage. I afraid little bit about behaviour of stupid ORM systems - but the most important part of table are data - and then I prefer empty lock implementation for GTT.\nThis is empty lock implementation for GTT.\n\nPlease continue to review the code.\n\nThanks\n\n\nWenjing\n\n\n> \n> Regards\n> \n> Pavel\n> \n> \n> \n> \n> \n> Wenjing\n> \n> \n> \n>> \n>> \n>> \n>> \n>> \n>> Wenjing\n>> \n>> \n>>> \n>>> Now, table locks are implemented on a global level. 
So exclusive lock on GTT in one session block insertion on the second session. Is it expected behaviour? It is safe, but maybe it is too strict. \n>>> \n>>> We should define what table lock is meaning on GTT.\n>>> \n>>> Regards\n>>> \n>>> Pavel\n>>> \n>>> Pavel\n>>> \n>>> \n>>>> With Regards,\n>>>> Prabhat Kumar Sahu\n>>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>>> \n>>> \n>> \n>", "msg_date": "Thu, 30 Jul 2020 20:09:36 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng <wjzeng2012@gmail.com> wrote:\n> Please continue to review the code.\n\nThis patch is pretty light on comments. Many of the new functions have\nno header comments, for example. There are comments here and there in\nthe body of the new functions that are added, and in places where\nexisting code is changed there are comments here and there, but\noverall it's not a whole lot. There's no documentation and no README,\neither. Since this adds a new feature and a bunch of new SQL-callable\nfunctions that interact with that feature, the feature itself should\nbe documented, along with its limitations and the new SQL-callable\nfunctions that interact with it. I think there should be either a\nlengthy comment in some suitable file, or maybe various comments in\nvarious files, or else a README file, that clearly sets out the major\ndesign principles behind the patch, and explaining also what that\nmeans in terms of features and limitations. Without that, it's really\nhard for anyone to jump into reviewing this code, and it will be hard\nfor people who have to maintain it in the future to understand it,\neither. 
Or for users, for that matter.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 30 Jul 2020 16:57:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": ">Fixed in global_temporary_table_v29-pg13.patch\r\n>Please check.\r\n\r\nI find this is the most latest mail with an attachment, so I test and reply on\r\nthis thread, several points as below:\r\n\r\n1. I notice it produces new relfilenode when new session login and some\r\ndata insert. But the relfilenode column in pg_class still the one when create\r\nthe global temp table. I think you can try to show 0 in this area as what nail\r\nrelation does. \r\n\r\n2. The nail relations handle their relfilenodes by RelMapFile struct, and this\r\npatch use hash entry and relfilenode_list, maybe RelMapFile approach more\r\nunderstandable in my opinion. Sorry if I miss the real design for that.\r\n\r\n3. I get a wrong result of pg_relation_filepath() function for global temp table,\r\nI think it's necessaryto keep this an correct output.\r\n\r\n4. In gtt_search_by_relid() function, it has not handle the missing_ok argument\r\nif gtt_storage_local_hash is null. There should be some comments if it's the right\r\ncode.\r\n\r\n5. It's a long patch and hard to review, I think it will pretty good if it can be\r\ndivided into several subpatches with relatively independent subfunctions.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>Fixed in global_temporary_table_v29-pg13.patch>Please check.I find this is the most latest mail with an attachment, so I test and reply onthis thread, several points as below:1. I notice it produces new relfilenode when new session login and somedata insert. 
But the relfilenode column in pg_class still the one when createthe global temp table. I think you can try to show 0 in this area as what nailrelation does. 2. The nail relations handle their relfilenodes by RelMapFile struct, and thispatch use hash entry and relfilenode_list, maybe RelMapFile approach moreunderstandable in my opinion. Sorry if I miss the real design for that.3. I get a wrong result of pg_relation_filepath() function for global temp table,I think it's necessaryto keep this an correct output.4. In gtt_search_by_relid() function, it has not handle the missing_ok argumentif gtt_storage_local_hash is null. There should be some comments if it's the rightcode.5. It's a long patch and hard to review, I think it will pretty good if it can bedivided into several subpatches with relatively independent subfunctions.\n\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Mon, 3 Aug 2020 15:09:28 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Thank you very much for reviewing this patch.\nThis is very important to improve the GTT.\n\n> 2020年8月3日 下午3:09,movead.li@highgo.ca 写道:\n> \n> \n> >Fixed in global_temporary_table_v29-pg13.patch\n> >Please check.\n> \n> I find this is the most latest mail with an attachment, so I test and reply on\n> this thread, several points as below:\n> \n> 1. I notice it produces new relfilenode when new session login and some\n> data insert. But the relfilenode column in pg_class still the one when create\n> the global temp table. I think you can try to show 0 in this area as what nail\n> relation does. 
\nI think getting the GTT to have a default relfilenode looks closer to the existing implementation, and setting it to 0 requires extra work and has no clear benefit.\nWhat do you think?\nI'd like to know the reasons for your suggestion.\n\n> \n> 2. The nail relations handle their relfilenodes by RelMapFile struct, and this\n> patch use hash entry and relfilenode_list, maybe RelMapFile approach more\n> understandable in my opinion. Sorry if I miss the real design for that.\nWe can see the STORAGE and statistics info for the GTT, including relfilenode, through view pg_gtt_relstats\n\npostgres=# \\d gtt\n Table \"public.gtt\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+---------\n a | integer | | | \n b | integer | | | \n\npostgres=# insert into gtt values(1,1);\nINSERT 0 1\npostgres=# select * from pg_gtt_relstats ;\n schemaname | tablename | relfilenode | relpages | reltuples | relallvisible | relfrozenxid | relminmxid \n------------+-----------+-------------+----------+-----------+---------------+--------------+------------\n public | gtt | 16384 | 0 | 0 | 0 | 532 | 1\n(1 row)\n\npostgres=# truncate gtt;\nTRUNCATE TABLE\npostgres=# select * from pg_gtt_relstats ;\n schemaname | tablename | relfilenode | relpages | reltuples | relallvisible | relfrozenxid | relminmxid \n------------+-----------+-------------+----------+-----------+---------------+--------------+------------\n public | gtt | 16387 | 0 | 0 | 0 | 533 | 1\n(1 row)\n\n> \n> 3. I get a wrong result of pg_relation_filepath() function for global temp table,\n> I think it's necessaryto keep this an correct output.\n\npostgres=# select pg_relation_filepath(oid) from pg_class where relname = 'gtt';\n pg_relation_filepath \n----------------------\n base/13835/t3_16384\n(1 row)\n\nI didn't find anything wrong. Could you please give me a demo.\n\n> \n> 4. 
In gtt_search_by_relid() function, it has not handle the missing_ok argument\n> if gtt_storage_local_hash is null. There should be some comments if it's the right\n> code.\nThis is a problem that has been fixed in global_temporary_table_v34-pg13.patch.\n\n> \n> 5. It's a long patch and hard to review, I think it will pretty good if it can be\n> divided into several subpatches with relatively independent subfunctions.\nThank you for your suggestion, and I am considering doing so, including adding comments.\n\n\nWenjing\n\n> \n> Regards,\n> Highgo Software (Canada/China/Pakistan) \n> URL : www.highgo.ca <http://www.highgo.ca/> \n> EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 7 Aug 2020 16:17:22 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年7月31日 上午4:57,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng <wjzeng2012@gmail.com> wrote:\n>> Please continue to review the code.\n> \n> This patch is pretty light on comments. Many of the new functions have\n> no header comments, for example. There are comments here and there in\n> the body of the new functions that are added, and in places where\n> existing code is changed there are comments here and there, but\n> overall it's not a whole lot. There's no documentation and no README,\n> either. Since this adds a new feature and a bunch of new SQL-callable\n> functions that interact with that feature, the feature itself should\n> be documented, along with its limitations and the new SQL-callable\n> functions that interact with it. I think there should be either a\n> lengthy comment in some suitable file, or maybe various comments in\n> various files, or else a README file, that clearly sets out the major\n> design principles behind the patch, and explaining also what that\n> means in terms of features and limitations. 
Without that, it's really\n> hard for anyone to jump into reviewing this code, and it will be hard\n> for people who have to maintain it in the future to understand it,\n> either. Or for users, for that matter.\nYour suggestion is to the point. I do lack a lot of comments, as is necessary.\nI'll do this.\n\n\nWenjing\n\n\n\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company", "msg_date": "Fri, 7 Aug 2020 16:26:04 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": ">I find this is the most latest mail with an attachment, so I test and reply on\r\n>this thread, several points as below:\r\n\r\n>1. I notice it produces new relfilenode when new session login and some\r\n>data insert. But the relfilenode column in pg_class still the one when create\r\n>the global temp table. I think you can try to show 0 in this area as what nail\r\n>relation does. \r\n>I think getting the GTT to have a default relfilenode looks closer to the existing implementation, and setting it to 0 requires extra work and has no clear benefit.\r\n>What do you think?\r\n>I'd like to know the reasons for your suggestion.\r\nThe 'relfilenode' mean the file no on disk which different from oid of a relation,\r\n the default one is a wrong for gtt, so I think it's not so good to show it in \r\npg_class.\r\n\r\n>2. The nail relations handle their relfilenodes by RelMapFile struct, and this\r\n>patch use hash entry and relfilenode_list, maybe RelMapFile approach more\r\n>understandable in my opinion. 
Sorry if I miss the real design for that.\r\n>We can see the STORAGE and statistics info for the GTT, including relfilenode, through view pg_gtt_relstats\r\n\r\npostgres=# \\d gtt\r\n Table \"public.gtt\"\r\n Column | Type | Collation | Nullable | Default \r\n--------+---------+-----------+----------+---------\r\n a | integer | | | \r\n b | integer | | | \r\n\r\npostgres=# insert into gtt values(1,1);\r\nINSERT 0 1\r\npostgres=# select * from pg_gtt_relstats ;\r\n schemaname | tablename | relfilenode | relpages | reltuples | relallvisible | relfrozenxid | relminmxid \r\n------------+-----------+-------------+----------+-----------+---------------+--------------+------------\r\n public | gtt | 16384 | 0 | 0 | 0 | 532 | 1\r\n(1 row)\r\n\r\npostgres=# truncate gtt;\r\nTRUNCATE TABLE\r\npostgres=# select * from pg_gtt_relstats ;\r\n schemaname | tablename | relfilenode | relpages | reltuples | relallvisible | relfrozenxid | relminmxid \r\n------------+-----------+-------------+----------+-----------+---------------+--------------+------------\r\n public | gtt | 16387 | 0 | 0 | 0 | 533 | 1\r\n(1 row)\r\n\r\n\r\nI just suggest a way which maybe most naturely to the exist code struct, and it's\r\nuo to you.\r\n\r\n\r\n>3. I get a wrong result of pg_relation_filepath() function for global temp table,\r\n>I think it's necessaryto keep this an correct output.\r\n\r\npostgres=# select pg_relation_filepath(oid) from pg_class where relname = 'gtt';\r\n pg_relation_filepath \r\n----------------------\r\n base/13835/t3_16384\r\n(1 row)\r\n\r\nI didn't find anything wrong. Could you please give me a demo.\r\n\r\nIn my opinoin it should show 'base/13835/t3_16387', other than 'base/13835/t3_16384',\r\nbecause the relfilenode change to 16387 when you truncate it in step 2.\r\n\r\n>4. In gtt_search_by_relid() function, it has not handle the missing_ok argument\r\n>if gtt_storage_local_hash is null. 
There should be some comments if it's the right\r\n>code.\r\n>This is a problem that has been fixed in global_temporary_table_v34-pg13.patch.\r\nSorry about it, I can not find it in mail thread and maybe I miss something. The mail thread\r\nis so long, it's better to create a new mail thread I think.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>I find this is the most latest mail with an attachment, so I test and reply on>this thread, several points as below:>1. I notice it produces new relfilenode when new session login and some>data insert. But the relfilenode column in pg_class still the one when create>the global temp table. I think you can try to show 0 in this area as what nail>relation does. >I think getting the GTT to have a default relfilenode looks closer to the existing implementation, and setting it to 0 requires extra work and has no clear benefit.>What do you think?>I'd like to know the reasons for your suggestion.The 'relfilenode' mean the file no on disk which different from oid of a relation, the default one is a wrong for gtt, so I think it's not so good to show it in pg_class.>2. The nail relations handle their relfilenodes by RelMapFile struct, and this>patch use hash entry and relfilenode_list, maybe RelMapFile approach more>understandable in my opinion. 
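Points 1-4 of this exchange revolve around the session-local storage map: pg_class keeps the relfilenode assigned at CREATE, while each session's live relfilenode (the one pg_gtt_relstats reports, and the one that moves on TRUNCATE) lives in gtt_storage_local_hash, and gtt_search_by_relid() must honour missing_ok even before that hash exists. A hypothetical C sketch of that contract — the names and starting number follow the examples in this thread, not the actual patch code:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int Oid;

/* One entry of a hypothetical session-local GTT storage map. */
typedef struct GttEntry
{
    Oid relid;
    Oid relfilenode;            /* current session-private storage file */
    struct GttEntry *next;
} GttEntry;

/* NULL until the session first touches a GTT (the point-4 case). */
static GttEntry *gtt_storage_local_hash = NULL;
static Oid next_filenode = 16384;   /* number follows the thread's example */

static GttEntry *
gtt_search_by_relid(Oid relid, int missing_ok)
{
    GttEntry *e;

    /* Honour missing_ok even when the hash was never created. */
    if (gtt_storage_local_hash == NULL)
    {
        assert(missing_ok);     /* real code would elog(ERROR) instead */
        return NULL;
    }
    for (e = gtt_storage_local_hash; e != NULL; e = e->next)
        if (e->relid == relid)
            return e;
    assert(missing_ok);         /* not found: only legal with missing_ok */
    return NULL;
}

/* First access in a session allocates private storage ... */
static GttEntry *
gtt_attach(Oid relid)
{
    static GttEntry slot;       /* one slot is enough for a sketch */
    slot.relid = relid;
    slot.relfilenode = next_filenode++;
    slot.next = NULL;
    gtt_storage_local_hash = &slot;
    return &slot;
}

/* ... and TRUNCATE swaps in a fresh relfilenode, as pg_gtt_relstats shows,
 * while the relfilenode column in pg_class stays at its CREATE-time value. */
static void
gtt_truncate(Oid relid)
{
    GttEntry *e = gtt_search_by_relid(relid, 0);
    e->relfilenode = next_filenode++;
}
```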
There should be some comments if it's the right>code.>This is a problem that has been fixed in global_temporary_table_v34-pg13.patch.Sorry about it, I can not find it in mail thread and maybe I miss something. The mail threadis so long, it's better to create a new mail thread I think.\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 7 Aug 2020 17:30:24 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年8月7日 下午5:30,movead.li@highgo.ca 写道:\n> \n>> \n>> >I find this is the most latest mail with an attachment, so I test and reply on\n>> >this thread, several points as below:\n>> \n>> >1. I notice it produces new relfilenode when new session login and some\n>> >data insert. But the relfilenode column in pg_class still the one when create\n>> >the global temp table. I think you can try to show 0 in this area as what nail\n>> >relation does. \n> >I think getting the GTT to have a default relfilenode looks closer to the existing implementation, and setting it to 0 requires extra work and has no clear benefit.\n> >What do you think?\n> >I'd like to know the reasons for your suggestion.\n> The 'relfilenode' mean the file no on disk which different from oid of a relation,\n> the default one is a wrong for gtt, so I think it's not so good to show it in \n> pg_class.\n>> \n>> >2. The nail relations handle their relfilenodes by RelMapFile struct, and this\n>> >patch use hash entry and relfilenode_list, maybe RelMapFile approach more\n>> >understandable in my opinion. 
Sorry if I miss the real design for that.\n> >We can see the STORAGE and statistics info for the GTT, including relfilenode, through view pg_gtt_relstats\n> \n> postgres=# \\d gtt\n> Table \"public.gtt\"\n> Column | Type | Collation | Nullable | Default \n> --------+---------+-----------+----------+---------\n> a | integer | | | \n> b | integer | | | \n> \n> postgres=# insert into gtt values(1,1);\n> INSERT 0 1\n> postgres=# select * from pg_gtt_relstats ;\n> schemaname | tablename | relfilenode | relpages | reltuples | relallvisible | relfrozenxid | relminmxid \n> ------------+-----------+-------------+----------+-----------+---------------+--------------+------------\n> public | gtt | 16384 | 0 | 0 | 0 | 532 | 1\n> (1 row)\n> \n> postgres=# truncate gtt;\n> TRUNCATE TABLE\n> postgres=# select * from pg_gtt_relstats ;\n> schemaname | tablename | relfilenode | relpages | reltuples | relallvisible | relfrozenxid | relminmxid \n> ------------+-----------+-------------+----------+-----------+---------------+--------------+------------\n> public | gtt | 16387 | 0 | 0 | 0 | 533 | 1\n> (1 row)\n> \n>> \n>> I just suggest a way which maybe most naturely to the exist code struct, and it's\n>> uo to you.\n>> \n>> \n>> >3. I get a wrong result of pg_relation_filepath() function for global temp table,\n>> >I think it's necessaryto keep this an correct output.\n> \n> postgres=# select pg_relation_filepath(oid) from pg_class where relname = 'gtt';\n> pg_relation_filepath \n> ----------------------\n> base/13835/t3_16384\n> (1 row)\n> \n> I didn't find anything wrong. Could you please give me a demo.\n> \n>> In my opinoin it should show 'base/13835/t3_16387', other than 'base/13835/t3_16384',\n>> because the relfilenode change to 16387 when you truncate it in step 2.\n>> \n>> >4. In gtt_search_by_relid() function, it has not handle the missing_ok argument\n>> >if gtt_storage_local_hash is null. 
There should be some comments if it's the right\n>> >code.\n> >This is a problem that has been fixed in global_temporary_table_v34-pg13.patch.\n> Sorry about it, I can not find it in mail thread and maybe I miss something. The mail thread\n> is so long, it's better to create a new mail thread I think.\n\nThe latest status is tracked here\nhttps://commitfest.postgresql.org/28/2349/ <https://commitfest.postgresql.org/28/2349/>\n\nThe latest patch is V35. I don't know why the patches in some of my emails are indexed, but some of them are not.\n\n\n\nWenjing\n\n\n\n\n> \n> Regards,\n> Highgo Software (Canada/China/Pakistan) \n> URL : www.highgo.ca <http://www.highgo.ca/> \n> EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 12 Aug 2020 17:51:41 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "I have written the README for the GTT, which contains the GTT requirements and design.\nI found that compared to my first email a year ago, many GTT Limitations are now gone.\nNow, I'm adding comments to some of the necessary functions.\n\n\nWenjing\n\n\n\n\n\n\n\n> 2020年7月31日 上午4:57,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng <wjzeng2012@gmail.com> wrote:\n>> Please continue to review the code.\n> \n> This patch is pretty light on comments. Many of the new functions have\n> no header comments, for example. There are comments here and there in\n> the body of the new functions that are added, and in places where\n> existing code is changed there are comments here and there, but\n> overall it's not a whole lot. There's no documentation and no README,\n> either. 
Since this adds a new feature and a bunch of new SQL-callable\n> functions that interact with that feature, the feature itself should\n> be documented, along with its limitations and the new SQL-callable\n> functions that interact with it. I think there should be either a\n> lengthy comment in some suitable file, or maybe various comments in\n> various files, or else a README file, that clearly sets out the major\n> design principles behind the patch, and explaining also what that\n> means in terms of features and limitations. Without that, it's really\n> hard for anyone to jump into reviewing this code, and it will be hard\n> for people who have to maintain it in the future to understand it,\n> either. Or for users, for that matter.\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company", "msg_date": "Fri, 11 Sep 2020 23:00:12 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi\n\npá 11. 9. 2020 v 17:00 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:\n\n> I have written the README for the GTT, which contains the GTT requirements\n> and design.\n> I found that compared to my first email a year ago, many GTT Limitations\n> are now gone.\n> Now, I'm adding comments to some of the necessary functions.\n>\n\nThere are problems with patching. Please, can you rebase your patch?\n\nRegards\n\nPavel\n\n\n>\n> Wenjing\n>\n>\n>\n>\n>\n> > 2020年7月31日 上午4:57,Robert Haas <robertmhaas@gmail.com> 写道:\n> >\n> > On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng <wjzeng2012@gmail.com>\n> wrote:\n> >> Please continue to review the code.\n> >\n> > This patch is pretty light on comments. Many of the new functions have\n> > no header comments, for example. 
There are comments here and there in\n> > the body of the new functions that are added, and in places where\n> > existing code is changed there are comments here and there, but\n> > overall it's not a whole lot. There's no documentation and no README,\n> > either. Since this adds a new feature and a bunch of new SQL-callable\n> > functions that interact with that feature, the feature itself should\n> > be documented, along with its limitations and the new SQL-callable\n> > functions that interact with it. I think there should be either a\n> > lengthy comment in some suitable file, or maybe various comments in\n> > various files, or else a README file, that clearly sets out the major\n> > design principles behind the patch, and explaining also what that\n> > means in terms of features and limitations. Without that, it's really\n> > hard for anyone to jump into reviewing this code, and it will be hard\n> > for people who have to maintain it in the future to understand it,\n> > either. Or for users, for that matter.\n> >\n> > --\n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com\n> > The Enterprise PostgreSQL Company\n>\n>", "msg_date": "Fri, 20 Nov 2020 19:28:58 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年11月21日 02:28,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> Hi\n> \n> pá 11. 9. 2020 v 17:00 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> I have written the README for the GTT, which contains the GTT requirements and design.\n> I found that compared to my first email a year ago, many GTT Limitations are now gone.\n> Now, I'm adding comments to some of the necessary functions.\n> \n> There are problems with patching. 
Please, can you rebase your patch?\nSure.\nI'm still working on sort code and comments.\nIf you have any suggestions, please let me know.\n\n\nWenjing\n\n\n> \n> Regards\n> \n> Pavel\n> \n> \n> \n> Wenjing\n> \n> \n> \n> \n> \n> > 2020年7月31日 上午4:57,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n> > \n> > On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng <wjzeng2012@gmail.com <mailto:wjzeng2012@gmail.com>> wrote:\n> >> Please continue to review the code.\n> > \n> > This patch is pretty light on comments. Many of the new functions have\n> > no header comments, for example. There are comments here and there in\n> > the body of the new functions that are added, and in places where\n> > existing code is changed there are comments here and there, but\n> > overall it's not a whole lot. There's no documentation and no README,\n> > either. Since this adds a new feature and a bunch of new SQL-callable\n> > functions that interact with that feature, the feature itself should\n> > be documented, along with its limitations and the new SQL-callable\n> > functions that interact with it. I think there should be either a\n> > lengthy comment in some suitable file, or maybe various comments in\n> > various files, or else a README file, that clearly sets out the major\n> > design principles behind the patch, and explaining also what that\n> > means in terms of features and limitations. Without that, it's really\n> > hard for anyone to jump into reviewing this code, and it will be hard\n> > for people who have to maintain it in the future to understand it,\n> > either. 
Or for users, for that matter.\n> > \n> > -- \n> > Robert Haas\n> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n> > The Enterprise PostgreSQL Company\n>", "msg_date": "Mon, 23 Nov 2020 17:27:20 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "po 23. 11. 2020 v 10:27 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:\n\n>\n>\n> 2020年11月21日 02:28,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n> Hi\n>\n> pá 11. 9. 2020 v 17:00 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>> I have written the README for the GTT, which contains the GTT\n>> requirements and design.\n>> I found that compared to my first email a year ago, many GTT Limitations\n>> are now gone.\n>> Now, I'm adding comments to some of the necessary functions.\n>>\n>\n> There are problems with patching. Please, can you rebase your patch?\n>\n> Sure.\n> I'm still working on sort code and comments.\n> If you have any suggestions, please let me know.\n>\n\nIt is broken again\n\nThere is bad white space\n\n+ /*\n+ * For global temp table only\n+ * use ShareUpdateExclusiveLock for ensure safety\n+ */\n+ {\n+ {\n+ \"on_commit_delete_rows\",\n+ \"global temp table on commit options\",\n+ RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n+ ShareUpdateExclusiveLock\n+ },\n+ true\n+ }, <=================\n /* list terminator */\n {{NULL}}\n\n+7 OTHERS\n+Parallel query\n+Planner does not produce parallel query plans for SQL related to GTT.\nBecause <=================\n+GTT private data cannot be accessed across processes.\ndiff --git a/src/backend/catalog/Makefile b/src/backend/catalog/Makefile\n\n\n+/*\n+ * Update global temp table relstats(relpage/reltuple/relallvisible)\n<========================\n+ * to local hashtable\n+ */\n+void\n\n+/*\n+ * Search global temp table relstats(relpage/reltuple/relallvisible)\n<==============\n+ * from lo\n\nand 
there are lot of more places ...\n\nI found other issue\n\npostgres=# create global temp table foo(a int);\nCREATE TABLE\npostgres=# create index on foo(a);\nCREATE INDEX\n\nclose session and in new session\n\npostgres=# reindex index foo_a_idx ;\nWARNING: relcache reference leak: relation \"foo\" not closed\nREINDEX\n\nRegards\n\nPavel\n\n\n\n>\n> Wenjing\n>\n>\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Wenjing\n>>\n>>\n>>\n>>\n>>\n>> > 2020年7月31日 上午4:57,Robert Haas <robertmhaas@gmail.com> 写道:\n>> >\n>> > On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng <wjzeng2012@gmail.com>\n>> wrote:\n>> >> Please continue to review the code.\n>> >\n>> > This patch is pretty light on comments. Many of the new functions have\n>> > no header comments, for example. There are comments here and there in\n>> > the body of the new functions that are added, and in places where\n>> > existing code is changed there are comments here and there, but\n>> > overall it's not a whole lot. There's no documentation and no README,\n>> > either. Since this adds a new feature and a bunch of new SQL-callable\n>> > functions that interact with that feature, the feature itself should\n>> > be documented, along with its limitations and the new SQL-callable\n>> > functions that interact with it. I think there should be either a\n>> > lengthy comment in some suitable file, or maybe various comments in\n>> > various files, or else a README file, that clearly sets out the major\n>> > design principles behind the patch, and explaining also what that\n>> > means in terms of features and limitations. Without that, it's really\n>> > hard for anyone to jump into reviewing this code, and it will be hard\n>> > for people who have to maintain it in the future to understand it,\n>> > either. Or for users, for that matter.\n>> >\n>> > --\n>> > Robert Haas\n>> > EnterpriseDB: http://www.enterprisedb.com\n>> > The Enterprise PostgreSQL Company\n>>\n>>\n>", "msg_date": "Wed, 25 Nov 2020 07:19:56 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2020年11月25日 14:19,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> po 23. 11. 2020 v 10:27 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n>> 2020年11月21日 02:28,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> Hi\n>> \n>> pá 11. 9. 2020 v 17:00 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> I have written the README for the GTT, which contains the GTT requirements and design.\n>> I found that compared to my first email a year ago, many GTT Limitations are now gone.\n>> Now, I'm adding comments to some of the necessary functions.\n>> \n>> There are problems with patching. Please, can you rebase your patch?\n> Sure.\n> I'm still working on sort code and comments.\n> If you have any suggestions, please let me know.\n> \n> It is broken again \n> \n> There is bad white space\n> \n> + /*\n> + * For global temp table only\n> + * use ShareUpdateExclusiveLock for ensure safety\n> + */\n> + {\n> + {\n> + \"on_commit_delete_rows\",\n> + \"global temp table on commit options\",\n> + RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n> + ShareUpdateExclusiveLock\n> + },\n> + true\n> + }, <=================\n> /* list terminator */\n> {{NULL}}\n> \n> +7 OTHERS\n> +Parallel query\n> +Planner does not produce parallel query plans for SQL related to GTT. 
Because <=================\n> +GTT private data cannot be accessed across processes.\n> diff --git a/src/backend/catalog/Makefile b/src/backend/catalog/Makefile\n> \n> \n> +/*\n> + * Update global temp table relstats(relpage/reltuple/relallvisible) <========================\n> + * to local hashtable\n> + */\n> +void\n> \n> +/*\n> + * Search global temp table relstats(relpage/reltuple/relallvisible) <==============\n> + * from lo\n> \n> and there are lot of more places ...\n> \n> I found other issue\n> \n> postgres=# create global temp table foo(a int);\n> CREATE TABLE\n> postgres=# create index on foo(a);\n> CREATE INDEX\n> \n> close session and in new session\n> \n> postgres=# reindex index foo_a_idx ;\n> WARNING: relcache reference leak: relation \"foo\" not closed\n> REINDEX\n\nI fixed all the above issues and rebase code.\nPlease review the new version code again.\n\n\nWenjing\n\n\n\n> \n> Regards\n> \n> Pavel\n> \n> \n> \n> \n> Wenjing\n> \n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> \n>> > 2020年7月31日 上午4:57,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n>> > \n>> > On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng <wjzeng2012@gmail.com <mailto:wjzeng2012@gmail.com>> wrote:\n>> >> Please continue to review the code.\n>> > \n>> > This patch is pretty light on comments. Many of the new functions have\n>> > no header comments, for example. There are comments here and there in\n>> > the body of the new functions that are added, and in places where\n>> > existing code is changed there are comments here and there, but\n>> > overall it's not a whole lot. There's no documentation and no README,\n>> > either. Since this adds a new feature and a bunch of new SQL-callable\n>> > functions that interact with that feature, the feature itself should\n>> > be documented, along with its limitations and the new SQL-callable\n>> > functions that interact with it. 
I think there should be either a\n>> > lengthy comment in some suitable file, or maybe various comments in\n>> > various files, or else a README file, that clearly sets out the major\n>> > design principles behind the patch, and explaining also what that\n>> > means in terms of features and limitations. Without that, it's really\n>> > hard for anyone to jump into reviewing this code, and it will be hard\n>> > for people who have to maintain it in the future to understand it,\n>> > either. Or for users, for that matter.\n>> > \n>> > -- \n>> > Robert Haas\n>> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> > The Enterprise PostgreSQL Company\n>> \n>", "msg_date": "Wed, 25 Nov 2020 21:08:04 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "I found that the new Patch mail failed to register to Commitfest\nhttps://commitfest.postgresql.org/28/2349/# <https://commitfest.postgresql.org/28/2349/#>\nI don't know what's wrong and how to check it?\nCould you help me figure it out?\n\n\n\n> 2020年11月25日 14:19,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> po 23. 11. 2020 v 10:27 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n> \n> \n>> 2020年11月21日 02:28,Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> 写道:\n>> \n>> Hi\n>> \n>> pá 11. 9. 2020 v 17:00 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com <mailto:wenjing.zwj@alibaba-inc.com>> napsal:\n>> I have written the README for the GTT, which contains the GTT requirements and design.\n>> I found that compared to my first email a year ago, many GTT Limitations are now gone.\n>> Now, I'm adding comments to some of the necessary functions.\n>> \n>> There are problems with patching. 
Please, can you rebase your patch?\n> Sure.\n> I'm still working on sort code and comments.\n> If you have any suggestions, please let me know.\n> \n> It is broken again \n> \n> There is bad white space\n> \n> + /*\n> + * For global temp table only\n> + * use ShareUpdateExclusiveLock for ensure safety\n> + */\n> + {\n> + {\n> + \"on_commit_delete_rows\",\n> + \"global temp table on commit options\",\n> + RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n> + ShareUpdateExclusiveLock\n> + },\n> + true\n> + }, <=================\n> /* list terminator */\n> {{NULL}}\n> \n> +7 OTHERS\n> +Parallel query\n> +Planner does not produce parallel query plans for SQL related to GTT. Because <=================\n> +GTT private data cannot be accessed across processes.\n> diff --git a/src/backend/catalog/Makefile b/src/backend/catalog/Makefile\n> \n> \n> +/*\n> + * Update global temp table relstats(relpage/reltuple/relallvisible) <========================\n> + * to local hashtable\n> + */\n> +void\n> \n> +/*\n> + * Search global temp table relstats(relpage/reltuple/relallvisible) <==============\n> + * from lo\n> \n> and there are lot of more places ...\n> \n> I found other issue\n> \n> postgres=# create global temp table foo(a int);\n> CREATE TABLE\n> postgres=# create index on foo(a);\n> CREATE INDEX\n> \n> close session and in new session\n> \n> postgres=# reindex index foo_a_idx ;\n> WARNING: relcache reference leak: relation \"foo\" not closed\n> REINDEX\n> \n> Regards\n> \n> Pavel\n> \n> \n> \n> \n> Wenjing\n> \n> \n>> \n>> Regards\n>> \n>> Pavel\n>> \n>> \n>> \n>> Wenjing\n>> \n>> \n>> \n>> \n>> \n>> > 2020年7月31日 上午4:57,Robert Haas <robertmhaas@gmail.com <mailto:robertmhaas@gmail.com>> 写道:\n>> > \n>> > On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng <wjzeng2012@gmail.com <mailto:wjzeng2012@gmail.com>> wrote:\n>> >> Please continue to review the code.\n>> > \n>> > This patch is pretty light on comments. Many of the new functions have\n>> > no header comments, for example. 
There are comments here and there in\n>> > the body of the new functions that are added, and in places where\n>> > existing code is changed there are comments here and there, but\n>> > overall it's not a whole lot. There's no documentation and no README,\n>> > either. Since this adds a new feature and a bunch of new SQL-callable\n>> > functions that interact with that feature, the feature itself should\n>> > be documented, along with its limitations and the new SQL-callable\n>> > functions that interact with it. I think there should be either a\n>> > lengthy comment in some suitable file, or maybe various comments in\n>> > various files, or else a README file, that clearly sets out the major\n>> > design principles behind the patch, and explaining also what that\n>> > means in terms of features and limitations. Without that, it's really\n>> > hard for anyone to jump into reviewing this code, and it will be hard\n>> > for people who have to maintain it in the future to understand it,\n>> > either. Or for users, for that matter.\n>> > \n>> > -- \n>> > Robert Haas\n>> > EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>\n>> > The Enterprise PostgreSQL Company", "msg_date": "Thu, 26 Nov 2020 10:54:46 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Thu, Nov 26, 2020 at 4:05 PM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n>\n> I found that the new Patch mail failed to register to Commitfest\n> https://commitfest.postgresql.org/28/2349/#\n> I don't know what's wrong and how to check it?\n> Could you help me figure it out?\n\nApparently the attachment in\nhttps://www.postgresql.org/message-id/A3F1EBD9-E694-4384-8049-37B09308491B@alibaba-inc.com\nwasn't detected. 
I have no idea why, maybe Magnus will know.\nOtherwise you could try to ask on -www.", "msg_date": "Thu, 26 Nov 2020 18:16:17 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Thu, Nov 26, 2020 at 11:16 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Nov 26, 2020 at 4:05 PM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n> >\n> > I found that the new Patch mail failed to register to Commitfest\n> > https://commitfest.postgresql.org/28/2349/#\n> > I don't know what's wrong and how to check it?\n> > Could you help me figure it out?\n>\n> Apparently the attachment in\n> https://www.postgresql.org/message-id/A3F1EBD9-E694-4384-8049-37B09308491B@alibaba-inc.com\n> wasn't detected. I have no idea why, maybe Magnus will know.\n> Otherwise you could try to ask on -www.\n\nNot offhand. The email appears to have a fairly complex nested mime\nstructure, so something in the python library that parses the MIME\ndecides that it's not there. For some reason the email is 7 parts. 1\nis the signature, the rest seems complexly nested. And the attachment\nseems to be squeezed in between two different HTML parts.\n\nBasically, at the top it's multipart/alternative, which says there are\ntwo choices. One is text/plain, which is what the archives uses. The\nother is a combination of text/html followed by\napplication/octet-stream (the patch) followed by another text/html.\n\nThe archives picks the first alternative, which is text/plain, which\ndoes not contain the attachment. 
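The structure just described can be rebuilt with the stdlib `email` package to see the effect. The part contents below are invented, but the nesting follows the description — a text/plain alternative next to an HTML branch that carries the patch — so any consumer that keeps only the plain-text alternative never sees the attachment:

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# multipart/alternative
#   +- text/plain                     <- the branch the archives render
#   +- multipart/mixed
#        +- text/html
#        +- application/octet-stream  <- the patch, nested under the HTML branch
#        +- text/html
msg = MIMEMultipart("alternative")
msg.attach(MIMEText("patch attached (see HTML version)", "plain"))

html_branch = MIMEMultipart("mixed")
html_branch.attach(MIMEText("<p>patch attached</p>", "html"))
html_branch.attach(MIMEApplication(b"diff --git a/f b/f\n", Name="gtt.patch"))
html_branch.attach(MIMEText("<p>regards</p>", "html"))
msg.attach(html_branch)

def attachments(part):
    # every application/* part reachable from `part`
    return [p for p in part.walk() if p.get_content_maintype() == "application"]

plain = next(p for p in msg.walk() if p.get_content_type() == "text/plain")
print(len(attachments(plain)), len(attachments(msg)))  # the plain branch has 0, the whole tree has 1
```

Re-sending as a plain-text mail with the patch attached at the top level (a single multipart/mixed) would put the attachment on a branch that every consumer walks.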
The attachment only exists in the\nHTML view.\n\nI think the easiest solution is to re-send as plain text email with\nthe attachment, which would then put the attachment on the email\nitself instead of embedded in the HTML, I would guess.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 26 Nov 2020 11:44:21 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi\n\nthis patch is broken now. Please, can you check it?\n\nRegards\n\nPavel\n\n\nst 25. 11. 2020 v 14:08 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com> napsal:\n\n>\n>\n> 2020年11月25日 14:19,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>\n>\n>\n> po 23. 11. 2020 v 10:27 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com>\n> napsal:\n>\n>>\n>>\n>> 2020年11月21日 02:28,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n>>\n>> Hi\n>>\n>> pá 11. 9. 2020 v 17:00 odesílatel 曾文旌 <wenjing.zwj@alibaba-inc.com>\n>> napsal:\n>>\n>>> I have written the README for the GTT, which contains the GTT\n>>> requirements and design.\n>>> I found that compared to my first email a year ago, many GTT Limitations\n>>> are now gone.\n>>> Now, I'm adding comments to some of the necessary functions.\n>>>\n>>\n>> There are problems with patching. 
Please, can you rebase your patch?\n>>\n>> Sure.\n>> I'm still working on sort code and comments.\n>> If you have any suggestions, please let me know.\n>>\n>\n> It is broken again\n>\n> There is bad white space\n>\n> + /*\n> + * For global temp table only\n> + * use ShareUpdateExclusiveLock for ensure safety\n> + */\n> + {\n> + {\n> + \"on_commit_delete_rows\",\n> + \"global temp table on commit options\",\n> + RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n> + ShareUpdateExclusiveLock\n> + },\n> + true\n> + }, <=================\n> /* list terminator */\n> {{NULL}}\n>\n> +7 OTHERS\n> +Parallel query\n> +Planner does not produce parallel query plans for SQL related to GTT.\n> Because <=================\n> +GTT private data cannot be accessed across processes.\n> diff --git a/src/backend/catalog/Makefile b/src/backend/catalog/Makefile\n>\n>\n> +/*\n> + * Update global temp table relstats(relpage/reltuple/relallvisible)\n> <========================\n> + * to local hashtable\n> + */\n> +void\n>\n> +/*\n> + * Search global temp table relstats(relpage/reltuple/relallvisible)\n> <==============\n> + * from lo\n>\n> and there are lot of more places ...\n>\n> I found other issue\n>\n> postgres=# create global temp table foo(a int);\n> CREATE TABLE\n> postgres=# create index on foo(a);\n> CREATE INDEX\n>\n> close session and in new session\n>\n> postgres=# reindex index foo_a_idx ;\n> WARNING: relcache reference leak: relation \"foo\" not closed\n> REINDEX\n>\n>\n> I fixed all the above issues and rebase code.\n> Please review the new version code again.\n>\n>\n> Wenjing\n>\n>\n>\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>>\n>> Wenjing\n>>\n>>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>>\n>>> Wenjing\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> > 2020年7月31日 上午4:57,Robert Haas <robertmhaas@gmail.com> 写道:\n>>> >\n>>> > On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng <wjzeng2012@gmail.com>\n>>> wrote:\n>>> >> Please continue to review the code.\n>>> >\n>>> > This patch is pretty light on comments. 
Many of the new functions have\n>>> > no header comments, for example. There are comments here and there in\n>>> > the body of the new functions that are added, and in places where\n>>> > existing code is changed there are comments here and there, but\n>>> > overall it's not a whole lot. There's no documentation and no README,\n>>> > either. Since this adds a new feature and a bunch of new SQL-callable\n>>> > functions that interact with that feature, the feature itself should\n>>> > be documented, along with its limitations and the new SQL-callable\n>>> > functions that interact with it. I think there should be either a\n>>> > lengthy comment in some suitable file, or maybe various comments in\n>>> > various files, or else a README file, that clearly sets out the major\n>>> > design principles behind the patch, and explaining also what that\n>>> > means in terms of features and limitations. Without that, it's really\n>>> > hard for anyone to jump into reviewing this code, and it will be hard\n>>> > for people who have to maintain it in the future to understand it,\n>>> > either. Or for users, for that matter.\n>>> >\n>>> > --\n>>> > Robert Haas\n>>> > EnterpriseDB: http://www.enterprisedb.com\n>>> > The Enterprise PostgreSQL Company\n>>>\n>>>\n>>\n>\n
", "msg_date": "Tue, 16 Mar 2021 19:05:46 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "ok\n\nThe cause of the problem is that the name of the dependent function\n(readNextTransactionID) has changed. 
I fixed it.\n>\n> This patch(V43) is base on 9fd2952cf4920d563e9cea51634c5b364d57f71a\n>\n> Wenjing\n>\n\nI tested this patch and make check-world fails\n\nmake[2]: Vstupuje se do adresáře\n„/home/pavel/src/postgresql.master/src/test/recovery“\nrm -rf '/home/pavel/src/postgresql.master/src/test/recovery'/tmp_check\n/usr/bin/mkdir -p\n'/home/pavel/src/postgresql.master/src/test/recovery'/tmp_check\ncd . && TESTDIR='/home/pavel/src/postgresql.master/src/test/recovery'\nPATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/bin:$PATH\"\nLD_LIBRARY_PATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/lib\"\n PGPORT='65432'\nPG_REGRESS='/home/pavel/src/postgresql.master/src/test/recovery/../../../src/test/regress/pg_regress'\nREGRESS_SHLIB='/home/pavel/src/postgresql.master/src/test/regress/regress.so'\n/usr/bin/prove -I ../../../src/test/perl/ -I . t/*.pl\nt/001_stream_rep.pl .................. ok\nt/002_archiving.pl ................... ok\nt/003_recovery_targets.pl ............ ok\nt/004_timeline_switch.pl ............. ok\nt/005_replay_delay.pl ................ ok\nt/006_logical_decoding.pl ............ ok\nt/007_sync_rep.pl .................... ok\nt/008_fsm_truncation.pl .............. ok\nt/009_twophase.pl .................... ok\nt/010_logical_decoding_timelines.pl .. ok\nt/011_crash_recovery.pl .............. ok\nt/012_subtransactions.pl ............. ok\nt/013_crash_restart.pl ............... ok\nt/014_unlogged_reinit.pl ............. ok\nt/015_promotion_pages.pl ............. ok\nt/016_min_consistency.pl ............. ok\nt/017_shm.pl ......................... skipped: SysV shared memory not\nsupported by this platform\nt/018_wal_optimize.pl ................ ok\nt/019_replslot_limit.pl .............. ok\nt/020_archive_status.pl .............. ok\nt/021_row_visibility.pl .............. ok\nt/022_crash_temp_files.pl ............ 
1/9\n# Failed test 'one temporary file'\n# at t/022_crash_temp_files.pl line 231.\n# got: '0'\n# expected: '1'\nt/022_crash_temp_files.pl ............ 9/9 # Looks like you failed 1 test\nof 9.\nt/022_crash_temp_files.pl ............ Dubious, test returned 1 (wstat 256,\n0x100)\nFailed 1/9 subtests\nt/023_pitr_prepared_xact.pl .......... ok\n\nTest Summary Report\n-------------------\nt/022_crash_temp_files.pl (Wstat: 256 Tests: 9 Failed: 1)\n Failed test: 8\n Non-zero exit status: 1\nFiles=23, Tests=259, 115 wallclock secs ( 0.21 usr 0.06 sys + 28.57 cusr\n18.01 csys = 46.85 CPU)\nResult: FAIL\nmake[2]: *** [Makefile:19: check] Chyba 1\nmake[2]: Opouští se adresář\n„/home/pavel/src/postgresql.master/src/test/recovery“\nmake[1]: *** [Makefile:49: check-recovery-recurse] Chyba 2\nmake[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test“\nmake: *** [GNUmakefile:71: check-world-src/test-recurse] Chyba 2\n\nRegards\n\nPavel\n
", "msg_date": "Sun, 28 Mar 2021 09:27:11 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi\n\nI wrote simple benchmarks. I checked the possible slowdown of connections\nto postgres when GTT is used.\n\n/usr/local/pgsql/master/bin/pgbench -c 10 -C -f script4.sql -t 1000\n\nscript has one line just with INSERT or SELECT LIMIT 1;\n\nPATCH\ninsert to global temp table (with connect) -- 349 tps (10 clients 443tps)\nselect from gtt (with connects) -- 370 tps (10 clients 446tps)\ninsert to normal table (with connect) - 115 tps (10 clients 417 tps)\nselect from normal table (with connect) -- 358 (10 clients 445 tps)\n\nMASTER\ninsert to temp table (with connect) -- 58 tps (10 clients 352 tps) -- after\ntest pg_attribute bloated to 11MB\ninsert into normal table (with connect) -- 118 tps (10 clients 385)\nselect from normal table (with connect) -- 346 tps (10 clients 449)\n\nThe measurement doesn't show anything interesting - it is not possible to\nsee the impact of usage of GTT on connect time.\n\nIt is interesting to see the overhead of local temp tables against global\ntemp tables - the performance is about 6x worse, and there is a significant\nbloat of the pg_attribute table. 
And the tested table had only one column.\nSo an idea or concept of global temp tables is very good, and\nimplementation looks well (from performance perspective).\n\nI didn't check the code yet, I just tested behaviour and I think it is very\nsatisfiable for the first stage and first release. The patch is long now,\nand for the first step is good to stop in implemented features.\n\nNext steps should be supporting DDL for actively used GTT tables. This\ntopic is pretty complex, there are possible more scenarios. I think so GTT\nbehaviour should be the same like behaviour of normal tables (by default) -\nbut I see an advantage of other possibilities, so I don't want to open the\ndiscussion about this topic now. Current implementation should not block\nany possible implementations in future.\n\nRegards\n\nPavel\n
", "msg_date": "Sun, 28 Mar 2021 10:49:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 3/17/21 7:59 AM, wenjing wrote:\n> ok\n>\n> The cause of the problem is that the name of the dependent function\n> (readNextTransactionID) has changed. 
I fixed it.\n>\n> This patch(V43) is base on 9fd2952cf4920d563e9cea51634c5b364d57f71a\n>\n> Wenjing\n>\n>\n\nI have fixed this patch so that\n\na) it applies cleanly\n\nb) it uses project best practice for catalog Oid assignment.\n\nHowever, as noted elsewhere it fails the recovery TAP test.\n\nI also note this:\n\n\ndiff --git a/src/test/regress/parallel_schedule\nb/src/test/regress/parallel_schedule\nindex 312c11a4bd..d44fa62f4e 100644\n--- a/src/test/regress/parallel_schedule\n+++ b/src/test/regress/parallel_schedule\n@@ -129,3 +129,10 @@ test: fast_default\n \n # run stats by itself because its delay may be insufficient under heavy\nload\n test: stats\n+\n+# global temp table test\n+test: gtt_stats\n+test: gtt_function\n+test: gtt_prepare\n+test: gtt_parallel_1 gtt_parallel_2\n+test: gtt_clean\n\n\nTests that need to run in parallel should use either the isolation\ntester framework (which is explicitly for testing things concurrently)\nor the TAP test framework.\n\nAdding six test files to the regression test suite for this one feature\nis not a good idea. You should have one regression test script ideally,\nand it should be added as appropriate to both the parallel and serial\nschedules (and not at the end). Any further tests should be added using\nthe other frameworks mentioned.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 28 Mar 2021 09:07:19 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "ne 28. 3. 2021 v 15:07 odesílatel Andrew Dunstan <andrew@dunslane.net>\nnapsal:\n\n>\n> On 3/17/21 7:59 AM, wenjing wrote:\n> > ok\n> >\n> > The cause of the problem is that the name of the dependent function\n> > (readNextTransactionID) has changed. 
I fixed it.\n> >\n> > This patch(V43) is base on 9fd2952cf4920d563e9cea51634c5b364d57f71a\n> >\n> > Wenjing\n> >\n> >\n>\n> I have fixed this patch so that\n>\n> a) it applies cleanly\n>\n> b) it uses project best practice for catalog Oid assignment.\n>\n> However, as noted elsewhere it fails the recovery TAP test.\n>\n> I also note this:\n>\n>\n> diff --git a/src/test/regress/parallel_schedule\n> b/src/test/regress/parallel_schedule\n> index 312c11a4bd..d44fa62f4e 100644\n> --- a/src/test/regress/parallel_schedule\n> +++ b/src/test/regress/parallel_schedule\n> @@ -129,3 +129,10 @@ test: fast_default\n>\n> # run stats by itself because its delay may be insufficient under heavy\n> load\n> test: stats\n> +\n> +# global temp table test\n> +test: gtt_stats\n> +test: gtt_function\n> +test: gtt_prepare\n> +test: gtt_parallel_1 gtt_parallel_2\n> +test: gtt_clean\n>\n>\n> Tests that need to run in parallel should use either the isolation\n> tester framework (which is explicitly for testing things concurrently)\n> or the TAP test framework.\n>\n> Adding six test files to the regression test suite for this one feature\n> is not a good idea. You should have one regression test script ideally,\n> and it should be added as appropriate to both the parallel and serial\n> schedules (and not at the end). Any further tests should be added using\n> the other frameworks mentioned.\n>\n>\n* bad name of GTT-README - the convention is README.gtt\n\n* Typo - \"ofa\"\n\n2) Use beforeshmemexit to ensure that all files ofa session GTT are deleted\nwhen\nthe session exits.\n\n* Typo \"nd\"\n\n3) GTT storage file cleanup during abnormal situations\nWhen a backend exits abnormally (such as oom kill), the startup process\nstarts\nrecovery before accepting client connection. 
The same startup process checks\nnd removes all GTT files before redo WAL.\n\n* This comment is wrong\n\n /*\n+ * Global temporary table is allowed to be dropped only when the\n+ * current session is using it.\n+ */\n+ if (RELATION_IS_GLOBAL_TEMP(rel))\n+ {\n+ if (is_other_backend_use_gtt(RelationGetRelid(rel)))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST),\n+ errmsg(\"cannot drop global temporary table %s when other backend attached\nit.\",\n+ RelationGetRelationName(rel))));\n+ }\n\n* same wrong comment\n\n /*\n+ * Global temporary table is allowed to be dropped only when the\n+ * current session is using it.\n+ */\n+ if (RELATION_IS_GLOBAL_TEMP(rel))\n+ {\n+ if (is_other_backend_use_gtt(RelationGetRelid(rel)))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST),\n+ errmsg(\"cannot drop global temporary table %s when other backend attached\nit.\",\n+ RelationGetRelationName(rel))));\n+ }\n\n* typo \"backand\"\n\n+/*\n+ * Check if there are other backends using this GTT besides the current\nbackand.\n+ */\n\nThere is not user's documentation\n\nRegards\n\nPavel\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n>\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n
", "msg_date": "Mon, 29 Mar 2021 10:37:17 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2021年3月28日 15:27,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> Hi\n> \n> st 17. 3. 2021 v 12:59 odesílatel wenjing <wjzeng2012@gmail.com <mailto:wjzeng2012@gmail.com>> napsal:\n> ok\n> \n> The cause of the problem is that the name of the dependent function (readNextTransactionID) has changed. 
I fixed it.\n> \n> This patch(V43) is base on 9fd2952cf4920d563e9cea51634c5b364d57f71a\n> \n> Wenjing\n> \n> I tested this patch and make check-world fails\n> \n> make[2]: Vstupuje se do adresáře „/home/pavel/src/postgresql.master/src/test/recovery“\n> rm -rf '/home/pavel/src/postgresql.master/src/test/recovery'/tmp_check\n> /usr/bin/mkdir -p '/home/pavel/src/postgresql.master/src/test/recovery'/tmp_check\n> cd . && TESTDIR='/home/pavel/src/postgresql.master/src/test/recovery' PATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/bin:$PATH\" LD_LIBRARY_PATH=\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/lib\" PGPORT='65432' PG_REGRESS='/home/pavel/src/postgresql.master/src/test/recovery/../../../src/test/regress/pg_regress' REGRESS_SHLIB='/home/pavel/src/postgresql.master/src/test/regress/regress.so' /usr/bin/prove -I ../../../src/test/perl/ -I . t/*.pl\n> t/001_stream_rep.pl <http://001_stream_rep.pl/> .................. ok \n> t/002_archiving.pl <http://002_archiving.pl/> ................... ok \n> t/003_recovery_targets.pl <http://003_recovery_targets.pl/> ............ ok \n> t/004_timeline_switch.pl <http://004_timeline_switch.pl/> ............. ok \n> t/005_replay_delay.pl <http://005_replay_delay.pl/> ................ ok \n> t/006_logical_decoding.pl <http://006_logical_decoding.pl/> ............ ok \n> t/007_sync_rep.pl <http://007_sync_rep.pl/> .................... ok \n> t/008_fsm_truncation.pl <http://008_fsm_truncation.pl/> .............. ok \n> t/009_twophase.pl <http://009_twophase.pl/> .................... ok \n> t/010_logical_decoding_timelines.pl <http://010_logical_decoding_timelines.pl/> .. ok \n> t/011_crash_recovery.pl <http://011_crash_recovery.pl/> .............. ok \n> t/012_subtransactions.pl <http://012_subtransactions.pl/> ............. ok \n> t/013_crash_restart.pl <http://013_crash_restart.pl/> ............... ok \n> t/014_unlogged_reinit.pl <http://014_unlogged_reinit.pl/> ............. 
ok \n> t/015_promotion_pages.pl <http://015_promotion_pages.pl/> ............. ok \n> t/016_min_consistency.pl <http://016_min_consistency.pl/> ............. ok \n> t/017_shm.pl <http://017_shm.pl/> ......................... skipped: SysV shared memory not supported by this platform\n> t/018_wal_optimize.pl <http://018_wal_optimize.pl/> ................ ok \n> t/019_replslot_limit.pl <http://019_replslot_limit.pl/> .............. ok \n> t/020_archive_status.pl <http://020_archive_status.pl/> .............. ok \n> t/021_row_visibility.pl <http://021_row_visibility.pl/> .............. ok \n> t/022_crash_temp_files.pl <http://022_crash_temp_files.pl/> ............ 1/9 \n> # Failed test 'one temporary file'\n> # at t/022_crash_temp_files.pl <http://022_crash_temp_files.pl/> line 231.\n> # got: '0'\n> # expected: '1'\n> t/022_crash_temp_files.pl <http://022_crash_temp_files.pl/> ............ 9/9 # Looks like you failed 1 test of 9.\n> t/022_crash_temp_files.pl <http://022_crash_temp_files.pl/> ............ Dubious, test returned 1 (wstat 256, 0x100)\n> Failed 1/9 subtests \n> t/023_pitr_prepared_xact.pl <http://023_pitr_prepared_xact.pl/> .......... 
ok \n> \n> Test Summary Report\n> -------------------\n> t/022_crash_temp_files.pl <http://022_crash_temp_files.pl/> (Wstat: 256 Tests: 9 Failed: 1)\n> Failed test: 8\n> Non-zero exit status: 1\n> Files=23, Tests=259, 115 wallclock secs ( 0.21 usr 0.06 sys + 28.57 cusr 18.01 csys = 46.85 CPU)\n> Result: FAIL\n> make[2]: *** [Makefile:19: check] Chyba 1\n> make[2]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test/recovery“\n> make[1]: *** [Makefile:49: check-recovery-recurse] Chyba 2\n> make[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test“\n> make: *** [GNUmakefile:71: check-world-src/test-recurse] Chyba 2\n\nThis is because part of the logic of GTT is duplicated with the new commid cd91de0d17952b5763466cfa663e98318f26d357\nthat is commit by Tomas Vondra merge 11 days ago: \"Remove Temporary Files after Backend Crash”.\nThe \"Remove Temporary Files after Backend Crash” is exactly what GTT needs, or even better.\nTherefore, I chose to delete the temporary file cleanup logic in the GTT path.\n\nLet me update a new version.\n\n\nWenjing\n\n> \n> Regards\n> \n> Pavel", "msg_date": "Mon, 29 Mar 2021 17:55:05 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2021年3月28日 21:07,Andrew Dunstan <andrew@dunslane.net> 写道:\n> \n> \n> On 3/17/21 7:59 AM, wenjing wrote:\n>> ok\n>> \n>> The cause of the problem is that the name of the dependent function\n>> (readNextTransactionID) has changed. 
I fixed it.\n>> \n>> This patch(V43) is base on 9fd2952cf4920d563e9cea51634c5b364d57f71a\n>> \n>> Wenjing\n>> \n>> \n> \n> I have fixed this patch so that\n> \n> a) it applies cleanly\n> \n> b) it uses project best practice for catalog Oid assignment.\n> \n> However, as noted elsewhere it fails the recovery TAP test.\n> \n> I also note this:\n> \n> \n> diff --git a/src/test/regress/parallel_schedule\n> b/src/test/regress/parallel_schedule\n> index 312c11a4bd..d44fa62f4e 100644\n> --- a/src/test/regress/parallel_schedule\n> +++ b/src/test/regress/parallel_schedule\n> @@ -129,3 +129,10 @@ test: fast_default\n> \n> # run stats by itself because its delay may be insufficient under heavy\n> load\n> test: stats\n> +\n> +# global temp table test\n> +test: gtt_stats\n> +test: gtt_function\n> +test: gtt_prepare\n> +test: gtt_parallel_1 gtt_parallel_2\n> +test: gtt_clean\n> \n> \n> Tests that need to run in parallel should use either the isolation\n> tester framework (which is explicitly for testing things concurrently)\n> or the TAP test framework.\n> \n> Adding six test files to the regression test suite for this one feature\n> is not a good idea. You should have one regression test script ideally,\n> and it should be added as appropriate to both the parallel and serial\n> schedules (and not at the end). Any further tests should be added using\n> the other frameworks mentioned.\nYou're right, it doesn't look good.\nI'll organize them and put them in place.\n\n\nWenjing.\n\n> \n> \n> cheers\n> \n> \n> andrew\n> \n> \n> -- \n> \n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n> \n> <global_temporary_table_v44-pg14.patch.gz>", "msg_date": "Mon, 29 Mar 2021 19:27:39 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2021年3月29日 16:37,Pavel Stehule <pavel.stehule@gmail.com> 写道:\n> \n> \n> \n> ne 28. 3. 
2021 v 15:07 odesílatel Andrew Dunstan <andrew@dunslane.net <mailto:andrew@dunslane.net>> napsal:\n> \n> On 3/17/21 7:59 AM, wenjing wrote:\n> > ok\n> >\n> > The cause of the problem is that the name of the dependent function\n> > (readNextTransactionID) has changed. I fixed it.\n> >\n> > This patch(V43) is base on 9fd2952cf4920d563e9cea51634c5b364d57f71a\n> >\n> > Wenjing\n> >\n> >\n> \n> I have fixed this patch so that\n> \n> a) it applies cleanly\n> \n> b) it uses project best practice for catalog Oid assignment.\n> \n> However, as noted elsewhere it fails the recovery TAP test.\n> \n> I also note this:\n> \n> \n> diff --git a/src/test/regress/parallel_schedule\n> b/src/test/regress/parallel_schedule\n> index 312c11a4bd..d44fa62f4e 100644\n> --- a/src/test/regress/parallel_schedule\n> +++ b/src/test/regress/parallel_schedule\n> @@ -129,3 +129,10 @@ test: fast_default\n> \n> # run stats by itself because its delay may be insufficient under heavy\n> load\n> test: stats\n> +\n> +# global temp table test\n> +test: gtt_stats\n> +test: gtt_function\n> +test: gtt_prepare\n> +test: gtt_parallel_1 gtt_parallel_2\n> +test: gtt_clean\n> \n> \n> Tests that need to run in parallel should use either the isolation\n> tester framework (which is explicitly for testing things concurrently)\n> or the TAP test framework.\n> \n> Adding six test files to the regression test suite for this one feature\n> is not a good idea. You should have one regression test script ideally,\n> and it should be added as appropriate to both the parallel and serial\n> schedules (and not at the end). Any further tests should be added using\n> the other frameworks mentioned.\n> \n> \n> * bad name of GTT-README - the convention is README.gtt\n> \n> * Typo - \"ofa\" \n> \n> 2) Use beforeshmemexit to ensure that all files ofa session GTT are deleted when\n> the session exits. 
\n> \n> * Typo \"nd\" \n> \n> 3) GTT storage file cleanup during abnormal situations\n> When a backend exits abnormally (such as oom kill), the startup process starts\n> recovery before accepting client connection. The same startup process checks\n> nd removes all GTT files before redo WAL.\n> \n> * This comment is wrong\n> \n> /*\n> + * Global temporary table is allowed to be dropped only when the\n> + * current session is using it.\n> + */\n> + if (RELATION_IS_GLOBAL_TEMP(rel))\n> + {\n> + if (is_other_backend_use_gtt(RelationGetRelid(rel)))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST),\n> + errmsg(\"cannot drop global temporary table %s when other backend attached it.\",\n> + RelationGetRelationName(rel))));\n> + }\n> \n> * same wrong comment\n> \n> /*\n> + * Global temporary table is allowed to be dropped only when the\n> + * current session is using it.\n> + */\n> + if (RELATION_IS_GLOBAL_TEMP(rel))\n> + {\n> + if (is_other_backend_use_gtt(RelationGetRelid(rel)))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST),\n> + errmsg(\"cannot drop global temporary table %s when other backend attached it.\",\n> + RelationGetRelationName(rel))));\n> + }\n> \n> * typo \"backand\"\n> \n> +/*\n> + * Check if there are other backends using this GTT besides the current backand.\n> + */\n> \n> There is not user's documentation\nThis is necessary, and I will make a separate document patch.\n\n\nWenjing.\n\n\n> \n> Regards\n> \n> Pavel\n> \n> \n> \n> cheers\n> \n> \n> andrew\n> \n> \n> -- \n> \n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com <https://www.enterprisedb.com/>\n>", "msg_date": "Mon, 29 Mar 2021 19:34:51 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "HI all\n\nI fixed the document description error and the regression test bug\nmentioned by Pavel.\nThis patch(V45) is base on 
30aaab26e52144097a1a5bbb0bb66ea1ebc0cb81\nPlease give me feedback.\n\n\nWenjing", "msg_date": "Mon, 29 Mar 2021 19:44:54 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "po 29. 3. 2021 v 13:45 odesílatel wenjing <wjzeng2012@gmail.com> napsal:\n\n> HI all\n>\n> I fixed the document description error and the regression test bug\n> mentioned by Pavel.\n> This patch(V45) is base on 30aaab26e52144097a1a5bbb0bb66ea1ebc0cb81\n> Please give me feedback.\n>\n\nYes, it is working.\n\nSo please, can you write some user documentation?\n\n\n\n>\n> Wenjing\n>\n>\n>\n\npo 29. 3. 2021 v 13:45 odesílatel wenjing <wjzeng2012@gmail.com> napsal:HI allI fixed the document description error and the regression test bug mentioned by Pavel.This patch(V45) is base on 30aaab26e52144097a1a5bbb0bb66ea1ebc0cb81Please give me feedback.Yes, it is working.So please, can you write some user documentation?Wenjing", "msg_date": "Tue, 30 Mar 2021 08:08:16 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "HI Pavel\n\nI added user documentation.\nPlease give me feedback.\n\n\nWenjing", "msg_date": "Thu, 15 Apr 2021 15:26:28 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "wenjing <wjzeng2012@gmail.com> 于2021年4月15日周四 下午3:26写道:\n\n> HI Pavel\n>\n> I added user documentation.\n> Please give me feedback.\n>\n>\n> Wenjing\n>\n>\nHi, Wenjing,\n\nI have checked your documentation section and fixed a spelling mistake,\nadjusted some sentences for you.\nAll the modified content is in the new patch, and please check it.\n\nRegards\n\nShawn", "msg_date": "Thu, 15 Apr 2021 16:48:53 +0800", "msg_from": "shawn wang <shawn.wang.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] 
Global temporary tables" }, { "msg_contents": "shawn wang <shawn.wang.pg@gmail.com> 于2021年4月15日周四 下午4:49写道:\n\n> wenjing <wjzeng2012@gmail.com> 于2021年4月15日周四 下午3:26写道:\n>\n>> HI Pavel\n>>\n>> I added user documentation.\n>> Please give me feedback.\n>>\n>>\n>> Wenjing\n>>\n>>\n> Hi, Wenjing,\n>\n> I have checked your documentation section and fixed a spelling mistake,\n> adjusted some sentences for you.\n> All the modified content is in the new patch, and please check it.\n>\nThank you for your comments.\nI made some repairs and fixed a bug.\nLooking forward to your feedback.\n\n\nWenjing\n\n>\n> Regards\n>\n> Shawn\n>\n>", "msg_date": "Thu, 22 Apr 2021 15:41:22 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Thu, Apr 22, 2021 at 1:11 PM wenjing <wjzeng2012@gmail.com> wrote:\n>\n\nI have briefly looked into the design comments added by the patch. I\nhave a few questions.\n\n+Feature description\n+--------------------------------\n+\n+Previously, temporary tables are defined once and automatically\n+created (starting with empty contents) in every session before using them.\n\n\nI don’t think this statement is correct, I mean if we define a temp\ntable in one session then it doesn’t automatically create in all the\nsessions.\n\n\n+\n+Like local temporary table, Global Temporary Table supports ON COMMIT\nPRESERVE ROWS\n+or ON COMMIT DELETE ROWS clause, so that data in the temporary table can be\n+cleaned up or reserved automatically when a session exits or a\ntransaction COMMITs.\n\n/reserved/preserved\n\n\nI was trying to look into the “Main design idea” section.\n\n+1) CATALOG\n+GTTs store session-specific data. The storage information of GTTs'data, their\n+transaction information, and their statistics are not stored in the catalog.\n\nI did not understand what do you mean by “transaction information” is\nnot stored in the catalog? 
Mean what transaction information are\nstored in catalog in the normal table which is not stored for GTT?\n\n+Changes to the GTT's metadata affect all sessions.\n+The operations making those changes include truncate GTT, Vacuum/Cluster GTT,\n+and Lock GTT.\n\nHow does Truncate or Vacuum affect all the sessions, I mean truncate\nshould only truncate the data of the current session and the same is\ntrue for the vacuum no?\n\nI will try to do a more detailed review.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 May 2021 16:13:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Dilip Kumar <dilipbalaut@gmail.com> 于2021年5月10日周一 下午6:44写道:\n\n> On Thu, Apr 22, 2021 at 1:11 PM wenjing <wjzeng2012@gmail.com> wrote:\n> >\n>\n> I have briefly looked into the design comments added by the patch. I\n> have a few questions.\n\n\n> +Feature description\n> +--------------------------------\n> +\n> +Previously, temporary tables are defined once and automatically\n> +created (starting with empty contents) in every session before using them.\n>\n>\n> I don’t think this statement is correct, I mean if we define a temp\n> table in one session then it doesn’t automatically create in all the\n> sessions.\n>\nThe point is the schema definition of GTT which is shared between sessions.\nWhen a session creates a GTT, once the transaction for the Create Table is\ncommitted, other sessions can see the GTT and can use it.\nso I modified the description as follows:\nautomatically exist in every session that needs them.\n\nWhat do you think?\n\n>\n>\n> +\n> +Like local temporary table, Global Temporary Table supports ON COMMIT\n> PRESERVE ROWS\n> +or ON COMMIT DELETE ROWS clause, so that data in the temporary table can\n> be\n> +cleaned up or reserved automatically when a session exits or a\n> transaction COMMITs.\n>\n> 
/reserved/preserved\n>\n> OK, I fixed it.\n\n\n>\n> I was trying to look into the “Main design idea” section.\n>\n> +1) CATALOG\n> +GTTs store session-specific data. The storage information of GTTs'data,\n> their\n> +transaction information, and their statistics are not stored in the\n> catalog.\n>\n> I did not understand what do you mean by “transaction information” is\n> not stored in the catalog? Mean what transaction information are\n> stored in catalog in the normal table which is not stored for GTT?\n>\n\"Transaction Information\" refers to the GTT's relfrozenXID,\nThe relfrozenxid of a normal table is stored in pg_class, but GTT is not.\n\nEach row of the data (the tuple header) contains transaction information\n(such as xmin xmax).\nAt the same time, for regular table we record the oldest XID (as\nrelfrozenXID) in each piece of data into the pg_class, which is used to\nclean up the data and clog and reuse transactional resources.\nMy design is:\nEach session in GTT has a local copy of data (session level relfrozenXID),\nwhich is stored in memory (local hashtable). and vacuum will refer to this\ninformation.\n\n\n> +Changes to the GTT's metadata affect all sessions.\n> +The operations making those changes include truncate GTT, Vacuum/Cluster\n> GTT,\n> +and Lock GTT.\n>\n> How does Truncate or Vacuum affect all the sessions, I mean truncate\n> should only truncate the data of the current session and the same is\n> true for the vacuum no?\n\nYour understanding is correct.\nTruncate GTT, VACUUM/CLUuster GTT, and Lock GTT affect current session and\nwithout causing exclusive locking.\n\"Changes to the GTT's metadata affect All Sessions. \"is not used to\ndescribe the lock behavior. I deleted it.\n\n\n> I will try to do a more detailed review.\n>\nThank you very much for your careful review. 
We are closer to success.\n\n\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n\nI updated the code and passed the regression tests.\n\nRegards,\nwjzeng", "msg_date": "Wed, 12 May 2021 20:39:07 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Rebase code based on the latest version.\n\nRegards,\nwenjing", "msg_date": "Fri, 4 Jun 2021 18:01:27 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing,\n\nSome suggestions may help:\n\n1) It seems that no test case covers the below scenario: 2 sessions attach\nthe same gtt, and insert/update/select concurrently. It is better to use\nthe test framework in src/test/isolation like the code changes in\nhttps://commitfest.postgresql.org/24/2233/.\n\n2) CREATE GLOBAL TEMP SEQUENCE also need to be supported\nin src/bin/psql/tab-complete.c\n\n\nOn Wed, Jul 14, 2021 at 10:36 AM wenjing <wjzeng2012@gmail.com> wrote:\n\n> Rebase code based on the latest version.\n>\n> Regards,\n> wenjing\n>\n>\n\nHi Wenjing,Some suggestions may help:1) It seems that no test case covers the below scenario: 2 sessions attach the same gtt, and insert/update/select concurrently. 
It is better to use the test framework in src/test/isolation like the code changes in https://commitfest.postgresql.org/24/2233/.2) CREATE GLOBAL TEMP SEQUENCE also need to be supported in src/bin/psql/tab-complete.cOn Wed, Jul 14, 2021 at 10:36 AM wenjing <wjzeng2012@gmail.com> wrote:Rebase code based on the latest version.Regards,wenjing", "msg_date": "Wed, 14 Jul 2021 10:56:00 +0800", "msg_from": "Ming Li <mli@apache.org>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Ming Li <mli@apache.org> 于2021年7月14日周三 上午10:56写道:\n\n> Hi Wenjing,\n>\n> Some suggestions may help:\n>\n> 1) It seems that no test case covers the below scenario: 2 sessions attach\n> the same gtt, and insert/update/select concurrently. It is better to use\n> the test framework in src/test/isolation like the code changes in\n> https://commitfest.postgresql.org/24/2233/.\n>\n\nThanks for pointing this out, I am working on this issue.\n\n\n>\n> 2) CREATE GLOBAL TEMP SEQUENCE also need to be supported\n> in src/bin/psql/tab-complete.c\n>\nIt has been fixed in V51, please check\n\nRegards,\nwenjing\n\n>\n>\n> On Wed, Jul 14, 2021 at 10:36 AM wenjing <wjzeng2012@gmail.com> wrote:\n>\n>> Rebase code based on the latest version.\n>>\n>> Regards,\n>> wenjing\n>>\n>>", "msg_date": "Thu, 22 Jul 2021 17:40:49 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing\r\n\r\nwould you please rebase the code?\r\n\r\nThank you very much\r\nTony\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Wed, 28 Jul 2021 15:09:07 +0000", "msg_from": "Tony Zhu <tony.zhu@ww-it.cn>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2021年7月28日 23:09,Tony Zhu <tony.zhu@ww-it.cn> 写道:\n> \n> Hi Wenjing\n> \n> would you please rebase the code?\nThank you for your attention.\nAccording to the test, 
the latest pgmaster code can merge the latest patch and pass the test.\nhttps://www.travis-ci.com/github/wjzeng/postgres/builds <https://www.travis-ci.com/github/wjzeng/postgres/builds>\nIf you have any questions, please give me feedback.\n\n\nWenjing\n\n\n> \n> Thank you very much\n> Tony\n> \n> The new status of this patch is: Waiting on Author\n\n\n2021年7月28日 23:09,Tony Zhu <tony.zhu@ww-it.cn> 写道:Hi Wenjingwould you please rebase the code?Thank you for your attention.According to the test, the latest pgmaster code can merge the latest patch and pass the test.https://www.travis-ci.com/github/wjzeng/postgres/buildsIf you have any questions, please give me feedback.WenjingThank you very muchTonyThe new status of this patch is: Waiting on Author", "msg_date": "Thu, 29 Jul 2021 23:19:23 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi WenJing\n\n\nThanks for the feedback,\n\nI have tested the code, it seems okay, and regression tests got pass\n\nand I have reviewed the code, and I don't find any issue anymore\n\n\nHello all\n\n\nReview and comments for the patches V51 is welcome.\n\n\nif there is no feedback, I'm going to changed the status to 'Ready for\nCommitter' on Aug 25\n\n\nbig thanks\n\nTony\n\n\n\nOn 2021/7/29 23:19, wenjing zeng wrote:\n>\n>> 2021年7月28日 23:09,Tony Zhu <tony.zhu@ww-it.cn> 写道:\n>>\n>> Hi Wenjing\n>>\n>> would you please rebase the code?\n> Thank you for your attention.\n> According to the test, the latest pgmaster code can merge the latest patch and pass the test.\n> https://www.travis-ci.com/github/wjzeng/postgres/builds <https://www.travis-ci.com/github/wjzeng/postgres/builds>\n> If you have any questions, please give me feedback.\n>\n>\n> Wenjing\n>\n>\n>> Thank you very much\n>> Tony\n>>\n>> The new status of this patch is: Waiting on Author\n>", "msg_date": "Thu, 5 Aug 2021 15:58:18 +0800", "msg_from": "ZHU XIAN WEN 
<tony.zhu@ww-it.cn>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi\n\nlooks so this patch is broken again. Please, can you do rebase?\n\nRegards\n\nPavel\n\nčt 16. 9. 2021 v 8:28 odesílatel wenjing <wjzeng2012@gmail.com> napsal:\n\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\nHilooks so this patch is broken again. Please, can you do rebase?RegardsPavelčt 16. 9. 2021 v 8:28 odesílatel wenjing <wjzeng2012@gmail.com> napsal:", "msg_date": "Thu, 16 Sep 2021 08:29:43 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> 于2021年9月16日周四 下午2:30写道:\n\n> Hi\n>\n> looks so this patch is broken again. Please, can you do rebase?\n>\nGTT update to V52 and merge with the latest code.\n\nWenjing\n\n>\n> Regards\n>\n> Pavel\n>\n> čt 16. 9. 2021 v 8:28 odesílatel wenjing <wjzeng2012@gmail.com> napsal:\n>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>", "msg_date": "Thu, 23 Sep 2021 11:48:56 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "2021年7月14日 10:56,Ming Li <mli@apache.org> 写道:\n\nHi Wenjing,\n\nSome suggestions may help:\n\n1) It seems that no test case covers the below scenario: 2 sessions attach\nthe same gtt, and insert/update/select concurrently. 
It is better to use\nthe test framework in src/test/isolation like the code changes in\nhttps://commitfest.postgresql.org/24/2233/.\n\n\nI rewrote the case under regress to make it easier to read.\nand I used the Isolation module to add some concurrent cases and fix some\nbugs.\n\nPlease check code(v52) and give me feedback.\n\n\nWenjing\n\n\n2) CREATE GLOBAL TEMP SEQUENCE also need to be supported\nin src/bin/psql/tab-complete.c\n\n\nOn Wed, Jul 14, 2021 at 10:36 AM wenjing <wjzeng2012@gmail.com> wrote:\n\n> Rebase code based on the latest version.\n>\n> Regards,\n> wenjing\n>\n>\n\n2021年7月14日 10:56,Ming Li <mli@apache.org> 写道:Hi Wenjing,Some suggestions may help:1) It seems that no test case covers the below scenario: 2 sessions attach the same gtt, and insert/update/select concurrently. It is better to use the test framework in src/test/isolation like the code changes in https://commitfest.postgresql.org/24/2233/.I rewrote the case under regress to make it easier to read.and I used the Isolation module to add some concurrent cases and fix some bugs.Please check code(v52) and give me feedback.Wenjing2) CREATE GLOBAL TEMP SEQUENCE also need to be supported in src/bin/psql/tab-complete.cOn Wed, Jul 14, 2021 at 10:36 AM wenjing <wjzeng2012@gmail.com> wrote:Rebase code based on the latest version.Regards,wenjing", "msg_date": "Thu, 23 Sep 2021 12:03:25 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> 于2021年3月28日周日 下午9:07写道:\n\n>\n> On 3/17/21 7:59 AM, wenjing wrote:\n> > ok\n> >\n> > The cause of the problem is that the name of the dependent function\n> > (readNextTransactionID) has changed. 
I fixed it.\n> >\n> > This patch(V43) is base on 9fd2952cf4920d563e9cea51634c5b364d57f71a\n> >\n> > Wenjing\n> >\n> >\n>\n> I have fixed this patch so that\n>\n> a) it applies cleanly\n>\n> b) it uses project best practice for catalog Oid assignment.\n>\n> However, as noted elsewhere it fails the recovery TAP test.\n>\n> I also note this:\n>\n>\n> diff --git a/src/test/regress/parallel_schedule\n> b/src/test/regress/parallel_schedule\n> index 312c11a4bd..d44fa62f4e 100644\n> --- a/src/test/regress/parallel_schedule\n> +++ b/src/test/regress/parallel_schedule\n> @@ -129,3 +129,10 @@ test: fast_default\n>\n> # run stats by itself because its delay may be insufficient under heavy\n> load\n> test: stats\n> +\n> +# global temp table test\n> +test: gtt_stats\n> +test: gtt_function\n> +test: gtt_prepare\n> +test: gtt_parallel_1 gtt_parallel_2\n> +test: gtt_clean\n>\n>\n> Tests that need to run in parallel should use either the isolation\n> tester framework (which is explicitly for testing things concurrently)\n> or the TAP test framework.\n>\n> Adding six test files to the regression test suite for this one feature\n> is not a good idea. You should have one regression test script ideally,\n> and it should be added as appropriate to both the parallel and serial\n> schedules (and not at the end). 
Any further tests should be added using\n> the other frameworks mentioned.\n>\nThank you for your advice.\nI have simplified the case in regress and put further tests into the\nIsolation Tester Framework based on your suggestion.\nAnd I found a few bugs and fixed them.\n\nPlease review the GTT v52 and give me feedback.\nhttps://commitfest.postgresql.org/31/2349/\n\n\nWenjing\n\n\n\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n>\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\nAndrew Dunstan <andrew@dunslane.net> 于2021年3月28日周日 下午9:07写道:\nOn 3/17/21 7:59 AM, wenjing wrote:\n> ok\n>\n> The cause of the problem is that the name of the dependent function\n> (readNextTransactionID) has changed. I fixed it.\n>\n> This patch(V43) is base on 9fd2952cf4920d563e9cea51634c5b364d57f71a\n>\n> Wenjing\n>\n>\n\nI have fixed this patch so that\n\na) it applies cleanly\n\nb) it uses project best practice for catalog Oid assignment.\n\nHowever, as noted elsewhere it fails the recovery TAP test.\n\nI also note this:\n\n\ndiff --git a/src/test/regress/parallel_schedule\nb/src/test/regress/parallel_schedule\nindex 312c11a4bd..d44fa62f4e 100644\n--- a/src/test/regress/parallel_schedule\n+++ b/src/test/regress/parallel_schedule\n@@ -129,3 +129,10 @@ test: fast_default\n \n # run stats by itself because its delay may be insufficient under heavy\nload\n test: stats\n+\n+# global temp table test\n+test: gtt_stats\n+test: gtt_function\n+test: gtt_prepare\n+test: gtt_parallel_1 gtt_parallel_2\n+test: gtt_clean\n\n\nTests that need to run in parallel should use either the isolation\ntester framework (which is explicitly for testing things concurrently)\nor the TAP test framework.\n\nAdding six test files to the regression test suite for this one feature\nis not a good idea. You should have one regression test script ideally,\nand it should be added as appropriate to both the parallel and serial\nschedules (and not at the end). 
Any further tests should be added using\nthe other frameworks mentioned.Thank you for your advice.I have simplified the case in regress and put further tests into the Isolation Tester Framework based on your suggestion.And I found a few bugs and fixed them.Please review the GTT v52 and give me feedback.https://commitfest.postgresql.org/31/2349/Wenjing \n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 23 Sep 2021 15:22:34 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi Wenjing\r\n\r\nwe have reviewed the code, and done the regression tests, all tests is pass, we believe the feature code quality is ready for production ; and I will change the status to \"Ready for commit\"", "msg_date": "Thu, 23 Sep 2021 13:55:30 +0000", "msg_from": "Tony Zhu <tony.zhu@ww-it.cn>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "2021年9月23日 21:55,Tony Zhu <tony.zhu@ww-it.cn> 写道:\n\nHi Wenjing\n\nwe have reviewed the code, and done the regression tests, all tests is\npass, we believe the feature code quality is ready for production ; and I\nwill change the status to \"Ready for commit”\n\nThank you very much for your attention and testing.\nAs we communicated, I fixed several issues and attached the latest patch.\n\n\nWenjing", "msg_date": "Sun, 26 Sep 2021 12:04:53 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "hi\n\nne 26. 9. 
2021 v 6:05 odesílatel wenjing <wjzeng2012@gmail.com> napsal:\n\n>\n>\n> 2021年9月23日 21:55,Tony Zhu <tony.zhu@ww-it.cn> 写道:\n>\n> Hi Wenjing\n>\n> we have reviewed the code, and done the regression tests, all tests is\n> pass, we believe the feature code quality is ready for production ; and I\n> will change the status to \"Ready for commit”\n>\n> Thank you very much for your attention and testing.\n> As we communicated, I fixed several issues and attached the latest patch.\n>\n\nlooks so windows build is broken\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.148542\n\nRegards\n\nPavel\n\n>\n>\n> Wenjing\n>\n>\n>\n\nhine 26. 9. 2021 v 6:05 odesílatel wenjing <wjzeng2012@gmail.com> napsal:2021年9月23日 21:55,Tony Zhu <tony.zhu@ww-it.cn> 写道:Hi Wenjingwe have reviewed the code, and done the regression tests,  all tests is pass,  we believe the feature code quality is ready for production ; and I will change the status to \"Ready for commit”Thank you very much for your attention and testing.As we communicated, I fixed several issues and attached the latest patch.looks so windows build is broken https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.148542RegardsPavelWenjing", "msg_date": "Wed, 29 Sep 2021 07:53:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> 于2021年9月29日周三 下午1:53写道:\n\n> hi\n>\n> ne 26. 9. 
2021 v 6:05 odesílatel wenjing <wjzeng2012@gmail.com> napsal:\n>\n>>\n>>\n>> 2021年9月23日 21:55,Tony Zhu <tony.zhu@ww-it.cn> 写道:\n>>\n>> Hi Wenjing\n>>\n>> we have reviewed the code, and done the regression tests, all tests is\n>> pass, we believe the feature code quality is ready for production ; and I\n>> will change the status to \"Ready for commit”\n>>\n>> Thank you very much for your attention and testing.\n>> As we communicated, I fixed several issues and attached the latest patch.\n>>\n>\n> looks so windows build is broken\n>\n>\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.148542\n>\nThis is indeed a problem and it has been fixed in the new version(v54).\nThank you for pointing it out, please review the code again.\n\n\nWenjing\n\n>\n> Regards\n>\n> Pavel\n>\n>>\n>>\n>> Wenjing\n>>\n>>\n>>", "msg_date": "Sun, 3 Oct 2021 21:15:03 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On master with the v54 patches applied the following script leads to crash:\nexport\nASAN_OPTIONS=detect_leaks=0:abort_on_error=1:disable_coredump=0:strict_string_checks=1:check_initialization_order=1:strict_init_order=1\ninitdb -D data\npg_ctl -w -t 5 -D data -l server.log start\npsql -c \"create global temp table tmp_table_test_statistics(a int); insert\ninto temp_table_test_statistics values(generate_series(1,1000000000));\" &\nsleep 1\npg_ctl -w -t 5 -D data -l server.log stop\n\nand i got error\n=================================================================\n==1022892==ERROR: AddressSanitizer: heap-use-after-free on address\n0x62500004c640 at pc 0x562435348750 bp 0x7ffee8487e60 sp 0x7ffee8487e50\nREAD of size 8 at 0x62500004c640 thread T0\n---\n\nwith backtrace:\n\nCore was generated by `postgres: andrew regression [local] INSERT\n '.\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:50\n50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007fa8fd008859 in __GI_abort () at abort.c:79\n#2 0x000056243471eae2 in __sanitizer::Abort() ()\n#3 0x000056243472968c in __sanitizer::Die() ()\n#4 0x000056243470ad1c in\n__asan::ScopedInErrorReport::~ScopedInErrorReport() ()\n#5 0x000056243470a793 in __asan::ReportGenericError(unsigned long,\nunsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned\nint, bool) ()\n#6 0x000056243470b5db in __asan_report_load8 ()\n#7 0x0000562435348750 in DropRelFileNodesAllBuffers\n(smgr_reln=smgr_reln@entry=0x62500004c640, nnodes=nnodes@entry=1) at\nbufmgr.c:3211\n#8 0x00005624353ec8a8 in smgrdounlinkall (rels=rels@entry=0x62500004c640,\nnrels=nrels@entry=1, isRedo=isRedo@entry=false) at smgr.c:397\n#9 0x0000562434aa76e1 in gtt_storage_removeall (code=<optimized out>,\narg=<optimized out>) at storage_gtt.c:726\n#10 0x0000562435371962 in shmem_exit (code=code@entry=1) at ipc.c:236\n#11 0x0000562435371d4f in proc_exit_prepare (code=code@entry=1) at ipc.c:194\n#12 0x0000562435371f74 in proc_exit (code=code@entry=1) at ipc.c:107\n#13 0x000056243581e35c in errfinish (filename=<optimized out>,\nfilename@entry=0x562435b800e0 \"postgres.c\", lineno=lineno@entry=3191,\nfuncname=funcname@entry=0x562435b836a0 <__func__.26025>\n\"ProcessInterrupts\") at elog.c:666\n#14 0x00005624353f5f86 in ProcessInterrupts () at postgres.c:3191\n#15 0x0000562434eb26d6 in ExecProjectSet (pstate=0x62500003f150) at\nnodeProjectSet.c:51\n#16 0x0000562434eaae8e in ExecProcNode (node=0x62500003f150) at\n../../../src/include/executor/executor.h:257\n#17 ExecModifyTable (pstate=0x62500003ec98) at nodeModifyTable.c:2429\n#18 0x0000562434df5755 in ExecProcNodeFirst (node=0x62500003ec98) at\nexecProcnode.c:463\n#19 0x0000562434dd678a in ExecProcNode (node=0x62500003ec98) 
at\n../../../src/include/executor/executor.h:257\n#20 ExecutePlan (estate=estate@entry=0x62500003ea20,\nplanstate=0x62500003ec98, use_parallel_mode=<optimized out>,\nuse_parallel_mode@entry=false, operation=operation@entry=CMD_INSERT,\nsendTuples=false, numberTuples=numberTuples@entry=0,\ndirection=ForwardScanDirection,\n dest=0x625000045550, execute_once=true) at execMain.c:1555\n#21 0x0000562434dd9867 in standard_ExecutorRun (queryDesc=0x6190000015a0,\ndirection=ForwardScanDirection, count=0, execute_once=execute_once@entry=true)\nat execMain.c:361\n#22 0x0000562434dd9a83 in ExecutorRun\n(queryDesc=queryDesc@entry=0x6190000015a0,\ndirection=direction@entry=ForwardScanDirection, count=count@entry=0,\nexecute_once=execute_once@entry=true) at execMain.c:305\n#23 0x0000562435401be6 in ProcessQuery (plan=plan@entry=0x625000045480,\nsourceText=0x625000005220 \"insert into temp_table_test_statistics\nvalues(generate_series(1,1000000000));\", params=0x0, queryEnv=0x0,\ndest=dest@entry=0x625000045550, qc=qc@entry=0x7ffee84886d0)\n at pquery.c:160\n#24 0x0000562435404a32 in PortalRunMulti (portal=portal@entry=0x625000020a20,\nisTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,\ndest=dest@entry=0x625000045550, altdest=altdest@entry=0x625000045550,\nqc=qc@entry=0x7ffee84886d0)\n at pquery.c:1274\n#25 0x000056243540598d in PortalRun (portal=portal@entry=0x625000020a20,\ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\nrun_once=run_once@entry=true, dest=dest@entry=0x625000045550,\naltdest=altdest@entry=0x625000045550, qc=<optimized out>)\n at pquery.c:788\n#26 0x00005624353fa917 in exec_simple_query\n(query_string=query_string@entry=0x625000005220\n\"insert into temp_table_test_statistics\nvalues(generate_series(1,1000000000));\") at postgres.c:1214\n#27 0x00005624353ff61d in PostgresMain (dbname=dbname@entry=0x629000011278\n\"regression\", username=username@entry=0x629000011258 \"andrew\") at\npostgres.c:4497\n#28 
0x00005624351f65c7 in BackendRun (port=port@entry=0x615000002d80) at\npostmaster.c:4560\n#29 0x00005624351ff1c5 in BackendStartup (port=port@entry=0x615000002d80)\nat postmaster.c:4288\n#30 0x00005624351ff970 in ServerLoop () at postmaster.c:1801\n#31 0x0000562435201da4 in PostmasterMain (argc=3, argv=<optimized out>) at\npostmaster.c:1473\n#32 0x0000562434f3ab2d in main (argc=3, argv=0x603000000280) at main.c:198\n---\n\nI've built the server with sanitizers using gcc 9 as following:\nCPPFLAGS=\"-Og -fsanitize=address -fsanitize=undefined\n-fno-sanitize=nonnull-attribute -fno-sanitize-recover\n-fno-sanitize=alignment -fstack-protector\" LDFLAGS='-fsanitize=address\n-fsanitize=undefined -static-libasan' ./configure --enable-tap-tests\n--enable-debug", "msg_date": "Wed, 6 Oct 2021 21:09:36 +0700", "msg_from": "Andrew Bille <andrewbille@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Andrew Bille <andrewbille@gmail.com> 于2021年10月7日周四 上午12:30写道:\n\n> On master with the v54 patches applied the following script leads to crash:\n>\nThank you for pointing it out.\nThis is a bug that occurs during transaction rollback and process exit, I\nfixed it, please confirm it.\n\n\nWenjing\n\nexport\n> ASAN_OPTIONS=detect_leaks=0:abort_on_error=1:disable_coredump=0:strict_string_checks=1:check_initialization_order=1:strict_init_order=1\n> initdb -D data\n> pg_ctl -w -t 5 -D data -l server.log start\n> psql -c \"create global temp table tmp_table_test_statistics(a int); insert\n> into temp_table_test_statistics values(generate_series(1,1000000000));\" &\n> sleep 1\n> pg_ctl -w -t 5 -D data -l server.log stop\n>\n> and i got error\n> =================================================================\n> ==1022892==ERROR: AddressSanitizer: heap-use-after-free on address\n> 0x62500004c640 at pc 0x562435348750 bp 0x7ffee8487e60 sp 0x7ffee8487e50\n> READ of size 8 at 0x62500004c640 thread T0\n> ---\n>\n> with backtrace:\n>\n> Core was generated by `postgres: andrew regression [local] INSERT\n> '.\n> Program terminated with signal SIGABRT, Aborted.\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> 50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n> (gdb) bt\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> #1 0x00007fa8fd008859 in __GI_abort () at abort.c:79\n> #2 
0x000056243471eae2 in __sanitizer::Abort() ()\n> #3 0x000056243472968c in __sanitizer::Die() ()\n> #4 0x000056243470ad1c in\n> __asan::ScopedInErrorReport::~ScopedInErrorReport() ()\n> #5 0x000056243470a793 in __asan::ReportGenericError(unsigned long,\n> unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned\n> int, bool) ()\n> #6 0x000056243470b5db in __asan_report_load8 ()\n> #7 0x0000562435348750 in DropRelFileNodesAllBuffers\n> (smgr_reln=smgr_reln@entry=0x62500004c640, nnodes=nnodes@entry=1) at\n> bufmgr.c:3211\n> #8 0x00005624353ec8a8 in smgrdounlinkall (rels=rels@entry=0x62500004c640,\n> nrels=nrels@entry=1, isRedo=isRedo@entry=false) at smgr.c:397\n> #9 0x0000562434aa76e1 in gtt_storage_removeall (code=<optimized out>,\n> arg=<optimized out>) at storage_gtt.c:726\n> #10 0x0000562435371962 in shmem_exit (code=code@entry=1) at ipc.c:236\n> #11 0x0000562435371d4f in proc_exit_prepare (code=code@entry=1) at\n> ipc.c:194\n> #12 0x0000562435371f74 in proc_exit (code=code@entry=1) at ipc.c:107\n> #13 0x000056243581e35c in errfinish (filename=<optimized out>,\n> filename@entry=0x562435b800e0 \"postgres.c\", lineno=lineno@entry=3191,\n> funcname=funcname@entry=0x562435b836a0 <__func__.26025>\n> \"ProcessInterrupts\") at elog.c:666\n> #14 0x00005624353f5f86 in ProcessInterrupts () at postgres.c:3191\n> #15 0x0000562434eb26d6 in ExecProjectSet (pstate=0x62500003f150) at\n> nodeProjectSet.c:51\n> #16 0x0000562434eaae8e in ExecProcNode (node=0x62500003f150) at\n> ../../../src/include/executor/executor.h:257\n> #17 ExecModifyTable (pstate=0x62500003ec98) at nodeModifyTable.c:2429\n> #18 0x0000562434df5755 in ExecProcNodeFirst (node=0x62500003ec98) at\n> execProcnode.c:463\n> #19 0x0000562434dd678a in ExecProcNode (node=0x62500003ec98) at\n> ../../../src/include/executor/executor.h:257\n> #20 ExecutePlan (estate=estate@entry=0x62500003ea20,\n> planstate=0x62500003ec98, use_parallel_mode=<optimized out>,\n> use_parallel_mode@entry=false, 
operation=operation@entry=CMD_INSERT,\n> sendTuples=false, numberTuples=numberTuples@entry=0,\n> direction=ForwardScanDirection,\n> dest=0x625000045550, execute_once=true) at execMain.c:1555\n> #21 0x0000562434dd9867 in standard_ExecutorRun (queryDesc=0x6190000015a0,\n> direction=ForwardScanDirection, count=0, execute_once=execute_once@entry=true)\n> at execMain.c:361\n> #22 0x0000562434dd9a83 in ExecutorRun (queryDesc=queryDesc@entry=0x6190000015a0,\n> direction=direction@entry=ForwardScanDirection, count=count@entry=0,\n> execute_once=execute_once@entry=true) at execMain.c:305\n> #23 0x0000562435401be6 in ProcessQuery (plan=plan@entry=0x625000045480,\n> sourceText=0x625000005220 \"insert into temp_table_test_statistics\n> values(generate_series(1,1000000000));\", params=0x0, queryEnv=0x0,\n> dest=dest@entry=0x625000045550, qc=qc@entry=0x7ffee84886d0)\n> at pquery.c:160\n> #24 0x0000562435404a32 in PortalRunMulti (portal=portal@entry=0x625000020a20,\n> isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,\n> dest=dest@entry=0x625000045550, altdest=altdest@entry=0x625000045550,\n> qc=qc@entry=0x7ffee84886d0)\n> at pquery.c:1274\n> #25 0x000056243540598d in PortalRun (portal=portal@entry=0x625000020a20,\n> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\n> run_once=run_once@entry=true, dest=dest@entry=0x625000045550,\n> altdest=altdest@entry=0x625000045550, qc=<optimized out>)\n> at pquery.c:788\n> #26 0x00005624353fa917 in exec_simple_query\n> (query_string=query_string@entry=0x625000005220 \"insert into\n> temp_table_test_statistics values(generate_series(1,1000000000));\") at\n> postgres.c:1214\n> #27 0x00005624353ff61d in PostgresMain (dbname=dbname@entry=0x629000011278\n> \"regression\", username=username@entry=0x629000011258 \"andrew\") at\n> postgres.c:4497\n> #28 0x00005624351f65c7 in BackendRun (port=port@entry=0x615000002d80) at\n> postmaster.c:4560\n> #29 0x00005624351ff1c5 in BackendStartup 
(port=port@entry=0x615000002d80)\n> at postmaster.c:4288\n> #30 0x00005624351ff970 in ServerLoop () at postmaster.c:1801\n> #31 0x0000562435201da4 in PostmasterMain (argc=3, argv=<optimized out>) at\n> postmaster.c:1473\n> #32 0x0000562434f3ab2d in main (argc=3, argv=0x603000000280) at main.c:198\n> ---\n>\n> I've built the server with sanitizers using gcc 9 as following:\n> CPPFLAGS=\"-Og -fsanitize=address -fsanitize=undefined\n> -fno-sanitize=nonnull-attribute -fno-sanitize-recover\n> -fno-sanitize=alignment -fstack-protector\" LDFLAGS='-fsanitize=address\n> -fsanitize=undefined -static-libasan' ./configure --enable-tap-tests\n> --enable-debug\n>\n>", "msg_date": "Sat, 9 Oct 2021 15:41:26 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Thanks for the fix. It works for me.\n\nNow I'm exploring another crash related to GTT, but I need a few days to\npresent a simple repro.\n\nOn Sat, Oct 9, 2021 at 2:41 PM wenjing <wjzeng2012@gmail.com> wrote:\n\n>\n> Thank you for pointing it out.\n> This is a bug that occurs during transaction rollback and process exit, I fixed it, please confirm it.\n>\n> Wenjing ", "msg_date": "Wed, 13 Oct 2021 12:08:08 +0700", "msg_from": "Andrew Bille <andrewbille@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2021年10月13日 13:08,Andrew Bille <andrewbille@gmail.com> 写道:\n> \n> Thanks for the fix. 
It works for me.\n> \n> Now I'm exploring another crash related to GTT, but I need a few days to present a simple repro.\n\nBe deeply grateful.\nPerhaps you can give the stack of problems so that you can start analyzing them as soon as possible.\n\n\nWenjing\n\n> \n> On Sat, Oct 9, 2021 at 2:41 PM wenjing <wjzeng2012@gmail.com <mailto:wjzeng2012@gmail.com>> wrote:\n> \n> Thank you for pointing it out. \n> This is a bug that occurs during transaction rollback and process exit, I fixed it, please confirm it.\n> \n> Wenjing ", "msg_date": "Thu, 14 Oct 2021 16:28:53 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On master with the v55 patches applied the following script leads to crash:\ninitdb -D data\npg_ctl -w -t 5 -D data -l server.log start\n\npsql -t -c \"begin; create global temp table gtt_with_index(a int primary\nkey, b text); commit; select pg_sleep(5);\" >psql1.log &\npsql -t -c \"select pg_sleep(1); create index idx_b on gtt_with_index(b);\"\n>psql2.log &\nfor i in `seq 40`; do (psql -t -c \"select pg_sleep(1); insert into\ngtt_with_index values(1,'test');\" &); done\n\nsleep 10\n\n\nand I got crash\nINSERT 0 1\n...\nINSERT 0 1\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\nWARNING: terminating connection 
because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back the\ncurrent transaction and exit, because another server process exited\nabnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\n\nand some coredumps with the following stack:\n\n[New LWP 1821493]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: andrew regression [local] INSERT\n '.\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f021d809859 in __GI_abort () at abort.c:79\n#2 0x0000564dc1bd22e8 in ExceptionalCondition\n(conditionName=conditionName@entry=0x564dc1c5c957\n\"index->rd_index->indisvalid\", errorType=errorType@entry=0x564dc1c2a00b\n\"FailedAssertion\", fileName=fileName@entry=0x564dc1c5c854 \"storage_gtt.c\",\n lineNumber=lineNumber@entry=1381) at assert.c:69\n#3 0x0000564dc185778b in init_gtt_storage\n(operation=operation@entry=CMD_INSERT,\nresultRelInfo=resultRelInfo@entry=0x564dc306f6c0) at storage_gtt.c:1381\n#4 0x0000564dc194c888 in ExecInsert (mtstate=0x564dc306f4a8,\nresultRelInfo=0x564dc306f6c0, slot=0x564dc30706d0, planSlot=0x564dc306fca0,\nestate=0x564dc306f230, canSetTag=<optimized out>) at nodeModifyTable.c:638\n#5 0x0000564dc194d945 in ExecModifyTable (pstate=<optimized out>) at\nnodeModifyTable.c:2565\n#6 0x0000564dc191ca83 in ExecProcNode (node=0x564dc306f4a8) at\n../../../src/include/executor/executor.h:257\n#7 ExecutePlan 
(execute_once=<optimized out>, dest=0x564dc310ed80,\ndirection=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\noperation=CMD_INSERT, use_parallel_mode=<optimized out>,\nplanstate=0x564dc306f4a8, estate=0x564dc306f230) at execMain.c:1555\n#8 standard_ExecutorRun (queryDesc=0x564dc306bce0, direction=<optimized\nout>, count=0, execute_once=<optimized out>) at execMain.c:361\n#9 0x0000564dc1ab47a0 in ProcessQuery (plan=<optimized out>,\nsourceText=0x564dc3049a30 \"select pg_sleep(1); insert into gtt_with_index\nvalues(1,'test');\", params=0x0, queryEnv=0x0, dest=0x564dc310ed80,\nqc=0x7ffd3a6cf2e0) at pquery.c:160\n#10 0x0000564dc1ab52e2 in PortalRunMulti (portal=portal@entry=0x564dc30acd80,\nisTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,\ndest=dest@entry=0x564dc310ed80, altdest=altdest@entry=0x564dc310ed80,\nqc=qc@entry=0x7ffd3a6cf2e0)\n at pquery.c:1274\n#11 0x0000564dc1ab5861 in PortalRun (portal=portal@entry=0x564dc30acd80,\ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\nrun_once=run_once@entry=true, dest=dest@entry=0x564dc310ed80,\naltdest=altdest@entry=0x564dc310ed80, qc=0x7ffd3a6cf2e0)\n at pquery.c:788\n#12 0x0000564dc1ab1522 in exec_simple_query (query_string=0x564dc3049a30\n\"select pg_sleep(1); insert into gtt_with_index values(1,'test');\") at\npostgres.c:1214\n#13 0x0000564dc1ab327a in PostgresMain (dbname=<optimized out>,\nusername=<optimized out>) at postgres.c:4497\n#14 0x0000564dc1a1f539 in BackendRun (port=<optimized out>, port=<optimized\nout>) at postmaster.c:4560\n#15 BackendStartup (port=<optimized out>) at postmaster.c:4288\n#16 ServerLoop () at postmaster.c:1801\n#17 0x0000564dc1a2053c in PostmasterMain (argc=<optimized out>,\nargv=0x564dc3043fc0) at postmaster.c:1473\n#18 0x0000564dc1750180 in main (argc=3, argv=0x564dc3043fc0) at main.c:198\n(gdb) q\n\n\nI've built the server using gcc 9 as following:\n./configure --enable-debug --enable-cassert\n\nThanks to Alexander 
Lakhin for simplifying the repro.\n\nOn Thu, Oct 14, 2021 at 3:29 PM wenjing zeng <wjzeng2012@gmail.com> wrote:\n\n>\n> Be deeply grateful.\n> Perhaps you can give the stack of problems so that you can start analyzing\n> them as soon as possible.\n>\n> Wenjing\n>\n>", "msg_date": "Fri, 15 Oct 2021 14:44:39 +0700", "msg_from": "Andrew Bille <andrewbille@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Andrew Bille <andrewbille@gmail.com> 于2021年10月15日周五 下午3:44写道:\n\n> On master with the v55 patches applied the following script leads to crash:\n> initdb -D data\n> pg_ctl -w -t 5 -D data -l server.log start\n>\n> psql -t -c \"begin; create global temp table gtt_with_index(a int primary\n> key, b text); commit; select pg_sleep(5);\" >psql1.log &\n> psql -t -c \"select pg_sleep(1); create index idx_b on gtt_with_index(b);\"\n> >psql2.log &\n> for i in `seq 40`; do (psql -t -c \"select pg_sleep(1); insert into\n> gtt_with_index values(1,'test');\" &); done\n>\n> sleep 10\n>\n>\n> and 
I got crash\n> INSERT 0 1\n> ...\n> INSERT 0 1\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n> WARNING: terminating connection because of crash of another server process\n> DETAIL: The postmaster has commanded this server process to roll back the\n> current transaction and exit, because another server process exited\n> abnormally and possibly corrupted shared memory.\n> HINT: In a moment you should be able to reconnect to the database and\n> repeat your command.\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n>\n> and some coredumps with the following stack:\n>\n> [New LWP 1821493]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n> Core was generated by `postgres: andrew regression [local] INSERT\n> '.\n> Program terminated with signal SIGABRT, Aborted.\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> 50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n> (gdb) bt\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> #1 0x00007f021d809859 in __GI_abort () at abort.c:79\n> #2 0x0000564dc1bd22e8 in ExceptionalCondition\n> (conditionName=conditionName@entry=0x564dc1c5c957\n> \"index->rd_index->indisvalid\", errorType=errorType@entry=0x564dc1c2a00b\n> \"FailedAssertion\", fileName=fileName@entry=0x564dc1c5c854\n> \"storage_gtt.c\",\n> lineNumber=lineNumber@entry=1381) at assert.c:69\n> #3 0x0000564dc185778b in init_gtt_storage (operation=operation@entry=CMD_INSERT,\n> resultRelInfo=resultRelInfo@entry=0x564dc306f6c0) at storage_gtt.c:1381\n> #4 0x0000564dc194c888 in ExecInsert (mtstate=0x564dc306f4a8,\n> resultRelInfo=0x564dc306f6c0, slot=0x564dc30706d0, 
planSlot=0x564dc306fca0,\n> estate=0x564dc306f230, canSetTag=<optimized out>) at nodeModifyTable.c:638\n> #5 0x0000564dc194d945 in ExecModifyTable (pstate=<optimized out>) at\n> nodeModifyTable.c:2565\n> #6 0x0000564dc191ca83 in ExecProcNode (node=0x564dc306f4a8) at\n> ../../../src/include/executor/executor.h:257\n> #7 ExecutePlan (execute_once=<optimized out>, dest=0x564dc310ed80,\n> direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\n> operation=CMD_INSERT, use_parallel_mode=<optimized out>,\n> planstate=0x564dc306f4a8, estate=0x564dc306f230) at execMain.c:1555\n> #8 standard_ExecutorRun (queryDesc=0x564dc306bce0, direction=<optimized\n> out>, count=0, execute_once=<optimized out>) at execMain.c:361\n> #9 0x0000564dc1ab47a0 in ProcessQuery (plan=<optimized out>,\n> sourceText=0x564dc3049a30 \"select pg_sleep(1); insert into gtt_with_index\n> values(1,'test');\", params=0x0, queryEnv=0x0, dest=0x564dc310ed80,\n> qc=0x7ffd3a6cf2e0) at pquery.c:160\n> #10 0x0000564dc1ab52e2 in PortalRunMulti (portal=portal@entry=0x564dc30acd80,\n> isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,\n> dest=dest@entry=0x564dc310ed80, altdest=altdest@entry=0x564dc310ed80,\n> qc=qc@entry=0x7ffd3a6cf2e0)\n> at pquery.c:1274\n> #11 0x0000564dc1ab5861 in PortalRun (portal=portal@entry=0x564dc30acd80,\n> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\n> run_once=run_once@entry=true, dest=dest@entry=0x564dc310ed80,\n> altdest=altdest@entry=0x564dc310ed80, qc=0x7ffd3a6cf2e0)\n> at pquery.c:788\n> #12 0x0000564dc1ab1522 in exec_simple_query (query_string=0x564dc3049a30\n> \"select pg_sleep(1); insert into gtt_with_index values(1,'test');\") at\n> postgres.c:1214\n> #13 0x0000564dc1ab327a in PostgresMain (dbname=<optimized out>,\n> username=<optimized out>) at postgres.c:4497\n> #14 0x0000564dc1a1f539 in BackendRun (port=<optimized out>,\n> port=<optimized out>) at postmaster.c:4560\n> #15 BackendStartup (port=<optimized 
out>) at postmaster.c:4288\n> #16 ServerLoop () at postmaster.c:1801\n> #17 0x0000564dc1a2053c in PostmasterMain (argc=<optimized out>,\n> argv=0x564dc3043fc0) at postmaster.c:1473\n> #18 0x0000564dc1750180 in main (argc=3, argv=0x564dc3043fc0) at main.c:198\n> (gdb) q\n>\n>\n> I've built the server using gcc 9 as following:\n> ./configure --enable-debug --enable-cassert\n>\n> Thanks to Alexander Lakhin for simplifying the repro.\n>\n> On Thu, Oct 14, 2021 at 3:29 PM wenjing zeng <wjzeng2012@gmail.com> wrote:\n>\n>>\n>> Be deeply grateful.\n>> Perhaps you can give the stack of problems so that you can start\n>> analyzing them as soon as possible.\n>>\n>> Wenjing\n>>\n>>\nHi Andrew\nI fixed the problem, please confirm again.\nThanks\n\nWenjing", "msg_date": "Mon, 18 Oct 2021 20:00:05 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Another thanks for the fix. It works for me.\n\nBut I found another crash!\n\nOn master with the v56 patches applied:\n\ninitdb -D data\npg_ctl -w -t 5 -D data -l server.log start\necho \"create global temp table t(i int4); insert into t values (1); vacuum\nt;\" > tmp.sql\npsql < tmp.sql\n\nCREATE TABLE\nINSERT 0 1\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\n\nwith following stack:\n[New LWP 2192409]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: andrew regression [local] VACUUM\n '.\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007fb26b558859 in __GI_abort () at 
abort.c:79\n#2 0x00005627ddd8466c in ExceptionalCondition\n(conditionName=conditionName@entry=0x5627dde153d0\n\"TransactionIdIsNormal(relfrozenxid)\", errorType=errorType@entry=0x5627ddde100b\n\"FailedAssertion\", fileName=fileName@entry=0x5627dddfa697 \"vacuum.c\",\nlineNumber=lineNumber@entry=1170) at assert.c:69\n#3 0x00005627dda70808 in vacuum_xid_failsafe_check\n(relfrozenxid=<optimized out>, relminmxid=<optimized out>) at vacuum.c:1170\n#4 0x00005627dd8db7ee in lazy_check_wraparound_failsafe\n(vacrel=vacrel@entry=0x5627df5c9680) at vacuumlazy.c:2607\n#5 0x00005627dd8ded18 in lazy_scan_heap (vacrel=vacrel@entry=0x5627df5c9680,\nparams=params@entry=0x7fffb3d36100, aggressive=aggressive@entry=true) at\nvacuumlazy.c:978\n#6 0x00005627dd8e019a in heap_vacuum_rel (rel=0x7fb26218af70,\nparams=0x7fffb3d36100, bstrategy=<optimized out>) at vacuumlazy.c:644\n#7 0x00005627dda70033 in table_relation_vacuum (bstrategy=<optimized out>,\nparams=0x7fffb3d36100, rel=0x7fb26218af70) at\n../../../src/include/access/tableam.h:1678\n#8 vacuum_rel (relid=16385, relation=<optimized out>,\nparams=params@entry=0x7fffb3d36100)\nat vacuum.c:2124\n#9 0x00005627dda71624 in vacuum (relations=0x5627df610598,\nparams=params@entry=0x7fffb3d36100, bstrategy=<optimized out>,\nbstrategy@entry=0x0, isTopLevel=isTopLevel@entry=true) at vacuum.c:476\n#10 0x00005627dda71eb1 in ExecVacuum (pstate=pstate@entry=0x5627df567440,\nvacstmt=vacstmt@entry=0x5627df545e70, isTopLevel=isTopLevel@entry=true) at\nvacuum.c:269\n#11 0x00005627ddc4a8cc in standard_ProcessUtility (pstmt=0x5627df5461c0,\nqueryString=0x5627df545380 \"vacuum t;\", readOnlyTree=<optimized out>,\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\ndest=0x5627df5462b0, qc=0x7fffb3d36470) at utility.c:858\n#12 0x00005627ddc4ada1 in ProcessUtility (pstmt=pstmt@entry=0x5627df5461c0,\nqueryString=<optimized out>, readOnlyTree=<optimized out>,\ncontext=context@entry=PROCESS_UTILITY_TOPLEVEL, params=<optimized 
out>,\nqueryEnv=<optimized out>, dest=0x5627df5462b0, qc=0x7fffb3d36470) at\nutility.c:527\n#13 0x00005627ddc4822d in PortalRunUtility (portal=portal@entry=0x5627df5a67e0,\npstmt=pstmt@entry=0x5627df5461c0, isTopLevel=isTopLevel@entry=true,\nsetHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x5627df5462b0,\nqc=qc@entry=0x7fffb3d36470) at pquery.c:1155\n#14 0x00005627ddc48551 in PortalRunMulti (portal=portal@entry=0x5627df5a67e0,\nisTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,\ndest=dest@entry=0x5627df5462b0, altdest=altdest@entry=0x5627df5462b0,\nqc=qc@entry=0x7fffb3d36470) at pquery.c:1312\n#15 0x00005627ddc4896c in PortalRun (portal=portal@entry=0x5627df5a67e0,\ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\nrun_once=run_once@entry=true, dest=dest@entry=0x5627df5462b0,\naltdest=altdest@entry=0x5627df5462b0, qc=0x7fffb3d36470) at pquery.c:788\n#16 0x00005627ddc44afb in exec_simple_query\n(query_string=query_string@entry=0x5627df545380\n\"vacuum t;\") at postgres.c:1214\n#17 0x00005627ddc469df in PostgresMain (dbname=<optimized out>,\nusername=<optimized out>) at postgres.c:4497\n#18 0x00005627ddb9fe7d in BackendRun (port=port@entry=0x5627df566580) at\npostmaster.c:4560\n#19 0x00005627ddba3001 in BackendStartup (port=port@entry=0x5627df566580)\nat postmaster.c:4288\n#20 0x00005627ddba3248 in ServerLoop () at postmaster.c:1801\n#21 0x00005627ddba482a in PostmasterMain (argc=3, argv=<optimized out>) at\npostmaster.c:1473\n#22 0x00005627ddae4d1d in main (argc=3, argv=0x5627df53f750) at main.c:198\n\nOn Mon, Oct 18, 2021 at 7:00 PM wenjing <wjzeng2012@gmail.com> wrote:\n\n> Hi Andrew\n> I fixed the problem, please confirm again.\n> Thanks\n>\n> Wenjing\n>", "msg_date": "Tue, 19 Oct 2021 14:39:35 +0700", "msg_from": "Andrew Bille <andrewbille@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Andrew Bille <andrewbille@gmail.com> 于2021年10月20日周三 上午2:59写道:\n\n> Another thanks for the fix. It works for me.\n>\n> But I found another crash!\n>\nThis is a check code that was added this year, but it did find a problem\nand I fixed it.\nPlease review the new code(v57) again.\n\n\nWenjing\n\n\n>\n> On master with the v56 patches applied:\n>\n> initdb -D data\n> pg_ctl -w -t 5 -D data -l server.log start\n> echo \"create global temp table t(i int4); insert into t values (1); vacuum\n> t;\" > tmp.sql\n> psql < tmp.sql\n>\n> CREATE TABLE\n> INSERT 0 1\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n>\n> with following stack:\n> [New LWP 2192409]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n> Core was generated by `postgres: andrew regression [local] VACUUM\n> '.\n> Program terminated with signal SIGABRT, Aborted.\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> 50 
../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n> (gdb) bt\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> #1 0x00007fb26b558859 in __GI_abort () at abort.c:79\n> #2 0x00005627ddd8466c in ExceptionalCondition\n> (conditionName=conditionName@entry=0x5627dde153d0\n> \"TransactionIdIsNormal(relfrozenxid)\", errorType=errorType@entry=0x5627ddde100b\n> \"FailedAssertion\", fileName=fileName@entry=0x5627dddfa697 \"vacuum.c\",\n> lineNumber=lineNumber@entry=1170) at assert.c:69\n> #3 0x00005627dda70808 in vacuum_xid_failsafe_check\n> (relfrozenxid=<optimized out>, relminmxid=<optimized out>) at vacuum.c:1170\n> #4 0x00005627dd8db7ee in lazy_check_wraparound_failsafe\n> (vacrel=vacrel@entry=0x5627df5c9680) at vacuumlazy.c:2607\n> #5 0x00005627dd8ded18 in lazy_scan_heap (vacrel=vacrel@entry=0x5627df5c9680,\n> params=params@entry=0x7fffb3d36100, aggressive=aggressive@entry=true) at\n> vacuumlazy.c:978\n> #6 0x00005627dd8e019a in heap_vacuum_rel (rel=0x7fb26218af70,\n> params=0x7fffb3d36100, bstrategy=<optimized out>) at vacuumlazy.c:644\n> #7 0x00005627dda70033 in table_relation_vacuum (bstrategy=<optimized\n> out>, params=0x7fffb3d36100, rel=0x7fb26218af70) at\n> ../../../src/include/access/tableam.h:1678\n> #8 vacuum_rel (relid=16385, relation=<optimized out>, params=params@entry=0x7fffb3d36100)\n> at vacuum.c:2124\n> #9 0x00005627dda71624 in vacuum (relations=0x5627df610598,\n> params=params@entry=0x7fffb3d36100, bstrategy=<optimized out>,\n> bstrategy@entry=0x0, isTopLevel=isTopLevel@entry=true) at vacuum.c:476\n> #10 0x00005627dda71eb1 in ExecVacuum (pstate=pstate@entry=0x5627df567440,\n> vacstmt=vacstmt@entry=0x5627df545e70, isTopLevel=isTopLevel@entry=true)\n> at vacuum.c:269\n> #11 0x00005627ddc4a8cc in standard_ProcessUtility (pstmt=0x5627df5461c0,\n> queryString=0x5627df545380 \"vacuum t;\", readOnlyTree=<optimized out>,\n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n> dest=0x5627df5462b0, 
qc=0x7fffb3d36470) at utility.c:858\n> #12 0x00005627ddc4ada1 in ProcessUtility (pstmt=pstmt@entry=0x5627df5461c0,\n> queryString=<optimized out>, readOnlyTree=<optimized out>,\n> context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=<optimized out>,\n> queryEnv=<optimized out>, dest=0x5627df5462b0, qc=0x7fffb3d36470) at\n> utility.c:527\n> #13 0x00005627ddc4822d in PortalRunUtility (portal=portal@entry=0x5627df5a67e0,\n> pstmt=pstmt@entry=0x5627df5461c0, isTopLevel=isTopLevel@entry=true,\n> setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x5627df5462b0,\n> qc=qc@entry=0x7fffb3d36470) at pquery.c:1155\n> #14 0x00005627ddc48551 in PortalRunMulti (portal=portal@entry=0x5627df5a67e0,\n> isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,\n> dest=dest@entry=0x5627df5462b0, altdest=altdest@entry=0x5627df5462b0,\n> qc=qc@entry=0x7fffb3d36470) at pquery.c:1312\n> #15 0x00005627ddc4896c in PortalRun (portal=portal@entry=0x5627df5a67e0,\n> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\n> run_once=run_once@entry=true, dest=dest@entry=0x5627df5462b0,\n> altdest=altdest@entry=0x5627df5462b0, qc=0x7fffb3d36470) at pquery.c:788\n> #16 0x00005627ddc44afb in exec_simple_query\n> (query_string=query_string@entry=0x5627df545380 \"vacuum t;\") at\n> postgres.c:1214\n> #17 0x00005627ddc469df in PostgresMain (dbname=<optimized out>,\n> username=<optimized out>) at postgres.c:4497\n> #18 0x00005627ddb9fe7d in BackendRun (port=port@entry=0x5627df566580) at\n> postmaster.c:4560\n> #19 0x00005627ddba3001 in BackendStartup (port=port@entry=0x5627df566580)\n> at postmaster.c:4288\n> #20 0x00005627ddba3248 in ServerLoop () at postmaster.c:1801\n> #21 0x00005627ddba482a in PostmasterMain (argc=3, argv=<optimized out>) at\n> postmaster.c:1473\n> #22 0x00005627ddae4d1d in main (argc=3, argv=0x5627df53f750) at main.c:198\n>\n> On Mon, Oct 18, 2021 at 7:00 PM wenjing <wjzeng2012@gmail.com> wrote:\n>\n>> Hi Andrew\n>> I fixed the 
problem, please confirm again.\n>> Thanks\n>>\n>> Wenjing\n>>\n>", "msg_date": "Thu, 21 Oct 2021 17:25:31 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Thanks, the vacuum is fixed\n\nBut I found another crash (on v57 patches), reproduced with:\n\npsql -t -c \"create global temp table t (a integer); insert into t values\n(1); select count(*) from t group by t;\"\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\n\nwith trace:\n\n[New LWP 2580215]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: andrew postgres [local] SELECT\n '.\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f258d482859 in __GI_abort () at abort.c:79\n#2 0x000055ad0be8878f in ExceptionalCondition\n(conditionName=conditionName@entry=0x55ad0bf19743\n\"gtt_rnode->att_stat_tups[i]\", errorType=errorType@entry=0x55ad0bee500b\n\"FailedAssertion\", fileName=fileName@entry=0x55ad0bf1966b \"storage_gtt.c\",\nlineNumber=lineNumber@entry=902) at assert.c:69\n#3 0x000055ad0ba9379f in get_gtt_att_statistic (reloid=<optimized out>,\nattnum=0, inh=<optimized out>) at storage_gtt.c:902\n#4 0x000055ad0be35625 in examine_simple_variable\n(root=root@entry=0x55ad0c498748,\nvar=var@entry=0x55ad0c498c68, vardata=vardata@entry=0x7fff06c9ebf0) at\nselfuncs.c:5391\n#5 0x000055ad0be36a89 in examine_variable (root=root@entry=0x55ad0c498748,\nnode=node@entry=0x55ad0c498c68, varRelid=varRelid@entry=0,\nvardata=vardata@entry=0x7fff06c9ebf0) at 
selfuncs.c:4990\n#6 0x000055ad0be3ad64 in estimate_num_groups (root=root@entry=0x55ad0c498748,\ngroupExprs=<optimized out>, input_rows=input_rows@entry=255,\npgset=pgset@entry=0x0, estinfo=estinfo@entry=0x0) at selfuncs.c:3455\n#7 0x000055ad0bc50835 in get_number_of_groups (root=root@entry=0x55ad0c498748,\npath_rows=255, gd=gd@entry=0x0, target_list=0x55ad0c498bb8) at\nplanner.c:3241\n#8 0x000055ad0bc5576f in create_ordinary_grouping_paths\n(root=root@entry=0x55ad0c498748,\ninput_rel=input_rel@entry=0x55ad0c3ce148,\ngrouped_rel=grouped_rel@entry=0x55ad0c4983f0,\nagg_costs=agg_costs@entry=0x7fff06c9edb0, gd=gd@entry=0x0,\nextra=extra@entry=0x7fff06c9ede0,\npartially_grouped_rel_p=0x7fff06c9eda8)\n at planner.c:3628\n#9 0x000055ad0bc55a72 in create_grouping_paths\n(root=root@entry=0x55ad0c498748,\ninput_rel=input_rel@entry=0x55ad0c3ce148, target=target@entry=0x55ad0c4c95d8,\ntarget_parallel_safe=target_parallel_safe@entry=true, gd=gd@entry=0x0) at\nplanner.c:3377\n#10 0x000055ad0bc5686d in grouping_planner (root=root@entry=0x55ad0c498748,\ntuple_fraction=<optimized out>, tuple_fraction@entry=0) at planner.c:1592\n#11 0x000055ad0bc57910 in subquery_planner (glob=glob@entry=0x55ad0c497880,\nparse=parse@entry=0x55ad0c3cdbb8, parent_root=parent_root@entry=0x0,\nhasRecursion=hasRecursion@entry=false, tuple_fraction=tuple_fraction@entry=0)\nat planner.c:1025\n#12 0x000055ad0bc57f36 in standard_planner (parse=0x55ad0c3cdbb8,\nquery_string=<optimized out>, cursorOptions=2048, boundParams=0x0) at\nplanner.c:406\n#13 0x000055ad0bc584d4 in planner (parse=parse@entry=0x55ad0c3cdbb8,\nquery_string=query_string@entry=0x55ad0c3cc470 \"create global temp table t\n(a integer); insert into t values (1); select count(*) from t group by t;\",\ncursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)\nat planner.c:277\n#14 0x000055ad0bd4855f in pg_plan_query\n(querytree=querytree@entry=0x55ad0c3cdbb8,\nquery_string=query_string@entry=0x55ad0c3cc470 \"create global temp 
table t\n(a integer); insert into t values (1); select count(*) from t group by t;\",\ncursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)\n at postgres.c:847\n#15 0x000055ad0bd4863b in pg_plan_queries (querytrees=0x55ad0c4986f0,\nquery_string=query_string@entry=0x55ad0c3cc470 \"create global temp table t\n(a integer); insert into t values (1); select count(*) from t group by t;\",\ncursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)\nat postgres.c:939\n#16 0x000055ad0bd48b20 in exec_simple_query\n(query_string=query_string@entry=0x55ad0c3cc470\n\"create global temp table t (a integer); insert into t values (1); select\ncount(*) from t group by t;\") at postgres.c:1133\n#17 0x000055ad0bd4aaf3 in PostgresMain (dbname=<optimized out>,\nusername=<optimized out>) at postgres.c:4497\n#18 0x000055ad0bca3f91 in BackendRun (port=port@entry=0x55ad0c3f1020) at\npostmaster.c:4560\n#19 0x000055ad0bca7115 in BackendStartup (port=port@entry=0x55ad0c3f1020)\nat postmaster.c:4288\n#20 0x000055ad0bca735c in ServerLoop () at postmaster.c:1801\n#21 0x000055ad0bca893e in PostmasterMain (argc=3, argv=<optimized out>) at\npostmaster.c:1473\n#22 0x000055ad0bbe8e31 in main (argc=3, argv=0x55ad0c3c6660) at main.c:198\n\nOn Thu, Oct 21, 2021 at 4:25 PM wenjing <wjzeng2012@gmail.com> wrote:\n\n>\n>\n> Andrew Bille <andrewbille@gmail.com> 于2021年10月20日周三 上午2:59写道:\n>\n>> Another thanks for the fix. 
It works for me.\n>>\n>> But I found another crash!\n>>\n> This is a check code that was added this year, but it did find a problem\n> and I fixed it.\n> Please review the new code(v57) again.\n>\n>\n
It works for me.But I found another crash!This is a check code that was added this year, but it did find a problem and I fixed it.Please review the new code(v57) again.", "msg_date": "Fri, 22 Oct 2021 16:11:25 +0700", "msg_from": "Andrew Bille <andrewbille@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Andrew Bille <andrewbille@gmail.com> 于2021年10月23日周六 下午9:22写道:\n\n> Thanks, the vacuum is fixed\n>\n> But I found another crash (on v57 patches), reproduced with:\n>\n> psql -t -c \"create global temp table t (a integer); insert into t values\n> (1); select count(*) from t group by t;\"\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n>\n> I missed whole row and system column. It has been fixed in v58.\nPlease review the new code(v58) again\n\n\nWenjing\n\nwith trace:\n>\n> [New LWP 2580215]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n> Core was generated by `postgres: andrew postgres [local] SELECT\n> '.\n> Program terminated with signal SIGABRT, Aborted.\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> 50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n> (gdb) bt\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> #1 0x00007f258d482859 in __GI_abort () at abort.c:79\n> #2 0x000055ad0be8878f in ExceptionalCondition\n> (conditionName=conditionName@entry=0x55ad0bf19743\n> \"gtt_rnode->att_stat_tups[i]\", errorType=errorType@entry=0x55ad0bee500b\n> \"FailedAssertion\", fileName=fileName@entry=0x55ad0bf1966b\n> \"storage_gtt.c\", lineNumber=lineNumber@entry=902) at assert.c:69\n> #3 0x000055ad0ba9379f in get_gtt_att_statistic (reloid=<optimized out>,\n> attnum=0, inh=<optimized out>) at storage_gtt.c:902\n> #4 
0x000055ad0be35625 in examine_simple_variable (root=root@entry=0x55ad0c498748,\n> var=var@entry=0x55ad0c498c68, vardata=vardata@entry=0x7fff06c9ebf0) at\n> selfuncs.c:5391\n> #5 0x000055ad0be36a89 in examine_variable (root=root@entry=0x55ad0c498748,\n> node=node@entry=0x55ad0c498c68, varRelid=varRelid@entry=0,\n> vardata=vardata@entry=0x7fff06c9ebf0) at selfuncs.c:4990\n> #6 0x000055ad0be3ad64 in estimate_num_groups (root=root@entry=0x55ad0c498748,\n> groupExprs=<optimized out>, input_rows=input_rows@entry=255,\n> pgset=pgset@entry=0x0, estinfo=estinfo@entry=0x0) at selfuncs.c:3455\n> #7 0x000055ad0bc50835 in get_number_of_groups (root=root@entry=0x55ad0c498748,\n> path_rows=255, gd=gd@entry=0x0, target_list=0x55ad0c498bb8) at\n> planner.c:3241\n> #8 0x000055ad0bc5576f in create_ordinary_grouping_paths (root=root@entry=0x55ad0c498748,\n> input_rel=input_rel@entry=0x55ad0c3ce148, grouped_rel=grouped_rel@entry=0x55ad0c4983f0,\n> agg_costs=agg_costs@entry=0x7fff06c9edb0, gd=gd@entry=0x0,\n> extra=extra@entry=0x7fff06c9ede0, partially_grouped_rel_p=0x7fff06c9eda8)\n> at planner.c:3628\n> #9 0x000055ad0bc55a72 in create_grouping_paths (root=root@entry=0x55ad0c498748,\n> input_rel=input_rel@entry=0x55ad0c3ce148, target=target@entry=0x55ad0c4c95d8,\n> target_parallel_safe=target_parallel_safe@entry=true, gd=gd@entry=0x0) at\n> planner.c:3377\n> #10 0x000055ad0bc5686d in grouping_planner (root=root@entry=0x55ad0c498748,\n> tuple_fraction=<optimized out>, tuple_fraction@entry=0) at planner.c:1592\n> #11 0x000055ad0bc57910 in subquery_planner (glob=glob@entry=0x55ad0c497880,\n> parse=parse@entry=0x55ad0c3cdbb8, parent_root=parent_root@entry=0x0,\n> hasRecursion=hasRecursion@entry=false, tuple_fraction=tuple_fraction@entry=0)\n> at planner.c:1025\n> #12 0x000055ad0bc57f36 in standard_planner (parse=0x55ad0c3cdbb8,\n> query_string=<optimized out>, cursorOptions=2048, boundParams=0x0) at\n> planner.c:406\n> #13 0x000055ad0bc584d4 in planner (parse=parse@entry=0x55ad0c3cdbb8,\n> 
query_string=query_string@entry=0x55ad0c3cc470 \"create global temp table\n> t (a integer); insert into t values (1); select count(*) from t group by\n> t;\", cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)\n> at planner.c:277\n> #14 0x000055ad0bd4855f in pg_plan_query (querytree=querytree@entry=0x55ad0c3cdbb8,\n> query_string=query_string@entry=0x55ad0c3cc470 \"create global temp table\n> t (a integer); insert into t values (1); select count(*) from t group by\n> t;\", cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry\n> =0x0)\n> at postgres.c:847\n> #15 0x000055ad0bd4863b in pg_plan_queries (querytrees=0x55ad0c4986f0,\n> query_string=query_string@entry=0x55ad0c3cc470 \"create global temp table\n> t (a integer); insert into t values (1); select count(*) from t group by\n> t;\", cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)\n> at postgres.c:939\n> #16 0x000055ad0bd48b20 in exec_simple_query\n> (query_string=query_string@entry=0x55ad0c3cc470 \"create global temp table\n> t (a integer); insert into t values (1); select count(*) from t group by\n> t;\") at postgres.c:1133\n> #17 0x000055ad0bd4aaf3 in PostgresMain (dbname=<optimized out>,\n> username=<optimized out>) at postgres.c:4497\n> #18 0x000055ad0bca3f91 in BackendRun (port=port@entry=0x55ad0c3f1020) at\n> postmaster.c:4560\n> #19 0x000055ad0bca7115 in BackendStartup (port=port@entry=0x55ad0c3f1020)\n> at postmaster.c:4288\n> #20 0x000055ad0bca735c in ServerLoop () at postmaster.c:1801\n> #21 0x000055ad0bca893e in PostmasterMain (argc=3, argv=<optimized out>) at\n> postmaster.c:1473\n> #22 0x000055ad0bbe8e31 in main (argc=3, argv=0x55ad0c3c6660) at main.c:198\n>\n> On Thu, Oct 21, 2021 at 4:25 PM wenjing <wjzeng2012@gmail.com> wrote:\n>\n>>\n>>\n>> Andrew Bille <andrewbille@gmail.com> 于2021年10月20日周三 上午2:59写道:\n>>\n>>> Another thanks for the fix. 
It works for me.\n>>>\n>>> But I found another crash!\n>>>\n>> This is a check code that was added this year, but it did find a problem\n>> and I fixed it.\n>> Please review the new code(v57) again.\n>>\n>>\n>>", "msg_date": "Mon, 25 Oct 2021 20:13:01 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Thanks, the \"group by\" is fixed\n\nYet another crash (on v58 patches), reproduced with:\n\npsql -t -c \"create global temp table t(b text)\nwith(on_commit_delete_rows=true); create index idx_b on t (b); insert into\nt values('test'); alter table t alter b type varchar;\"\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\n\nwith trace:\n\n[New LWP 569199]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: andrew postgres [local] ALTER TABLE\n '.\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f197493f859 in __GI_abort () at abort.c:79\n#2 0x00005562b3306fb9 in ExceptionalCondition\n(conditionName=0x5562b34dd740 \"reln->md_num_open_segs[forkNum] == 0\",\nerrorType=0x5562b34dd72c \"FailedAssertion\", fileName=0x5562b34dd727 \"md.c\",\nlineNumber=187) at assert.c:69\n#3 0x00005562b3148f15 in mdcreate (reln=0x5562b41abdc0,\nforkNum=MAIN_FORKNUM, isRedo=false) at md.c:187\n#4 0x00005562b314b73f in smgrcreate (reln=0x5562b41abdc0,\nforknum=MAIN_FORKNUM, isRedo=false) at smgr.c:335\n#5 0x00005562b2d88b23 in RelationCreateStorage (rnode=...,\nrelpersistence=103 'g', rel=0x7f196b597270) at storage.c:154\n#6 
0x00005562b2d5a408 in index_build (heapRelation=0x7f196b58dc40,\nindexRelation=0x7f196b597270, indexInfo=0x5562b4167d60, isreindex=true,\nparallel=false) at index.c:3038\n#7 0x00005562b2d533c1 in RelationTruncateIndexes\n(heapRelation=0x7f196b58dc40, lockmode=1) at heap.c:3354\n#8 0x00005562b2d5360b in heap_truncate_one_rel (rel=0x7f196b58dc40) at\nheap.c:3452\n#9 0x00005562b2d53544 in heap_truncate (relids=0x5562b4167c58,\nis_global_temp=true) at heap.c:3410\n#10 0x00005562b2ea09fc in PreCommit_on_commit_actions () at\ntablecmds.c:16495\n#11 0x00005562b2d0d4ee in CommitTransaction () at xact.c:2140\n#12 0x00005562b2d0e320 in CommitTransactionCommand () at xact.c:2979\n#13 0x00005562b3151b7e in finish_xact_command () at postgres.c:2721\n#14 0x00005562b314f340 in exec_simple_query (query_string=0x5562b40c2170\n\"create global temp table t(b text) with(on_commit_delete_rows=true);\ncreate index idx_b on t (b); insert into t values('test'); alter table t\nalter b type varchar;\") at postgres.c:1239\n#15 0x00005562b3153f0a in PostgresMain (dbname=0x5562b40ed6e8 \"postgres\",\nusername=0x5562b40ed6c8 \"andrew\") at postgres.c:4497\n#16 0x00005562b307df6e in BackendRun (port=0x5562b40e4500) at\npostmaster.c:4560\n#17 0x00005562b307d853 in BackendStartup (port=0x5562b40e4500) at\npostmaster.c:4288\n#18 0x00005562b3079a1d in ServerLoop () at postmaster.c:1801\n#19 0x00005562b30791b6 in PostmasterMain (argc=3, argv=0x5562b40bc5b0) at\npostmaster.c:1473\n#20 0x00005562b2f6d98e in main (argc=3, argv=0x5562b40bc5b0) at main.c:198\n\nOn Mon, Oct 25, 2021 at 7:13 PM wenjing <wjzeng2012@gmail.com> wrote:\n\n>\n> I missed whole row and system column. 
It has been fixed in v58.\n> Please review the new code(v58) again\n>\n>\n
It has been fixed in v58.Please review the new code(v58) again", "msg_date": "Thu, 28 Oct 2021 17:30:21 +0700", "msg_from": "Andrew Bille <andrewbille@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Andrew Bille <andrewbille@gmail.com> 于2021年10月28日周四 下午6:30写道:\n\n> Thanks, the \"group by\" is fixed\n>\n> Yet another crash (on v58 patches), reproduced with:\n>\n> psql -t -c \"create global temp table t(b text)\n> with(on_commit_delete_rows=true); create index idx_b on t (b); insert into\n> t values('test'); alter table t alter b type varchar;\"\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n>\nThank you for pointing that out.\nThis is due to an optimization point: ALTER Table reuses the relfilenode of\nthe old index.\nI have banned this optimization point for GTT, I am not entirely sure it is\nappropriate, maybe you can give some suggestions.\nPlease review the new code(v59).\n\n\nWenjing\n\n\n\n>\n> with trace:\n>\n> [New LWP 569199]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n> Core was generated by `postgres: andrew postgres [local] ALTER TABLE\n> '.\n> Program terminated with signal SIGABRT, Aborted.\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> 50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n> (gdb) bt\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> #1 0x00007f197493f859 in __GI_abort () at abort.c:79\n> #2 0x00005562b3306fb9 in ExceptionalCondition\n> (conditionName=0x5562b34dd740 \"reln->md_num_open_segs[forkNum] == 0\",\n> errorType=0x5562b34dd72c \"FailedAssertion\", fileName=0x5562b34dd727 \"md.c\",\n> lineNumber=187) at assert.c:69\n> #3 0x00005562b3148f15 in mdcreate (reln=0x5562b41abdc0,\n> 
forkNum=MAIN_FORKNUM, isRedo=false) at md.c:187\n> #4 0x00005562b314b73f in smgrcreate (reln=0x5562b41abdc0,\n> forknum=MAIN_FORKNUM, isRedo=false) at smgr.c:335\n> #5 0x00005562b2d88b23 in RelationCreateStorage (rnode=...,\n> relpersistence=103 'g', rel=0x7f196b597270) at storage.c:154\n> #6 0x00005562b2d5a408 in index_build (heapRelation=0x7f196b58dc40,\n> indexRelation=0x7f196b597270, indexInfo=0x5562b4167d60, isreindex=true,\n> parallel=false) at index.c:3038\n> #7 0x00005562b2d533c1 in RelationTruncateIndexes\n> (heapRelation=0x7f196b58dc40, lockmode=1) at heap.c:3354\n> #8 0x00005562b2d5360b in heap_truncate_one_rel (rel=0x7f196b58dc40) at\n> heap.c:3452\n> #9 0x00005562b2d53544 in heap_truncate (relids=0x5562b4167c58,\n> is_global_temp=true) at heap.c:3410\n> #10 0x00005562b2ea09fc in PreCommit_on_commit_actions () at\n> tablecmds.c:16495\n> #11 0x00005562b2d0d4ee in CommitTransaction () at xact.c:2140\n> #12 0x00005562b2d0e320 in CommitTransactionCommand () at xact.c:2979\n> #13 0x00005562b3151b7e in finish_xact_command () at postgres.c:2721\n> #14 0x00005562b314f340 in exec_simple_query (query_string=0x5562b40c2170\n> \"create global temp table t(b text) with(on_commit_delete_rows=true);\n> create index idx_b on t (b); insert into t values('test'); alter table t\n> alter b type varchar;\") at postgres.c:1239\n> #15 0x00005562b3153f0a in PostgresMain (dbname=0x5562b40ed6e8 \"postgres\",\n> username=0x5562b40ed6c8 \"andrew\") at postgres.c:4497\n> #16 0x00005562b307df6e in BackendRun (port=0x5562b40e4500) at\n> postmaster.c:4560\n> #17 0x00005562b307d853 in BackendStartup (port=0x5562b40e4500) at\n> postmaster.c:4288\n> #18 0x00005562b3079a1d in ServerLoop () at postmaster.c:1801\n> #19 0x00005562b30791b6 in PostmasterMain (argc=3, argv=0x5562b40bc5b0) at\n> postmaster.c:1473\n> #20 0x00005562b2f6d98e in main (argc=3, argv=0x5562b40bc5b0) at main.c:198\n>\n> On Mon, Oct 25, 2021 at 7:13 PM wenjing <wjzeng2012@gmail.com> wrote:\n>\n>>\n>> I missed whole row 
and system column. It has been fixed in v58.\n>> Please review the new code(v58) again\n>>\n>>\n>", "msg_date": "Sat, 30 Oct 2021 01:28:11 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "wenjing <wjzeng2012@gmail.com> 于2021年10月30日周六 上午1:28写道:\n\n>\n>\n> Andrew Bille <andrewbille@gmail.com> 于2021年10月28日周四 下午6:30写道:\n>\n>> Thanks, the \"group by\" is fixed\n>>\n>> Yet another crash (on v58 patches), reproduced with:\n>>\n>> psql -t -c \"create global temp table t(b text)\n>> with(on_commit_delete_rows=true); create index idx_b on t (b); insert into\n>> t values('test'); alter table t alter b type varchar;\"\n>> server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> connection to server was lost\n>>\n> Thank you for pointing that out.\n> This is due to an optimization point: ALTER Table reuses the relfilenode\n> of the old index.\n> I have banned this optimization point for GTT, I am not entirely sure it\n> is appropriate, maybe you can give some suggestions.\n> Please review the new code(v59).\n>\n>\n> Wenjing\n>\n>\n>\n>>\n>> with trace:\n>>\n>> [New LWP 569199]\n>> [Thread debugging using libthread_db enabled]\n>> Using host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n>> Core was generated by `postgres: andrew postgres [local] ALTER TABLE\n>> '.\n>> Program terminated with signal SIGABRT, Aborted.\n>> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n>> 50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n>> (gdb) bt\n>> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n>> #1 0x00007f197493f859 in __GI_abort () at abort.c:79\n>> #2 0x00005562b3306fb9 in ExceptionalCondition\n>> (conditionName=0x5562b34dd740 \"reln->md_num_open_segs[forkNum] == 0\",\n>> errorType=0x5562b34dd72c 
\"FailedAssertion\", fileName=0x5562b34dd727 \"md.c\",\n>> lineNumber=187) at assert.c:69\n>> #3 0x00005562b3148f15 in mdcreate (reln=0x5562b41abdc0,\n>> forkNum=MAIN_FORKNUM, isRedo=false) at md.c:187\n>> #4 0x00005562b314b73f in smgrcreate (reln=0x5562b41abdc0,\n>> forknum=MAIN_FORKNUM, isRedo=false) at smgr.c:335\n>> #5 0x00005562b2d88b23 in RelationCreateStorage (rnode=...,\n>> relpersistence=103 'g', rel=0x7f196b597270) at storage.c:154\n>> #6 0x00005562b2d5a408 in index_build (heapRelation=0x7f196b58dc40,\n>> indexRelation=0x7f196b597270, indexInfo=0x5562b4167d60, isreindex=true,\n>> parallel=false) at index.c:3038\n>> #7 0x00005562b2d533c1 in RelationTruncateIndexes\n>> (heapRelation=0x7f196b58dc40, lockmode=1) at heap.c:3354\n>> #8 0x00005562b2d5360b in heap_truncate_one_rel (rel=0x7f196b58dc40) at\n>> heap.c:3452\n>> #9 0x00005562b2d53544 in heap_truncate (relids=0x5562b4167c58,\n>> is_global_temp=true) at heap.c:3410\n>> #10 0x00005562b2ea09fc in PreCommit_on_commit_actions () at\n>> tablecmds.c:16495\n>> #11 0x00005562b2d0d4ee in CommitTransaction () at xact.c:2140\n>> #12 0x00005562b2d0e320 in CommitTransactionCommand () at xact.c:2979\n>> #13 0x00005562b3151b7e in finish_xact_command () at postgres.c:2721\n>> #14 0x00005562b314f340 in exec_simple_query (query_string=0x5562b40c2170\n>> \"create global temp table t(b text) with(on_commit_delete_rows=true);\n>> create index idx_b on t (b); insert into t values('test'); alter table t\n>> alter b type varchar;\") at postgres.c:1239\n>> #15 0x00005562b3153f0a in PostgresMain (dbname=0x5562b40ed6e8 \"postgres\",\n>> username=0x5562b40ed6c8 \"andrew\") at postgres.c:4497\n>> #16 0x00005562b307df6e in BackendRun (port=0x5562b40e4500) at\n>> postmaster.c:4560\n>> #17 0x00005562b307d853 in BackendStartup (port=0x5562b40e4500) at\n>> postmaster.c:4288\n>> #18 0x00005562b3079a1d in ServerLoop () at postmaster.c:1801\n>> #19 0x00005562b30791b6 in PostmasterMain (argc=3, argv=0x5562b40bc5b0) at\n>> 
postmaster.c:1473\n>> #20 0x00005562b2f6d98e in main (argc=3, argv=0x5562b40bc5b0) at main.c:198\n>>\n>> On Mon, Oct 25, 2021 at 7:13 PM wenjing <wjzeng2012@gmail.com> wrote:\n>>\n>>>\n>>> I missed whole row and system column. It has been fixed in v58.\n>>> Please review the new code(v58) again\n>>>\n>>>\n>>\n>\n>\nHi Andrew\n\nI fixed a problem found during testing.\nGTT version updated to v60.\n\n\nWenjing.", "msg_date": "Tue, 9 Nov 2021 16:51:22 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "wenjing <wjzeng2012@gmail.com> 于2021年11月9日周二 下午4:51写道:\n\n>\n>\n> wenjing <wjzeng2012@gmail.com> 于2021年10月30日周六 上午1:28写道:\n>\n>>\n>>\n>> Andrew Bille <andrewbille@gmail.com> 于2021年10月28日周四 下午6:30写道:\n>>\n>>> Thanks, the \"group by\" is fixed\n>>>\n>>> Yet another crash (on v58 patches), reproduced with:\n>>>\n>>> psql -t -c \"create global temp table t(b text)\n>>> with(on_commit_delete_rows=true); create index idx_b on t (b); insert into\n>>> t values('test'); alter table t alter b type varchar;\"\n>>> server closed the connection unexpectedly\n>>> This probably means the server terminated abnormally\n>>> before or while processing the request.\n>>> connection to server was lost\n>>>\n>> Thank you for pointing that out.\n>> This is due to an optimization point: ALTER Table reuses the relfilenode\n>> of the old index.\n>> I have banned this optimization point for GTT, I am not entirely sure it\n>> is appropriate, maybe you can give some suggestions.\n>> Please review the new code(v59).\n>>\n>>\n>> Wenjing\n>>\n>>\n>>\n>>>\n>>> with trace:\n>>>\n>>> [New LWP 569199]\n>>> [Thread debugging using libthread_db enabled]\n>>> Using host libthread_db library\n>>> \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n>>> Core was generated by `postgres: andrew postgres [local] ALTER TABLE\n>>> '.\n>>> Program terminated with signal SIGABRT, Aborted.\n>>> #0 __GI_raise (sig=sig@entry=6) 
at\n>>> ../sysdeps/unix/sysv/linux/raise.c:50\n>>> 50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n>>> (gdb) bt\n>>> #0 __GI_raise (sig=sig@entry=6) at\n>>> ../sysdeps/unix/sysv/linux/raise.c:50\n>>> #1 0x00007f197493f859 in __GI_abort () at abort.c:79\n>>> #2 0x00005562b3306fb9 in ExceptionalCondition\n>>> (conditionName=0x5562b34dd740 \"reln->md_num_open_segs[forkNum] == 0\",\n>>> errorType=0x5562b34dd72c \"FailedAssertion\", fileName=0x5562b34dd727 \"md.c\",\n>>> lineNumber=187) at assert.c:69\n>>> #3 0x00005562b3148f15 in mdcreate (reln=0x5562b41abdc0,\n>>> forkNum=MAIN_FORKNUM, isRedo=false) at md.c:187\n>>> #4 0x00005562b314b73f in smgrcreate (reln=0x5562b41abdc0,\n>>> forknum=MAIN_FORKNUM, isRedo=false) at smgr.c:335\n>>> #5 0x00005562b2d88b23 in RelationCreateStorage (rnode=...,\n>>> relpersistence=103 'g', rel=0x7f196b597270) at storage.c:154\n>>> #6 0x00005562b2d5a408 in index_build (heapRelation=0x7f196b58dc40,\n>>> indexRelation=0x7f196b597270, indexInfo=0x5562b4167d60, isreindex=true,\n>>> parallel=false) at index.c:3038\n>>> #7 0x00005562b2d533c1 in RelationTruncateIndexes\n>>> (heapRelation=0x7f196b58dc40, lockmode=1) at heap.c:3354\n>>> #8 0x00005562b2d5360b in heap_truncate_one_rel (rel=0x7f196b58dc40) at\n>>> heap.c:3452\n>>> #9 0x00005562b2d53544 in heap_truncate (relids=0x5562b4167c58,\n>>> is_global_temp=true) at heap.c:3410\n>>> #10 0x00005562b2ea09fc in PreCommit_on_commit_actions () at\n>>> tablecmds.c:16495\n>>> #11 0x00005562b2d0d4ee in CommitTransaction () at xact.c:2140\n>>> #12 0x00005562b2d0e320 in CommitTransactionCommand () at xact.c:2979\n>>> #13 0x00005562b3151b7e in finish_xact_command () at postgres.c:2721\n>>> #14 0x00005562b314f340 in exec_simple_query (query_string=0x5562b40c2170\n>>> \"create global temp table t(b text) with(on_commit_delete_rows=true);\n>>> create index idx_b on t (b); insert into t values('test'); alter table t\n>>> alter b type varchar;\") at postgres.c:1239\n>>> #15 0x00005562b3153f0a 
in PostgresMain (dbname=0x5562b40ed6e8\n>>> \"postgres\", username=0x5562b40ed6c8 \"andrew\") at postgres.c:4497\n>>> #16 0x00005562b307df6e in BackendRun (port=0x5562b40e4500) at\n>>> postmaster.c:4560\n>>> #17 0x00005562b307d853 in BackendStartup (port=0x5562b40e4500) at\n>>> postmaster.c:4288\n>>> #18 0x00005562b3079a1d in ServerLoop () at postmaster.c:1801\n>>> #19 0x00005562b30791b6 in PostmasterMain (argc=3, argv=0x5562b40bc5b0)\n>>> at postmaster.c:1473\n>>> #20 0x00005562b2f6d98e in main (argc=3, argv=0x5562b40bc5b0) at\n>>> main.c:198\n>>>\n>>> On Mon, Oct 25, 2021 at 7:13 PM wenjing <wjzeng2012@gmail.com> wrote:\n>>>\n>>>>\n>>>> I missed whole row and system column. It has been fixed in v58.\n>>>> Please review the new code(v58) again\n>>>>\n>>>>\n>>>\n>>\n>>\n> Hi Andrew\n>\n> I fixed a problem found during testing.\n> GTT version updated to v60.\n>\n>\n> Wenjing.\n>\n>\n>\n>\n\nFixed a bug in function pg_gtt_attached_pid.\nLooking forward to your reply.\n\n\nWenjing", "msg_date": "Thu, 11 Nov 2021 16:15:09 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Thanks for the patches. The feature has become much more stable.\nHowever, there is another simple case that generates an error:\nMaster with v61 patches\n\nCREATE GLOBAL TEMPORARY TABLE t AS SELECT 1 AS a;\nERROR: could not open file \"base/13560/t3_16384\": No such file or directory\nAndrew\n\nOn Thu, Nov 11, 2021 at 3:15 PM wenjing <wjzeng2012@gmail.com> wrote:\n\n> Fixed a bug in function pg_gtt_attached_pid.\n> Looking forward to your reply.\n>\n>\n> Wenjing\n>\n>\n\nThanks for the patches. 
The feature has become much more stable.However, there is another simple case that generates an error: Master with v61 patchesCREATE GLOBAL TEMPORARY TABLE t AS SELECT 1 AS a;ERROR:  could not open file \"base/13560/t3_16384\": No such file or directoryAndrewOn Thu, Nov 11, 2021 at 3:15 PM wenjing <wjzeng2012@gmail.com> wrote:Fixed a bug in function pg_gtt_attached_pid.Looking forward to your reply.Wenjing", "msg_date": "Mon, 15 Nov 2021 17:33:45 +0700", "msg_from": "Andrew Bille <andrewbille@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Andrew Bille <andrewbille@gmail.com> 于2021年11月15日周一 下午6:34写道:\n\n> Thanks for the patches. The feature has become much more stable.\n> However, there is another simple case that generates an error:\n> Master with v61 patches\n>\n> CREATE GLOBAL TEMPORARY TABLE t AS SELECT 1 AS a;\n> ERROR: could not open file \"base/13560/t3_16384\": No such file or\n> directory\n>\nThank you for pointing out that this part is not reasonable enough.\nThis issue has been fixed in v62.\nLooking forward to your reply.\n\n\nWenjing\n\n\n\n> Andrew\n>\n> On Thu, Nov 11, 2021 at 3:15 PM wenjing <wjzeng2012@gmail.com> wrote:\n>\n>> Fixed a bug in function pg_gtt_attached_pid.\n>> Looking forward to your reply.\n>>\n>>\n>> Wenjing\n>>\n>>", "msg_date": "Sat, 20 Nov 2021 01:31:09 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Post GTT v63 to fixed conflicts with the latest code.\n\n\n\nHi Andrew\n\nHave you found any new bugs recently?\n\n\n\nWenjing\n\n\n\n\n> 2021年11月20日 01:31,wenjing <wjzeng2012@gmail.com> 写道:\n> \n> \n> \n> Andrew Bille <andrewbille@gmail.com <mailto:andrewbille@gmail.com>> 于2021年11月15日周一 下午6:34写道:\n> Thanks for the patches. 
The feature has become much more stable.\n> However, there is another simple case that generates an error: \n> Master with v61 patches\n> \n> CREATE GLOBAL TEMPORARY TABLE t AS SELECT 1 AS a;\n> ERROR: could not open file \"base/13560/t3_16384\": No such file or directory\n> Thank you for pointing out that this part is not reasonable enough.\n> This issue has been fixed in v62.\n> Looking forward to your reply.\n> \n> \n> Wenjing\n> \n> \n> Andrew\n> \n> On Thu, Nov 11, 2021 at 3:15 PM wenjing <wjzeng2012@gmail.com <mailto:wjzeng2012@gmail.com>> wrote:\n> Fixed a bug in function pg_gtt_attached_pid.\n> Looking forward to your reply.\n> \n> \n> Wenjing\n> \n> \n> \n> \n> <0001-gtt-v62-reademe.patch><0004-gtt-v62-regress.patch><0002-gtt-v62-doc.patch><0003-gtt-v62-implementation.patch>", "msg_date": "Mon, 20 Dec 2021 20:42:21 +0800", "msg_from": "wenjing zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Send an email to trigger the regress test.\n\nwenjing zeng <wjzeng2012@gmail.com> 于2021年12月20日周一 20:42写道:\n\n> Post GTT v63 to fixed conflicts with the latest code.\n>\n>\n>\n> Hi Andrew\n>\n> Have you found any new bugs recently?\n>\n>\n>\n> Wenjing\n>\n>\n>\n>\n> 2021年11月20日 01:31,wenjing <wjzeng2012@gmail.com> 写道:\n>\n>\n>\n> Andrew Bille <andrewbille@gmail.com> 于2021年11月15日周一 下午6:34写道:\n>\n>> Thanks for the patches. 
The feature has become much more stable.\n>> However, there is another simple case that generates an error:\n>> Master with v61 patches\n>>\n>> CREATE GLOBAL TEMPORARY TABLE t AS SELECT 1 AS a;\n>> ERROR: could not open file \"base/13560/t3_16384\": No such file or\n>> directory\n>>\n> Thank you for pointing out that this part is not reasonable enough.\n> This issue has been fixed in v62.\n> Looking forward to your reply.\n>\n>\n> Wenjing\n>\n>\n>\n>> Andrew\n>>\n>> On Thu, Nov 11, 2021 at 3:15 PM wenjing <wjzeng2012@gmail.com> wrote:\n>>\n>>> Fixed a bug in function pg_gtt_attached_pid.\n>>> Looking forward to your reply.\n>>>\n>>>\n>>> Wenjing\n>>>\n>>>\n>\n>\n> <0001-gtt-v62-reademe.patch><0004-gtt-v62-regress.patch>\n> <0002-gtt-v62-doc.patch><0003-gtt-v62-implementation.patch>\n>\n>\n>\n>\n>", "msg_date": "Tue, 21 Dec 2021 11:04:06 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi!\nThanks for new patches.\nYet another crash reproduced on master with v63 patches:\n\nCREATE TABLESPACE ts LOCATION '/tmp/ts';\nCREATE GLOBAL TEMP TABLE tbl (num1 bigint);\nINSERT INTO tbl (num1) values (1);\nCREATE INDEX tbl_idx ON tbl (num1);\nREINDEX (TABLESPACE ts) TABLE tbl;\n\nGot error:\nCREATE TABLESPACE\nCREATE TABLE\nINSERT 0 1\nCREATE INDEX\nWARNING: AbortTransaction while in COMMIT state\nERROR: gtt relfilenode 16388 not found in rel 16388\nPANIC: cannot abort transaction 726, it was already committed\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\n\nin log:\n2021-12-21 12:54:08.273 +07 [208725] ERROR: gtt relfilenode 16388 not\nfound in rel 16388\n2021-12-21 12:54:08.273 +07 [208725] STATEMENT: REINDEX (TABLESPACE ts)\nTABLE tbl;\n2021-12-21 12:54:08.273 +07 [208725] WARNING: AbortTransaction while in\nCOMMIT state\n2021-12-21 12:54:08.273 +07 
[208725] PANIC: cannot abort transaction 726,\nit was already committed\n2021-12-21 12:54:08.775 +07 [208716] LOG: server process (PID 208725) was\nterminated by signal 6: Аварийный останов\n2021-12-21 12:54:08.775 +07 [208716] DETAIL: Failed process was running:\nREINDEX (TABLESPACE ts) TABLE tbl;\n2021-12-21 12:54:08.775 +07 [208716] LOG: terminating any other active\nserver processes\n2021-12-21 12:54:08.775 +07 [208716] LOG: all server processes terminated;\nreinitializing\n\nwith dump:\n[New LWP 208725]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: andrew postgres [local] REINDEX\n '.\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n50 ../sysdeps/unix/sysv/linux/raise.c: Нет такого файла или каталога.\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007feadfac7859 in __GI_abort () at abort.c:79\n#2 0x000055e36b6d9ec7 in errfinish (filename=0x55e36b786e20 \"xact.c\",\nlineno=1729, funcname=0x55e36b788660 <__func__.29619>\n\"RecordTransactionAbort\") at elog.c:680\n#3 0x000055e36b0d6e37 in RecordTransactionAbort (isSubXact=false) at\nxact.c:1729\n#4 0x000055e36b0d7f64 in AbortTransaction () at xact.c:2787\n#5 0x000055e36b0d88fa in AbortCurrentTransaction () at xact.c:3315\n#6 0x000055e36b524f33 in PostgresMain (dbname=0x55e36d4d97b8 \"postgres\",\nusername=0x55e36d4d9798 \"andrew\") at postgres.c:4252\n#7 0x000055e36b44d1e0 in BackendRun (port=0x55e36d4d1020) at\npostmaster.c:4594\n#8 0x000055e36b44cac5 in BackendStartup (port=0x55e36d4d1020) at\npostmaster.c:4322\n#9 0x000055e36b448bad in ServerLoop () at postmaster.c:1802\n#10 0x000055e36b448346 in PostmasterMain (argc=3, argv=0x55e36d4a84d0) at\npostmaster.c:1474\n#11 0x000055e36b33b5ca in main (argc=3, argv=0x55e36d4a84d0) at main.c:198\n\nRegards!\n\nOn Mon, Dec 20, 2021 at 7:42 
PM wenjing zeng <wjzeng2012@gmail.com> wrote:\n\n> Post GTT v63 to fixed conflicts with the latest code.\n>\n>\n>\n> Hi Andrew\n>\n> Have you found any new bugs recently?\n>\n>\n>\n> Wenjing\n>\n>\n>\n>\n> 2021年11月20日 01:31,wenjing <wjzeng2012@gmail.com> 写道:\n>\n>\n>\n> Andrew Bille <andrewbille@gmail.com> 于2021年11月15日周一 下午6:34写道:\n>\n>> Thanks for the patches. The feature has become much more stable.\n>> However, there is another simple case that generates an error:\n>> Master with v61 patches\n>>\n>> CREATE GLOBAL TEMPORARY TABLE t AS SELECT 1 AS a;\n>> ERROR: could not open file \"base/13560/t3_16384\": No such file or\n>> directory\n>>\n> Thank you for pointing out that this part is not reasonable enough.\n> This issue has been fixed in v62.\n> Looking forward to your reply.\n>\n>\n> Wenjing\n>\n>\n>\n>> Andrew\n>>\n>> On Thu, Nov 11, 2021 at 3:15 PM wenjing <wjzeng2012@gmail.com> wrote:\n>>\n>>> Fixed a bug in function pg_gtt_attached_pid.\n>>> Looking forward to your reply.\n>>>\n>>>\n>>> Wenjing\n>>>\n>>>\n>\n>\n> <0001-gtt-v62-reademe.patch><0004-gtt-v62-regress.patch>\n> <0002-gtt-v62-doc.patch><0003-gtt-v62-implementation.patch>\n>\n>\n>\n>\n>\n\nHi!Thanks for new patches.Yet another crash reproduced on master with v63 patches:CREATE TABLESPACE ts LOCATION '/tmp/ts';CREATE GLOBAL TEMP TABLE tbl (num1 bigint);INSERT INTO tbl (num1) values (1);CREATE INDEX tbl_idx ON tbl (num1);REINDEX (TABLESPACE ts) TABLE tbl;Got error:CREATE TABLESPACECREATE TABLEINSERT 0 1CREATE INDEXWARNING:  AbortTransaction while in COMMIT stateERROR:  gtt relfilenode 16388 not found in rel 16388PANIC:  cannot abort transaction 726, it was already committedserver closed the connection unexpectedly     This probably means the server terminated abnormally    before or while processing the request.connection to server was lostin log:2021-12-21 12:54:08.273 +07 [208725] ERROR:  gtt relfilenode 16388 not found in rel 163882021-12-21 12:54:08.273 +07 [208725] STATEMENT:  REINDEX 
0x000055e36b33b5ca in main (argc=3, argv=0x55e36d4a84d0) at main.c:198Regards!On Mon, Dec 20, 2021 at 7:42 PM wenjing zeng <wjzeng2012@gmail.com> wrote:Post GTT v63 to fixed conflicts with the latest code.Hi AndrewHave you found any new bugs recently?Wenjing2021年11月20日 01:31,wenjing <wjzeng2012@gmail.com> 写道:Andrew Bille <andrewbille@gmail.com> 于2021年11月15日周一 下午6:34写道:Thanks for the patches. The feature has become much more stable.However, there is another simple case that generates an error: Master with v61 patchesCREATE GLOBAL TEMPORARY TABLE t AS SELECT 1 AS a;ERROR:  could not open file \"base/13560/t3_16384\": No such file or directoryThank you for pointing out that this part is not reasonable enough.This issue has been fixed in v62.Looking forward to your reply.Wenjing AndrewOn Thu, Nov 11, 2021 at 3:15 PM wenjing <wjzeng2012@gmail.com> wrote:Fixed a bug in function pg_gtt_attached_pid.Looking forward to your reply.Wenjing \n<0001-gtt-v62-reademe.patch><0004-gtt-v62-regress.patch><0002-gtt-v62-doc.patch><0003-gtt-v62-implementation.patch>", "msg_date": "Tue, 21 Dec 2021 12:59:49 +0700", "msg_from": "Andrew Bille <andrewbille@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Andrew Bille <andrewbille@gmail.com> 于2021年12月21日周二 14:00写道:\n\n> Hi!\n> Thanks for new patches.\n> Yet another crash reproduced on master with v63 patches:\n>\n> CREATE TABLESPACE ts LOCATION '/tmp/ts';\n> CREATE GLOBAL TEMP TABLE tbl (num1 bigint);\n> INSERT INTO tbl (num1) values (1);\n> CREATE INDEX tbl_idx ON tbl (num1);\n> REINDEX (TABLESPACE ts) TABLE tbl;\n>\nThis is a feature made in PG14 that supports reindex change tablespaces.\nThank you for pointing that out and I fixed it in v64.\nWaiting for your feedback.\n\nregards\n\nWenjing\n\n\n>\n> Got error:\n> CREATE TABLESPACE\n> CREATE TABLE\n> INSERT 0 1\n> CREATE INDEX\n> WARNING: AbortTransaction while in COMMIT state\n> ERROR: gtt relfilenode 16388 not found in rel 
16388\n> PANIC: cannot abort transaction 726, it was already committed\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> connection to server was lost\n>\n> in log:\n> 2021-12-21 12:54:08.273 +07 [208725] ERROR: gtt relfilenode 16388 not\n> found in rel 16388\n> 2021-12-21 12:54:08.273 +07 [208725] STATEMENT: REINDEX (TABLESPACE ts)\n> TABLE tbl;\n> 2021-12-21 12:54:08.273 +07 [208725] WARNING: AbortTransaction while in\n> COMMIT state\n> 2021-12-21 12:54:08.273 +07 [208725] PANIC: cannot abort transaction 726,\n> it was already committed\n> 2021-12-21 12:54:08.775 +07 [208716] LOG: server process (PID 208725) was\n> terminated by signal 6: Аварийный останов\n> 2021-12-21 12:54:08.775 +07 [208716] DETAIL: Failed process was running:\n> REINDEX (TABLESPACE ts) TABLE tbl;\n> 2021-12-21 12:54:08.775 +07 [208716] LOG: terminating any other active\n> server processes\n> 2021-12-21 12:54:08.775 +07 [208716] LOG: all server processes\n> terminated; reinitializing\n>\n> with dump:\n> [New LWP 208725]\n> [Thread debugging using libthread_db enabled]\n> Using host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n> Core was generated by `postgres: andrew postgres [local] REINDEX\n> '.\n> Program terminated with signal SIGABRT, Aborted.\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> 50 ../sysdeps/unix/sysv/linux/raise.c: Нет такого файла или каталога.\n> (gdb) bt\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> #1 0x00007feadfac7859 in __GI_abort () at abort.c:79\n> #2 0x000055e36b6d9ec7 in errfinish (filename=0x55e36b786e20 \"xact.c\",\n> lineno=1729, funcname=0x55e36b788660 <__func__.29619>\n> \"RecordTransactionAbort\") at elog.c:680\n> #3 0x000055e36b0d6e37 in RecordTransactionAbort (isSubXact=false) at\n> xact.c:1729\n> #4 0x000055e36b0d7f64 in AbortTransaction () at xact.c:2787\n> #5 
0x000055e36b0d88fa in AbortCurrentTransaction () at xact.c:3315\n> #6 0x000055e36b524f33 in PostgresMain (dbname=0x55e36d4d97b8 \"postgres\",\n> username=0x55e36d4d9798 \"andrew\") at postgres.c:4252\n> #7 0x000055e36b44d1e0 in BackendRun (port=0x55e36d4d1020) at\n> postmaster.c:4594\n> #8 0x000055e36b44cac5 in BackendStartup (port=0x55e36d4d1020) at\n> postmaster.c:4322\n> #9 0x000055e36b448bad in ServerLoop () at postmaster.c:1802\n> #10 0x000055e36b448346 in PostmasterMain (argc=3, argv=0x55e36d4a84d0) at\n> postmaster.c:1474\n> #11 0x000055e36b33b5ca in main (argc=3, argv=0x55e36d4a84d0) at main.c:198\n>\n> Regards!\n>\n> On Mon, Dec 20, 2021 at 7:42 PM wenjing zeng <wjzeng2012@gmail.com> wrote:\n>\n>> Post GTT v63 to fixed conflicts with the latest code.\n>>\n>>\n>>\n>> Hi Andrew\n>>\n>> Have you found any new bugs recently?\n>>\n>>\n>>\n>> Wenjing\n>>\n>>\n>>\n>>\n>> 2021年11月20日 01:31,wenjing <wjzeng2012@gmail.com> 写道:\n>>\n>>\n>>\n>> Andrew Bille <andrewbille@gmail.com> 于2021年11月15日周一 下午6:34写道:\n>>\n>>> Thanks for the patches. 
The feature has become much more stable.\n>>> However, there is another simple case that generates an error:\n>>> Master with v61 patches\n>>>\n>>> CREATE GLOBAL TEMPORARY TABLE t AS SELECT 1 AS a;\n>>> ERROR: could not open file \"base/13560/t3_16384\": No such file or\n>>> directory\n>>>\n>> Thank you for pointing out that this part is not reasonable enough.\n>> This issue has been fixed in v62.\n>> Looking forward to your reply.\n>>\n>>\n>> Wenjing\n>>\n>>\n>>\n>>> Andrew\n>>>\n>>> On Thu, Nov 11, 2021 at 3:15 PM wenjing <wjzeng2012@gmail.com> wrote:\n>>>\n>>>> Fixed a bug in function pg_gtt_attached_pid.\n>>>> Looking forward to your reply.\n>>>>\n>>>>\n>>>> Wenjing\n>>>>\n>>>>\n>>\n>>\n>> <0001-gtt-v62-reademe.patch><0004-gtt-v62-regress.patch>\n>> <0002-gtt-v62-doc.patch><0003-gtt-v62-implementation.patch>\n>>\n>>\n>>\n>>\n>>", "msg_date": "Thu, 23 Dec 2021 20:36:37 +0800", "msg_from": "wenjing <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi!\n\nI could not detect crashes with your last patch, so I think the patch is\nready for a review.\nPlease, also consider fixing error messages, as existing ones don't follow\nmessage writing guidelines.\nhttps://www.postgresql.org/docs/14/error-style-guide.html\n\nRegards, Andrew\n\nOn Thu, Dec 23, 2021 at 7:36 PM wenjing <wjzeng2012@gmail.com> wrote:\n\n>\n>\n> Andrew Bille <andrewbille@gmail.com> 于2021年12月21日周二 14:00写道:\n>\n>> Hi!\n>> Thanks for new patches.\n>> Yet another crash reproduced on master with v63 patches:\n>>\n>> CREATE TABLESPACE ts LOCATION '/tmp/ts';\n>> CREATE GLOBAL TEMP TABLE tbl (num1 bigint);\n>> INSERT INTO tbl (num1) values (1);\n>> CREATE INDEX tbl_idx ON tbl (num1);\n>> REINDEX (TABLESPACE ts) TABLE tbl;\n>>\n> This is a feature made in PG14 that supports reindex change tablespaces.\n> Thank you for pointing that out and I fixed it in v64.\n> Waiting for your feedback.\n>\n\nHi!I could not detect crashes with your 
last patch, so I think the patch is ready for a review.Please, also consider fixing error messages, as existing ones don't follow message writing guidelines. https://www.postgresql.org/docs/14/error-style-guide.htmlRegards, AndrewOn Thu, Dec 23, 2021 at 7:36 PM wenjing <wjzeng2012@gmail.com> wrote:Andrew Bille <andrewbille@gmail.com> 于2021年12月21日周二 14:00写道:Hi!Thanks for new patches.Yet another crash reproduced on master with v63 patches:CREATE TABLESPACE ts LOCATION '/tmp/ts';CREATE GLOBAL TEMP TABLE tbl (num1 bigint);INSERT INTO tbl (num1) values (1);CREATE INDEX tbl_idx ON tbl (num1);REINDEX (TABLESPACE ts) TABLE tbl;This is a feature made in PG14 that supports reindex change tablespaces. Thank you for pointing that out and I fixed it in v64.Waiting for your feedback.", "msg_date": "Mon, 10 Jan 2022 16:17:23 +0700", "msg_from": "Andrew Bille <andrewbille@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "very glad to see your reply.\nThank you very much for your review of the code and found so many problems.\nThere was a conflict between the latest code and patch, I have corrected it\nand provided a new patch (V65).\nWaiting for your feedback.\n\n\nRegards, Wenjing.\n\n\nAndrew Bille <andrewbille@gmail.com> 于2022年1月10日周一 17:17写道:\n\n> Hi!\n>\n> I could not detect crashes with your last patch, so I think the patch is\n> ready for a review.\n> Please, also consider fixing error messages, as existing ones don't follow\n> message writing guidelines.\n> https://www.postgresql.org/docs/14/error-style-guide.html\n>\n\nI corrected the ERROR message of GTT according to the link and the existing\nerror message.\nSome comments and code refactoring were also done.\n\n\n>\n> Regards, Andrew\n>\n> On Thu, Dec 23, 2021 at 7:36 PM wenjing <wjzeng2012@gmail.com> wrote:\n>\n>>\n>>\n>> Andrew Bille <andrewbille@gmail.com> 于2021年12月21日周二 14:00写道:\n>>\n>>> Hi!\n>>> Thanks for new patches.\n>>> Yet another crash reproduced 
on master with v63 patches:\n>>>\n>>> CREATE TABLESPACE ts LOCATION '/tmp/ts';\n>>> CREATE GLOBAL TEMP TABLE tbl (num1 bigint);\n>>> INSERT INTO tbl (num1) values (1);\n>>> CREATE INDEX tbl_idx ON tbl (num1);\n>>> REINDEX (TABLESPACE ts) TABLE tbl;\n>>>\n>> This is a feature made in PG14 that supports reindex change tablespaces.\n>> Thank you for pointing that out and I fixed it in v64.\n>> Waiting for your feedback.\n>>\n>", "msg_date": "Thu, 20 Jan 2022 17:53:05 +0800", "msg_from": "Wenjing Zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Update GTT v66 to fix conflicts with the latest code.\n\nRegards, Wenjing.\n\n\nWenjing Zeng <wjzeng2012@gmail.com> 于2022年1月20日周四 17:53写道:\n\n> very glad to see your reply.\n> Thank you very much for your review of the code and found so many problems.\n> There was a conflict between the latest code and patch, I have corrected\n> it and provided a new patch (V65).\n> Waiting for your feedback.\n>\n>\n> Regards, Wenjing.\n>\n>\n> Andrew Bille <andrewbille@gmail.com> 于2022年1月10日周一 17:17写道:\n>\n>> Hi!\n>>\n>> I could not detect crashes with your last patch, so I think the patch is\n>> ready for a review.\n>> Please, also consider fixing error messages, as existing ones don't\n>> follow message writing guidelines.\n>> https://www.postgresql.org/docs/14/error-style-guide.html\n>>\n>\n> I corrected the ERROR message of GTT according to the link and the\n> existing error message.\n> Some comments and code refactoring were also done.\n>\n>\n>>\n>> Regards, Andrew\n>>\n>> On Thu, Dec 23, 2021 at 7:36 PM wenjing <wjzeng2012@gmail.com> wrote:\n>>\n>>>\n>>>\n>>> Andrew Bille <andrewbille@gmail.com> 于2021年12月21日周二 14:00写道:\n>>>\n>>>> Hi!\n>>>> Thanks for new patches.\n>>>> Yet another crash reproduced on master with v63 patches:\n>>>>\n>>>> CREATE TABLESPACE ts LOCATION '/tmp/ts';\n>>>> CREATE GLOBAL TEMP TABLE tbl (num1 bigint);\n>>>> INSERT INTO tbl (num1) 
values (1);\n>>>> CREATE INDEX tbl_idx ON tbl (num1);\n>>>> REINDEX (TABLESPACE ts) TABLE tbl;\n>>>>\n>>> This is a feature made in PG14 that supports reindex change tablespaces.\n>>> Thank you for pointing that out and I fixed it in v64.\n>>> Waiting for your feedback.\n>>>\n>>\n>", "msg_date": "Tue, 15 Feb 2022 17:03:56 +0800", "msg_from": "Wenjing Zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Update GTT v67 to fix conflicts with the latest code.\n\nRegards, Wenjing.\n\nWenjing Zeng <wjzeng2012@gmail.com> 于2022年2月15日周二 17:03写道:\n\n> Update GTT v66 to fix conflicts with the latest code.\n>\n> Regards, Wenjing.\n>\n>\n> Wenjing Zeng <wjzeng2012@gmail.com> 于2022年1月20日周四 17:53写道:\n>\n>> very glad to see your reply.\n>> Thank you very much for your review of the code and found so many\n>> problems.\n>> There was a conflict between the latest code and patch, I have corrected\n>> it and provided a new patch (V65).\n>> Waiting for your feedback.\n>>\n>>\n>> Regards, Wenjing.\n>>\n>>\n>> Andrew Bille <andrewbille@gmail.com> 于2022年1月10日周一 17:17写道:\n>>\n>>> Hi!\n>>>\n>>> I could not detect crashes with your last patch, so I think the patch is\n>>> ready for a review.\n>>> Please, also consider fixing error messages, as existing ones don't\n>>> follow message writing guidelines.\n>>> https://www.postgresql.org/docs/14/error-style-guide.html\n>>>\n>>\n>> I corrected the ERROR message of GTT according to the link and the\n>> existing error message.\n>> Some comments and code refactoring were also done.\n>>\n>>\n>>>\n>>> Regards, Andrew\n>>>\n>>> On Thu, Dec 23, 2021 at 7:36 PM wenjing <wjzeng2012@gmail.com> wrote:\n>>>\n>>>>\n>>>>\n>>>> Andrew Bille <andrewbille@gmail.com> 于2021年12月21日周二 14:00写道:\n>>>>\n>>>>> Hi!\n>>>>> Thanks for new patches.\n>>>>> Yet another crash reproduced on master with v63 patches:\n>>>>>\n>>>>> CREATE TABLESPACE ts LOCATION '/tmp/ts';\n>>>>> CREATE GLOBAL TEMP TABLE 
tbl (num1 bigint);\n>>>>> INSERT INTO tbl (num1) values (1);\n>>>>> CREATE INDEX tbl_idx ON tbl (num1);\n>>>>> REINDEX (TABLESPACE ts) TABLE tbl;\n>>>>>\n>>>> This is a feature made in PG14 that supports reindex change\n>>>> tablespaces.\n>>>> Thank you for pointing that out and I fixed it in v64.\n>>>> Waiting for your feedback.\n>>>>\n>>>\n>>\n>\n>\n>", "msg_date": "Fri, 25 Feb 2022 14:26:47 +0800", "msg_from": "Wenjing Zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi,\n\n\nThis is a huge thread. Realistically reviewers and committers can't reread\nit. I think there needs to be more of a description of how this works included\nin the patchset and *why* it works that way. The readme does a bit of that,\nbut not particularly well.\n\n\nOn 2022-02-25 14:26:47 +0800, Wenjing Zeng wrote:\n> +++ b/README.gtt.txt\n> @@ -0,0 +1,172 @@\n> +Global Temporary Table(GTT)\n> +=========================================\n> +\n> +Feature description\n> +-----------------------------------------\n> +\n> +Previously, temporary tables are defined once and automatically\n> +exist (starting with empty contents) in every session before using them.\n\nI think for a README \"previously\" etc isn't good language - if it were\ncommited, it'd not be understandable anymore. It makes more sense for commit\nmessages etc.\n\n\n> +Main design ideas\n> +-----------------------------------------\n> +In general, GTT and LTT use the same storage and buffer design and\n> +implementation. The storage files for both types of temporary tables are named\n> +as t_backendid_relfilenode, and the local buffer is used to cache the data.\n\nWhat does \"named ast_backendid_relfilenode\" mean?\n\n\n> +The schema of GTTs is shared among sessions while their data are not. 
We build\n> +a new mechanisms to manage those non-shared data and their statistics.\n> +Here is the summary of changes:\n> +\n> +1) CATALOG\n> +GTTs store session-specific data. The storage information of GTTs'data, their\n> +transaction information, and their statistics are not stored in the catalog.\n> +\n> +2) STORAGE INFO & STATISTICS INFO & TRANSACTION INFO\n> +In order to maintain durability and availability of GTTs'session-specific data,\n> +their storage information, statistics, and transaction information is managed\n> +in a local hash table tt_storage_local_hash.\n\n\"maintain durability\"? Durable across what? In the context of databases it's\ntypically about crash safety, but that can't be the case here.\n\n\n> +3) DDL\n> +Currently, GTT supports almost all table'DDL except CLUSTER/VACUUM FULL.\n> +Part of the DDL behavior is limited by shared definitions and multiple copies of\n> +local data, and we added some structures to handle this.\n\n> +A shared hash table active_gtt_shared_hash is added to track the state of the\n> +GTT in a different session. This information is recorded in the hash table\n> +during the DDL execution of the GTT.\n\n> +The data stored in a GTT can only be modified or accessed by owning session.\n> +The statements that only modify data in a GTT do not need a high level of\n> +table locking. The operations making those changes include truncate GTT,\n> +reindex GTT, and lock GTT.\n\nI think you need to introduce a bit more terminology for any of this to make\nsense. Sometimes GTT means the global catalog entity, sometimes, like here, it\nappears to mean the session specific contents of a GTT.\n\nWhat state of a GTT in a nother session?\n\n\nHow do GTTs handle something like BEGIN; TRUNCATE some_gtt_table; ROLLBACK;?\n\n\n> +1.2 on commit clause\n> +LTT's status associated with on commit DELETE ROWS and on commit PRESERVE ROWS\n> +is not stored in catalog. 
Instead, GTTs need a bool value on_commit_delete_rows\n> +in reloptions which is shared among sessions.\n\nWhy?\n\n\n\n> +2.3 statistics info\n> +1) relpages reltuples relallvisible relfilenode\n\n?\n\n\n> +3 DDL\n> +3.1. active_gtt_shared_hash\n> +This is the hash table created in shared memory to trace the GTT files initialized\n> +in each session. Each hash entry contains a bitmap that records the backendid of\n> +the initialized GTT file. With this hash table, we know which backend/session\n> +is using this GTT. Such information is used during GTT's DDL operations.\n\nSo there's a separate locking protocol for GTTs that doesn't use the normal\nlocking infrastructure? Why?\n\n\n> +3.7 CLUSTER GTT/VACUUM FULL GTT\n> +The current version does not support.\n\nWhy?\n\n\n> +4 MVCC commit log(clog) cleanup\n> +\n> +The GTT storage file contains transaction information. Queries for GTT data rely\n> +on transaction information such as clog. The transaction information required by\n> +each session may be completely different.\n\nWhy is transaction information different between sessions? Or does this just\nmean that different transaction ids will be accessed?\n\n\n\n0003-gtt-v67-implementation.patch\n 71 files changed, 3167 insertions(+), 195 deletions(-)\n\nThis needs to be broken into smaller chunks to be reviewable.\n\n\n> @@ -677,6 +678,14 @@ _bt_getrootheight(Relation rel)\n> \t{\n> \t\tBuffer\t\tmetabuf;\n> \n> +\t\t/*\n> +\t\t * If a global temporary table storage file is not initialized in the\n> +\t\t * this session, its index does not have a root page, just returns 0.\n> +\t\t */\n> +\t\tif (RELATION_IS_GLOBAL_TEMP(rel) &&\n> +\t\t\t!gtt_storage_attached(RelationGetRelid(rel)))\n> +\t\t\treturn 0;\n> +\n> \t\tmetabuf = _bt_getbuf(rel, BTREE_METAPAGE, BT_READ);\n> \t\tmetad = _bt_getmeta(rel, metabuf);\n\nStuff like this seems not acceptable. Accesses would have to be prevented much\nearlier. Otherwise each index method is going to need copies of this logic. 
I\nalso doubt that _bt_getrootheight() is the only place that'd need this.\n\n\n> static void\n> index_update_stats(Relation rel,\n> \t\t\t\t bool hasindex,\n> -\t\t\t\t double reltuples)\n> +\t\t\t\t double reltuples,\n> +\t\t\t\t bool isreindex)\n> {\n> \tOid\t\t\trelid = RelationGetRelid(rel);\n> \tRelation\tpg_class;\n> @@ -2797,6 +2824,13 @@ index_update_stats(Relation rel,\n> \tForm_pg_class rd_rel;\n> \tbool\t\tdirty;\n> \n> +\t/*\n> +\t * Most of the global Temp table data is updated to the local hash, and reindex\n> +\t * does not refresh relcache, so call a separate function.\n> +\t */\n> +\tif (RELATION_IS_GLOBAL_TEMP(rel))\n> +\t\treturn index_update_gtt_relstats(rel, hasindex, reltuples, isreindex);\n> +\n\nSo basically every single place in the code that does catalog accesses is\ngoing to need a completely separate implementation for GTTs? That seems\nunmaintainable.\n\n\n\n> +/*-------------------------------------------------------------------------\n> + *\n> + * storage_gtt.c\n> + *\t The body implementation of Global temparary table.\n> + *\n> + * IDENTIFICATION\n> + *\t src/backend/catalog/storage_gtt.c\n> + *\n> + *\t See src/backend/catalog/GTT_README for Global temparary table's\n> + *\t requirements and design.\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n\nI don't think that path to the readme is correct.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Feb 2022 23:45:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "I read through this.\nFind attached some language fixes. You should be able to apply each \"fix\"\npatch on top of your own local branch with git am, and then squish them\ntogether. Let me know if you have trouble with that.\n\nI think get_seqence_start_value() should be static. 
(Or otherwise, it should\nbe in lsyscache.c).\n\nThe include added to execPartition.c seems to be unused.\n\n+#define RELATION_IS_TEMP_ON_CURRENT_SESSION(relation) \\\n+#define RELATION_IS_TEMP(relation) \\\n+#define RelpersistenceTsTemp(relpersistence) \\\n+#define RELATION_GTT_ON_COMMIT_DELETE(relation) \\\n\n=> These macros can evaluate their arguments multiple times.\nYou should add a comment to warn about that. And maybe avoid passing them a\nfunction argument, like: RelpersistenceTsTemp(get_rel_persistence(rte->relid))\n\n+list_all_backend_gtt_frozenxids should return TransactionId not int.\nThe function name should say \"oldest\" and not \"all\" ?\n\nI think the GUC should have a longer name. max_active_gtt is too short for a\nglobal var.\n\n+#define MIN_NUM_ACTIVE_GTT 0\n+#define DEFAULT_NUM_ACTIVE_GTT 1000\n+#define MAX_NUM_ACTIVE_GTT 1000000\n\n+int max_active_gtt = MIN_NUM_ACTIVE_GTT\n\nIt's being initialized to MIN, but then the GUC machinery sets it to DEFAULT.\nBy convention, it should be initialized to default.\n\nfout->remoteVersion >= 140000\n\n=> should say 15\n\ndescribe.c has gettext_noop(\"session\"), which is a half-truth. The data is\nper-session but the table definition is persistent..\n\nYou redirect stats from pg_class and pg_statistics to a local hash table.\nThis is pretty hairy :(\nI guess you'd also need to handle pg_statistic_ext and ext_data.\npg_stats doesn't work, since the data isn't in pg_statistic - it'd need to look\nat pg_get_gtt_statistics.\n\nI wonder if there's a better way to do it, like updating pg_statistic but\nforcing the changes to be rolled back when the session ends... 
But I think\nthat would make longrunning sessions behave badly, the same as \"longrunning\ntransactions\".\n\nHave you looked at Gilles Darold's GTT extension ?", "msg_date": "Sat, 26 Feb 2022 18:21:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi\n\nYou redirect stats from pg_class and pg_statistics to a local hash table.\n> This is pretty hairy :(\n> I guess you'd also need to handle pg_statistic_ext and ext_data.\n> pg_stats doesn't work, since the data isn't in pg_statistic - it'd need to\n> look\n> at pg_get_gtt_statistics.\n>\n\nWithout this, the GTT will be terribly slow like current temporary tables\nwith a lot of problems with bloating of pg_class, pg_attribute and\npg_depend tables.\n\nRegards\n\nPavel", "msg_date": "Sun, 27 Feb 2022 04:17:52 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi,\n\nOn 2022-02-27 04:17:52 +0100, Pavel Stehule wrote:\n> > You redirect stats from pg_class and pg_statistics to a local hash table.\n> > This is pretty hairy :(\n\nAs is I think the patch is architecturally completely unacceptable. 
Having\ncode everywhere to redirect to manually written in-memory catalog table code\nisn't maintainable.\n\n\n> > I guess you'd also need to handle pg_statistic_ext and ext_data.\n> > pg_stats doesn't work, since the data isn't in pg_statistic - it'd need to\n> > look\n> > at pg_get_gtt_statistics.\n>\n> Without this, the GTT will be terribly slow like current temporary tables\n> with a lot of problems with bloating of pg_class, pg_attribute and\n> pg_depend tables.\n\nI think it's not a great idea to solve multiple complicated problems at\nonce...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 26 Feb 2022 20:13:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "ne 27. 2. 2022 v 5:13 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> On 2022-02-27 04:17:52 +0100, Pavel Stehule wrote:\n> > > You redirect stats from pg_class and pg_statistics to a local hash\n> table.\n> > > This is pretty hairy :(\n>\n> As is I think the patch is architecturally completely unacceptable. Having\n> code everywhere to redirect to manually written in-memory catalog table\n> code\n> isn't maintainable.\n>\n>\n> > > I guess you'd also need to handle pg_statistic_ext and ext_data.\n> > > pg_stats doesn't work, since the data isn't in pg_statistic - it'd\n> need to\n> > > look\n> > > at pg_get_gtt_statistics.\n> >\n> > Without this, the GTT will be terribly slow like current temporary tables\n> > with a lot of problems with bloating of pg_class, pg_attribute and\n> > pg_depend tables.\n>\n> I think it's not a great idea to solve multiple complicated problems at\n> once...\n>\n\nI thought about this issue for a very long time, and I didn't find any\nbetter (without more significant rewriting of pg storage). In a lot of\nprojects, that I know, the temporary tables are strictly prohibited due\npossible devastating impact to system catalog bloat. 
It is a serious\nproblem. So any implementation of GTT should solve the questions: a) how to\nreduce catalog bloating, b) how to allow session related statistics for\nGTT. I agree so implementation of GTT like template based LTT (local\ntemporary tables) can be very simple (it is possible by extension), but\nwith the same unhappy performance impacts.\n\nI don't say so current design should be accepted without any discussions\nand without changes. Maybe GTT based on LTT can be better than nothing\n(what we have now), and can be good enough for a lot of projects where the\nload is not too high (and almost all projects have low load).\nUnfortunately,it can be a trap for a lot of projects in future, so there\nshould be discussion and proposed solutions for fix of related issues. The\nperformance of GTT should be fixable, so any discussion about this topic\nshould have part about protections against catalog bloat and about cost\nrelated to frequent catalog updates.\n\nBut anyway, I invite (and probably not just me) any discussion on how to\nimplement this feature, how to solve performance issues, and how to divide\nimplementation into smaller steps. I am sure so fast GTT implementation\ncan be used for fast implementation of LTT too, and maybe with all other\ntemporary objects\n\nRegards\n\nPavel\n\n\n> Greetings,\n>\n> Andres Freund\n>", "msg_date": "Sun, 27 Feb 2022 06:09:54 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2022年2月25日 15:45,Andres Freund <andres@anarazel.de> 写道:\n> \n> Hi,\n> \n> \n> This is a huge thread. Realistically reviewers and committers can't reread\n> it. I think there needs to be more of a description of how this works included\n> in the patchset and *why* it works that way. The readme does a bit of that,\n> but not particularly well.\nThank you for your review of the design and code.\nI'm always trying to improve it. If you are confused or need clarification on something, please point it out.\n\n\n> \n> \n> On 2022-02-25 14:26:47 +0800, Wenjing Zeng wrote:\n>> +++ b/README.gtt.txt\n>> @@ -0,0 +1,172 @@\n>> +Global Temporary Table(GTT)\n>> +=========================================\n>> +\n>> +Feature description\n>> +-----------------------------------------\n>> +\n>> +Previously, temporary tables are defined once and automatically\n>> +exist (starting with empty contents) in every session before using them.\n> \n> I think for a README \"previously\" etc isn't good language - if it were\n> commited, it'd not be understandable anymore. It makes more sense for commit\n> messages etc.\nThanks for pointing it out. I will adjust the description.\n\n> \n> \n>> +Main design ideas\n>> +-----------------------------------------\n>> +In general, GTT and LTT use the same storage and buffer design and\n>> +implementation. 
The storage files for both types of temporary tables are named\n>> +as t_backendid_relfilenode, and the local buffer is used to cache the data.\n> \n> What does \"named ast_backendid_relfilenode\" mean?\nThis is the storage file naming format for describing temporary tables.\nIt starts with 't', followed by backendid and relfilenode, connected by an underscore.\nFile naming rules are the same as LTT.\nThe data in the file is no different from regular tables and LTT.\n\n> \n> \n>> +The schema of GTTs is shared among sessions while their data are not. We build\n>> +a new mechanisms to manage those non-shared data and their statistics.\n>> +Here is the summary of changes:\n>> +\n>> +1) CATALOG\n>> +GTTs store session-specific data. The storage information of GTTs'data, their\n>> +transaction information, and their statistics are not stored in the catalog.\n>> +\n>> +2) STORAGE INFO & STATISTICS INFO & TRANSACTION INFO\n>> +In order to maintain durability and availability of GTTs'session-specific data,\n>> +their storage information, statistics, and transaction information is managed\n>> +in a local hash table tt_storage_local_hash.\n> \n> \"maintain durability\"? Durable across what? In the context of databases it's\n> typically about crash safety, but that can't be the case here.\nIt means that the transaction information(relfrozenxid/relminmxid) storage information(relfilenode)\nand statistics(relpages) of GTT, which are maintained in hashtable , not pg_class.\nThis is to allow GTT to store its own local data in different sessions and to avoid frequent catalog changes.\n\n> \n> \n>> +3) DDL\n>> +Currently, GTT supports almost all table'DDL except CLUSTER/VACUUM FULL.\n>> +Part of the DDL behavior is limited by shared definitions and multiple copies of\n>> +local data, and we added some structures to handle this.\n> \n>> +A shared hash table active_gtt_shared_hash is added to track the state of the\n>> +GTT in a different session. 
This information is recorded in the hash table\n>> +during the DDL execution of the GTT.\n> \n>> +The data stored in a GTT can only be modified or accessed by owning session.\n>> +The statements that only modify data in a GTT do not need a high level of\n>> +table locking. The operations making those changes include truncate GTT,\n>> +reindex GTT, and lock GTT.\n> \n> I think you need to introduce a bit more terminology for any of this to make\n> sense. Sometimes GTT means the global catalog entity, sometimes, like here, it\n> appears to mean the session specific contents of a GTT.\n> \n> What state of a GTT in a nother session?\n> \n> \n> How do GTTs handle something like BEGIN; TRUNCATE some_gtt_table; ROLLBACK;?\n\nGTT behaves exactly like a regular table.\nSpecifically, the latest relfilenode for the current session is stored in the hashtable and may change it.\nIf the transaction rolls back, the old relfilenode is enabled again, just as it is in pg_class.\n\n> \n> \n>> +1.2 on commit clause\n>> +LTT's status associated with on commit DELETE ROWS and on commit PRESERVE ROWS\n>> +is not stored in catalog. Instead, GTTs need a bool value on_commit_delete_rows\n>> +in reloptions which is shared among sessions.\n> \n> Why?\nThe LTT is always created and used in the current session. The on commit clause property\ndoes not need to be shared with other sessions. This is why LTT does not record the on commit clause\nin the catalog.\nHowever, GTT's table definitions are shared between sessions, including the on commit clause,\nso it needs to be saved in the catalog.\n\n\n> \n> \n> \n>> +2.3 statistics info\n>> +1) relpages reltuples relallvisible relfilenode\n> \n> ?\nIt was mentioned above.\n\n> \n>> +3 DDL\n>> +3.1. active_gtt_shared_hash\n>> +This is the hash table created in shared memory to trace the GTT files initialized\n>> +in each session. Each hash entry contains a bitmap that records the backendid of\n>> +the initialized GTT file. 
With this hash table, we know which backend/session\n>> +is using this GTT. Such information is used during GTT's DDL operations.\n> \n> So there's a separate locking protocol for GTTs that doesn't use the normal\n> locking infrastructure? Why?\n> \n> \n>> +3.7 CLUSTER GTT/VACUUM FULL GTT\n>> +The current version does not support.\n> \n> Why?\nCurrently, GTT cannot reuse clusters for regular table processes. I choose not to support it for now.\nAlso, I can't think of any scenario that would require clustering for temporary tables, which\nis another reason why not support cluster first.\n\n\n> \n> \n>> +4 MVCC commit log(clog) cleanup\n>> +\n>> +The GTT storage file contains transaction information. Queries for GTT data rely\n>> +on transaction information such as clog. The transaction information required by\n>> +each session may be completely different.\n> \n> Why is transaction information different between sessions? Or does this just\n> mean that different transaction ids will be accessed?\n\nIt has the same meaning as pg_class.relfrozenxid.\nFor the same GTT, the first transaction to write data in each session is different and\nthe data is independent of each other. 
They have a separate frozenxid.\nThe vacuum clog process needs to consider it.\n\n> \n> \n> \n> 0003-gtt-v67-implementation.patch\n> 71 files changed, 3167 insertions(+), 195 deletions(-)\n> \n> This needs to be broken into smaller chunks to be reviewable.\n> \n> \n>> @@ -677,6 +678,14 @@ _bt_getrootheight(Relation rel)\n>> \t{\n>> \t\tBuffer\t\tmetabuf;\n>> \n>> +\t\t/*\n>> +\t\t * If a global temporary table storage file is not initialized in the\n>> +\t\t * this session, its index does not have a root page, just returns 0.\n>> +\t\t */\n>> +\t\tif (RELATION_IS_GLOBAL_TEMP(rel) &&\n>> +\t\t\t!gtt_storage_attached(RelationGetRelid(rel)))\n>> +\t\t\treturn 0;\n>> +\n>> \t\tmetabuf = _bt_getbuf(rel, BTREE_METAPAGE, BT_READ);\n>> \t\tmetad = _bt_getmeta(rel, metabuf);\n> \n> Stuff like this seems not acceptable. Accesses would have to be prevented much\n> earlier. Otherwise each index method is going to need copies of this logic. I\n> also doubt that _bt_getrootheight() is the only place that'd need this.\nYou are right, this is done to solve the empty GTT being queried. I don't need it anymore,\nso I'll get rid of it.\n\n> \n> \n>> static void\n>> index_update_stats(Relation rel,\n>> \t\t\t\t bool hasindex,\n>> -\t\t\t\t double reltuples)\n>> +\t\t\t\t double reltuples,\n>> +\t\t\t\t bool isreindex)\n>> {\n>> \tOid\t\t\trelid = RelationGetRelid(rel);\n>> \tRelation\tpg_class;\n>> @@ -2797,6 +2824,13 @@ index_update_stats(Relation rel,\n>> \tForm_pg_class rd_rel;\n>> \tbool\t\tdirty;\n>> \n>> +\t/*\n>> +\t * Most of the global Temp table data is updated to the local hash, and reindex\n>> +\t * does not refresh relcache, so call a separate function.\n>> +\t */\n>> +\tif (RELATION_IS_GLOBAL_TEMP(rel))\n>> +\t\treturn index_update_gtt_relstats(rel, hasindex, reltuples, isreindex);\n>> +\n> \n> So basically every single place in the code that does catalog accesses is\n> going to need a completely separate implementation for GTTs? 
That seems\n> unmaintainable.\ncreate Index on GTT and VACUUM GTT do this.\nSome info of the table (relhasIndex) need to be updated to pg_class, \nwhile others not (relpages…).\nWould you prefer to extend it on the original function?\n\n\n> \n> \n> \n>> +/*-------------------------------------------------------------------------\n>> + *\n>> + * storage_gtt.c\n>> + *\t The body implementation of Global temparary table.\n>> + *\n>> + * IDENTIFICATION\n>> + *\t src/backend/catalog/storage_gtt.c\n>> + *\n>> + *\t See src/backend/catalog/GTT_README for Global temparary table's\n>> + *\t requirements and design.\n>> + *\n>> + *-------------------------------------------------------------------------\n>> + */\n> \n> I don't think that path to the readme is correct.\nI tried to reorganize it.\n\n\nRegards, Wenjing.\n\n\n\n> \n> Greetings,\n> \n> Andres Freund\n> \n> \n\n\n2022年2月25日 15:45,Andres Freund <andres@anarazel.de> 写道:Hi,This is a huge thread. Realistically reviewers and committers can't rereadit. I think there needs to be more of a description of how this works includedin the patchset and *why* it works that way. The readme does a bit of that,but not particularly well.Thank you for your review of the design and code.I'm always trying to improve it. If you are confused or need clarification on something, please point it out.On 2022-02-25 14:26:47 +0800, Wenjing Zeng wrote:+++ b/README.gtt.txt@@ -0,0 +1,172 @@+Global Temporary Table(GTT)+=========================================++Feature description+-----------------------------------------++Previously, temporary tables are defined once and automatically+exist (starting with empty contents) in every session before using them.I think for a README \"previously\" etc isn't good language - if it werecommited, it'd not be understandable anymore. It makes more sense for commitmessages etc.Thanks for pointing it out. 
I will adjust the description.+Main design ideas+-----------------------------------------+In general, GTT and LTT use the same storage and buffer design and+implementation. The storage files for both types of temporary tables are named+as t_backendid_relfilenode, and the local buffer is used to cache the data.What does \"named ast_backendid_relfilenode\" mean?This is the storage file naming format for describing temporary tables.It starts with 't', followed by backendid and relfilenode, connected by an underscore.File naming rules are the same as LTT.The data in the file is no different from regular tables and LTT.+The schema of GTTs is shared among sessions while their data are not. We build+a new mechanisms to manage those non-shared data and their statistics.+Here is the summary of changes:++1) CATALOG+GTTs store session-specific data. The storage information of GTTs'data, their+transaction information, and their statistics are not stored in the catalog.++2) STORAGE INFO & STATISTICS INFO & TRANSACTION INFO+In order to maintain durability and availability of GTTs'session-specific data,+their storage information, statistics, and transaction information is managed+in a local hash table tt_storage_local_hash.\"maintain durability\"? Durable across what? In the context of databases it'stypically about crash safety, but that can't be the case here.It means that the transaction information(relfrozenxid/relminmxid)  storage information(relfilenode)and statistics(relpages) of GTT, which are maintained in hashtable , not pg_class.This is to allow GTT to store its own local data in different sessions and to avoid frequent catalog changes.+3) DDL+Currently, GTT supports almost all table'DDL except CLUSTER/VACUUM FULL.+Part of the DDL behavior is limited by shared definitions and multiple copies of+local data, and we added some structures to handle this.+A shared hash table active_gtt_shared_hash is added to track the state of the+GTT in a different session. 
This information is recorded in the hash table+during the DDL execution of the GTT.+The data stored in a GTT can only be modified or accessed by owning session.+The statements that only modify data in a GTT do not need a high level of+table locking. The operations making those changes include truncate GTT,+reindex GTT, and lock GTT.I think you need to introduce a bit more terminology for any of this to makesense. Sometimes GTT means the global catalog entity, sometimes, like here, itappears to mean the session specific contents of a GTT.What state of a GTT in a nother session?How do GTTs handle something like BEGIN; TRUNCATE some_gtt_table; ROLLBACK;?GTT behaves exactly like a regular table.Specifically, the latest relfilenode for the current session is stored in the hashtable and may change it.If the transaction rolls back, the old relfilenode is enabled again, just as it is in pg_class.+1.2 on commit clause+LTT's status associated with on commit DELETE ROWS and on commit PRESERVE ROWS+is not stored in catalog. Instead, GTTs need a bool value on_commit_delete_rows+in reloptions which is shared among sessions.Why?The LTT is always created and used in the current session. The on commit clause propertydoes not need to be shared with other sessions. This is why LTT does not record the on commit clausein the catalog.However, GTT's table definitions are shared between sessions, including the on commit clause,so it needs to be saved in the catalog.+2.3 statistics info+1) relpages reltuples relallvisible relfilenode?It was mentioned above.+3 DDL+3.1. active_gtt_shared_hash+This is the hash table created in shared memory to trace the GTT files initialized+in each session. Each hash entry contains a bitmap that records the backendid of+the initialized GTT file. With this hash table, we know which backend/session+is using this GTT. 
Such information is used during GTT's DDL operations.\n\nSo there's a separate locking protocol for GTTs that doesn't use the normal\nlocking infrastructure? Why?\n\n+3.7 CLUSTER GTT/VACUUM FULL GTT\n+The current version does not support.\n\nWhy?\n\nCurrently, GTT cannot reuse clusters for regular table processes. I choose not to support it for now.\nAlso, I can't think of any scenario that would require clustering for temporary tables, which\nis another reason why not support cluster first.\n\n+4 MVCC commit log(clog) cleanup\n+\n+The GTT storage file contains transaction information. Queries for GTT data rely\n+on transaction information such as clog. The transaction information required by\n+each session may be completely different.\n\nWhy is transaction information different between sessions? Or does this just\nmean that different transaction ids will be accessed?\n\nIt has the same meaning as pg_class.relfrozenxid.\nFor the same GTT, the first transaction to write data in each session is different and\nthe data is independent of each other. They have a separate frozenxid.\nThe vacuum clog process needs to consider it.\n\n0003-gtt-v67-implementation.patch\n 71 files changed, 3167 insertions(+), 195 deletions(-)\n\nThis needs to be broken into smaller chunks to be reviewable.\n\n@@ -677,6 +678,14 @@ _bt_getrootheight(Relation rel)\n {\n Buffer metabuf;\n\n+ /*\n+ * If a global temporary table storage file is not initialized in the\n+ * this session, its index does not have a root page, just returns 0.\n+ */\n+ if (RELATION_IS_GLOBAL_TEMP(rel) &&\n+ !gtt_storage_attached(RelationGetRelid(rel)))\n+ return 0;\n+\n metabuf = _bt_getbuf(rel, BTREE_METAPAGE, BT_READ);\n metad = _bt_getmeta(rel, metabuf);\n\nStuff like this seems not acceptable. Accesses would have to be prevented much\nearlier. Otherwise each index method is going to need copies of this logic. I\nalso doubt that _bt_getrootheight() is the only place that'd need this.\n\nYou are right, this is done to solve the empty GTT being queried. I don't need it anymore,\nso I'll get rid of it. 
static void\nindex_update_stats(Relation rel,\n   bool hasindex,\n-   double reltuples)\n+   double reltuples,\n+   bool isreindex)\n {\n Oid relid = RelationGetRelid(rel);\n Relation pg_class;\n@@ -2797,6 +2824,13 @@ index_update_stats(Relation rel,\n Form_pg_class rd_rel;\n bool dirty;\n\n+ /*\n+ * Most of the global Temp table data is updated to the local hash, and reindex\n+ * does not refresh relcache, so call a separate function.\n+ */\n+ if (RELATION_IS_GLOBAL_TEMP(rel))\n+ return index_update_gtt_relstats(rel, hasindex, reltuples, isreindex);\n+\n\nSo basically every single place in the code that does catalog accesses is\ngoing to need a completely separate implementation for GTTs? That seems\nunmaintainable.\n\ncreate Index on GTT and VACUUM GTT do this.\nSome info of the table (relhasIndex) need to be updated to pg_class, while others not (relpages…).\nWould you prefer to extend it on the original function?\n\n+/*-------------------------------------------------------------------------\n+ *\n+ * storage_gtt.c\n+ *  The body implementation of Global temparary table.\n+ *\n+ * IDENTIFICATION\n+ *  src/backend/catalog/storage_gtt.c\n+ *\n+ *  See src/backend/catalog/GTT_README for Global temparary table's\n+ *  requirements and design.\n+ *\n+ *-------------------------------------------------------------------------\n+ */\n\nI don't think that path to the readme is correct.\n\nI tried to reorganize it.\n\nRegards, Wenjing.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 28 Feb 2022 20:08:03 +0800", "msg_from": "Wenjing Zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "> 2022年2月27日 08:21,Justin Pryzby <pryzby@telsasoft.com> 写道:\n> \n> I read through this.\n> Find attached some language fixes. You should be able to apply each \"fix\"\n> patch on top of your own local branch with git am, and then squish them\n> together. Let me know if you have trouble with that.\n> \n> I think get_seqence_start_value() should be static. 
(Or otherwise, it should\n> be in lsyscache.c).\n> \n> The include added to execPartition.c seems to be unused.\n> \n> +#define RELATION_IS_TEMP_ON_CURRENT_SESSION(relation) \\\n> +#define RELATION_IS_TEMP(relation) \\\n> +#define RelpersistenceTsTemp(relpersistence) \\\n> +#define RELATION_GTT_ON_COMMIT_DELETE(relation) \\\n> \n> => These macros can evaluate their arguments multiple times.\n> You should add a comment to warn about that. And maybe avoid passing them a\n> function argument, like: RelpersistenceTsTemp(get_rel_persistence(rte->relid))\n> \n> +list_all_backend_gtt_frozenxids should return TransactionId not int.\n> The function name should say \"oldest\" and not \"all\" ?\n> \n> I think the GUC should have a longer name. max_active_gtt is too short for a\n> global var.\n> \n> +#define MIN_NUM_ACTIVE_GTT 0\n> +#define DEFAULT_NUM_ACTIVE_GTT 1000\n> +#define MAX_NUM_ACTIVE_GTT 1000000\n> \n> +int max_active_gtt = MIN_NUM_ACTIVE_GTT\n> \n> It's being initialized to MIN, but then the GUC machinery sets it to DEFAULT.\n> By convention, it should be initialized to default.\n> \n> fout->remoteVersion >= 140000\n> \n> => should say 15\n> \n> describe.c has gettext_noop(\"session\"), which is a half-truth. The data is\n> per-session but the table definition is persistent..\nThanks for your advice, I will try to merge this part of the code.\n\n> \n> You redirect stats from pg_class and pg_statistics to a local hash table.\n> This is pretty hairy :(\n> I guess you'd also need to handle pg_statistic_ext and ext_data.\n> pg_stats doesn't work, since the data isn't in pg_statistic - it'd need to look\n> at pg_get_gtt_statistics.\n> \n> I wonder if there's a better way to do it, like updating pg_statistic but\n> forcing the changes to be rolled back when the session ends... 
But I think\n> that would make longrunning sessions behave badly, the same as \"longrunning\n> transactions\".\n\nThere are three pieces of data related to session-level GTT data that need to be managed\n1 session-level storage info like relfilenode\n2 session-level like relfrozenxid\n3 session-level stats like relpages or column stats\n\nI think the 1 and 2 are necessary, but not for stats.\nIn the previous email, It has been suggested that GTT statistics not be processed.\nThis means that GTT statistics are not recorded in the localhash or catalog.\nIn my observation, very few users require an accurate query plan for temporary tables to\nperform manual analyze.\nOf course, doing this will also avoid catalog bloat and performance problems.\n\n\n> \n> Have you looked at Gilles Darold's GTT extension ?\nIf you are referring to https://github.com/darold/pgtt <https://github.com/darold/pgtt> , yes.\nIt is smart to use unlogged table as a template and then use LTT to read and write data.\nFor this implementation, I want to point out two things:\n1 For the first insert of GTT in each session, create table or create index is implicitly executed.\n2 The catalog bloat caused by LTT still exist.\n\n\nRegards, Wenjing.\n\n\n> <0002-f-0002-gtt-v64-doc.txt><0004-f-0003-gtt-v64-implementation.txt><0006-f-0004-gtt-v64-regress.txt>
", "msg_date": "Tue, 1 Mar 2022 15:10:11 +0800", "msg_from": "Wenjing Zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "\n\n> 2022年2月27日 12:13,Andres Freund <andres@anarazel.de> 写道:\n> \n> Hi,\n> \n> On 2022-02-27 04:17:52 +0100, Pavel Stehule wrote:\n>>> You redirect stats from pg_class and pg_statistics to a local hash table.\n>>> This is pretty hairy :(\n> \n> As is I think the patch is architecturally completely unacceptable. 
Having\n> code everywhere to redirect to manually written in-memory catalog table code\n> isn't maintainable.\n> \n> \n>>> I guess you'd also need to handle pg_statistic_ext and ext_data.\n>>> pg_stats doesn't work, since the data isn't in pg_statistic - it'd need to\n>>> look\n>>> at pg_get_gtt_statistics.\n>> \n>> Without this, the GTT will be terribly slow like current temporary tables\n>> with a lot of problems with bloating of pg_class, pg_attribute and\n>> pg_depend tables.\n> \n> I think it's not a great idea to solve multiple complicated problems at\n> once...\n\nI'm trying to break down the entire implementation into multiple sub-patches.\n\n\nRegards, Wenjing.\n\n\n> \n> Greetings,\n> \n> Andres Freund\n> \n> \n\n\n\n", "msg_date": "Wed, 2 Mar 2022 10:52:54 +0800", "msg_from": "Wenjing Zeng <wjzeng2012@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": ">In my observation, very few users require an accurate query plan for\ntemporary tables to\nperform manual analyze.\n\nAbsolutely not true in my observations or personal experience. It's one of\nthe main reasons I have needed to use (local) temporary tables rather than\njust materializing a CTE when decomposing queries that are too complex for\nPostgres to handle.\n\nI wish I could use GTT to avoid the catalog bloat in those instances, but\nthat will only be possible if the query plans are accurate.\n
", "msg_date": "Wed, 2 Mar 2022 13:02:17 -0500", "msg_from": "Adam Brusselback <adambrusselback@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "st 2. 3. 2022 v 19:02 odesílatel Adam Brusselback <adambrusselback@gmail.com>\nnapsal:\n\n> >In my observation, very few users require an accurate query plan for\n> temporary tables to\n> perform manual analyze.\n>\n> Absolutely not true in my observations or personal experience. It's one of\n> the main reasons I have needed to use (local) temporary tables rather than\n> just materializing a CTE when decomposing queries that are too complex for\n> Postgres to handle.\n>\n> I wish I could use GTT to avoid the catalog bloat in those instances, but\n> that will only be possible if the query plans are accurate.\n>\n\nThis strongly depends on usage. Very common patterns from MSSQL don't need\nstatistics. But on second thought, sometimes, the query should be divided\nand temp tables are used for storing some middle results. In this case, you\ncannot exist without statistics. In the first case, the temp tables can be\nreplaced by arrays. In the second case, the temp tables are not replaceable.\n\nRegards\n\nPavel\n
", "msg_date": "Wed, 2 Mar 2022 19:08:12 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi,\n\nOn 2022-02-27 06:09:54 +0100, Pavel Stehule wrote:\n> ne 27. 2. 2022 v 5:13 odesílatel Andres Freund <andres@anarazel.de> napsal:\n> > On 2022-02-27 04:17:52 +0100, Pavel Stehule wrote:\n> > > Without this, the GTT will be terribly slow like current temporary tables\n> > > with a lot of problems with bloating of pg_class, pg_attribute and\n> > > pg_depend tables.\n> >\n> > I think it's not a great idea to solve multiple complicated problems at\n> > once...\n\n> I thought about this issue for a very long time, and I didn't find any\n> better (without more significant rewriting of pg storage). In a lot of\n> projects, that I know, the temporary tables are strictly prohibited due\n> possible devastating impact to system catalog bloat. It is a serious\n> problem. So any implementation of GTT should solve the questions: a) how to\n> reduce catalog bloating, b) how to allow session related statistics for\n> GTT. 
I agree so implementation of GTT like template based LTT (local\n> temporary tables) can be very simple (it is possible by extension), but\n> with the same unhappy performance impacts.\n\n> I don't say so current design should be accepted without any discussions\n> and without changes. Maybe GTT based on LTT can be better than nothing\n> (what we have now), and can be good enough for a lot of projects where the\n> load is not too high (and almost all projects have low load).\n\nI think there's just no way that it can be merged with anything close to the\ncurrent design - it's unmaintainable. The need for the feature doesn't change\nthat.\n\nThat's not to say it's impossible to come up with a workable design. But it's\ndefinitely not easy. If I were to work on this - which I am not planning to -\nI'd try to solve the problems of \"LTT\" first, with an eye towards using the\ninfrastructure for GTT.\n\nI think you'd basically have to come up with a generic design for partitioning\ncatalog tables into local / non-local storage, without needing explicit code\nfor each catalog. That could also be used to store the default catalog\ncontents separately from user defined ones (e.g. pg_proc is pretty large).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 2 Mar 2022 13:17:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi\n\n\n> I think you'd basically have to come up with a generic design for\n> partitioning\n> catalog tables into local / non-local storage, without needing explicit\n> code\n> for each catalog. That could also be used to store the default catalog\n> contents separately from user defined ones (e.g. 
pg_proc is pretty large).\n>\n\nThere is still a risk of bloating in local storage, but, mainly, you\nprobably have to modify a lot of lines because the system cache doesn't\nsupport partitioning.\n\nRegards\n\nPavel\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n", "msg_date": "Thu, 3 Mar 2022 02:35:02 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Wed, Mar 2, 2022 at 4:18 PM Andres Freund <andres@anarazel.de> wrote:\n> I think there's just no way that it can be merged with anything close to the\n> current design - it's unmaintainable. The need for the feature doesn't change\n> that.\n\nI don't know whether the design is right or wrong, but I agree that a\nbad design isn't OK just because we need the feature. I'm not entirely\nconvinced that the change to _bt_getrootheight() is a red flag,\nalthough I agree that there is a need to explain and justify why\nsimilar changes aren't needed in other places. But I think overall\nthis patch is just too big and too unpolished to be seriously\nconsidered. 
It clearly needs to be broken down into incremental\npatches that are not just separated by topic but potentially\nindependently committable, with proposed commit messages for each.\n\nAnd, like, there's a long history on this thread of people pointing\nout particular crash bugs and particular problems with code comments\nor whatever and I guess those are getting fixed as they are reported,\nbut I do not have the feeling that the overall code quality is\nterribly high, because people just keep finding more stuff. Like look\nat this:\n\n+ uint8 flags = 0;\n+\n+ /* return 0 if feature is disabled */\n+ if (max_active_gtt <= 0)\n+ return InvalidTransactionId;\n+\n+ /* Disable in standby node */\n+ if (RecoveryInProgress())\n+ return InvalidTransactionId;\n+\n+ flags |= PROC_IS_AUTOVACUUM;\n+ flags |= PROC_IN_LOGICAL_DECODING;\n+\n+ LWLockAcquire(ProcArrayLock, LW_SHARED);\n+ arrayP = procArray;\n+ for (index = 0; index < arrayP->numProcs; index++)\n+ {\n+ int pgprocno = arrayP->pgprocnos[index];\n+ PGPROC *proc = &allProcs[pgprocno];\n+ uint8 statusFlags = ProcGlobal->statusFlags[index];\n+ TransactionId gtt_frozenxid = InvalidTransactionId;\n+\n+ if (statusFlags & flags)\n+ continue;\n\nThis looks like code someone wrote, modified multiple times as they\nfound problems, and never cleaned up. 'flags' gets set to 0, and then\nunconditionally gets two bits xor'd in, and then we test it against\nstatusFlags. Probably there shouldn't be a local variable at all, and\nif there is, the value should be set properly from the start instead\nof constructed incrementally as we go along. And there should be\ncomments. Why is it OK to return InvalidTransactionId in standby mode?\nWhy is it OK to pass that flags value? And, if we look at this\nfunction a little further down, is it really OK to hold ProcArrayLock\nacross an operation that could perform multiple memory allocation\noperations? 
I bet it's not, unless calls are very infrequent in\npractice.\n\nI'm not asking for this particular part of the code to be cleaned up.\nI'm asking for the whole patch to be cleaned up. Like, nobody who is a\ncommitter is going to have enough time to go through the patch\nfunction by function and point out issues on this level of detail in\nevery place where they occur. Worse, discussing all of those issues is\njust a distraction from the real task of figuring out whether the\ndesign needs adjustment. Because the patch is one massive code drop,\nand with not-really-that-clean code and not-that-great comments, it's\nalmost impossible to review. I don't plan to try unless the quality\nimproves a lot. I'm not saying it's the worst code ever written, but I\nthink it's kind of at a level of \"well, it seems to work for me,\" and\nthe standard around here is higher than that. It's not the job of the\ncommunity or of individual committers to prove that problems exist in\nthis patch and therefore it shouldn't be committed. It's the job of\nthe author to prove that there aren't and it should be. And I don't\nthink we're close to that at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Mar 2022 11:22:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "It doesn't look like this is going to get committed this release\ncycle. 
I understand more feedback could be valuable, especially on the\noverall design, but as this is the last commitfest of the release we\nshould focus on other patches for now and spend that time in the next\nrelease cycle.\n\nI'm going to bump this one now as Waiting on Author for the design\ndocumentation Robert asks for and probably a plan for how to separate\nthat design into multiple separable features as Andres suggested.\n\nI'm still hopeful we get to advance this early in 16 because I think\neveryone agrees the feature would be great.\n\n\n", "msg_date": "Thu, 3 Mar 2022 15:28:52 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On Thu, Mar 3, 2022 at 3:29 PM Greg Stark <stark@mit.edu> wrote:\n> I'm still hopeful we get to advance this early in 16 because I think\n> everyone agrees the feature would be great.\n\nI'm not saying this patch can't make progress, but I think the chances\nof this being ready to commit any time in the v16 release cycle, let\nalone at the beginning, are low. This patch set has been around since\n2019, and here Andres and I are saying it's not even really reviewable\nin the shape that it's in. I have done some review of it previously,\nBTW, but eventually I gave up because it just didn't seem like we were\nmaking any progress. 
And then a long time after that people were still\nfinding many server crashes with relatively simple test cases.\n\nI agree that the feature is desirable, but I think getting there is\ngoing to require a huge amount of effort that may amount to a total\nrewrite of the patch.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Mar 2022 16:07:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "Hi,\n\nOn 2022-03-03 16:07:37 -0500, Robert Haas wrote:\n> On Thu, Mar 3, 2022 at 3:29 PM Greg Stark <stark@mit.edu> wrote:\n> > I'm still hopeful we get to advance this early in 16 because I think\n> > everyone agrees the feature would be great.\n> \n> I'm not saying this patch can't make progress, but I think the chances\n> of this being ready to commit any time in the v16 release cycle, let\n> alone at the beginning, are low. This patch set has been around since\n> 2019, and here Andres and I are saying it's not even really reviewable\n> in the shape that it's in. I have done some review of it previously,\n> BTW, but eventually I gave up because it just didn't seem like we were\n> making any progress. And then a long time after that people were still\n> finding many server crashes with relatively simple test cases.\n> \n> I agree that the feature is desirable, but I think getting there is\n> going to require a huge amount of effort that may amount to a total\n> rewrite of the patch.\n\nAgreed. 
I think this needs very fundamental design work, and the patch itself\nisn't worth reviewing until that's tackled.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 3 Mar 2022 13:20:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" }, { "msg_contents": "On 3/3/22 13:20, Andres Freund wrote:\n> On 2022-03-03 16:07:37 -0500, Robert Haas wrote:\n>> I agree that the feature is desirable, but I think getting there is\n>> going to require a huge amount of effort that may amount to a total\n>> rewrite of the patch.\n> \n> Agreed. I think this needs very fundamental design work, and the patch itself\n> isn't worth reviewing until that's tackled.\n\nGiven two opinions that the patch can't be effectively reviewed as-is, I\nwill mark this RwF for this commitfest. Anyone up for shepherding the\ndesign conversations, going forward?\n\n--Jacob\n\n\n", "msg_date": "Thu, 30 Jun 2022 13:54:39 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Global temporary tables" } ]
[ { "msg_contents": "I've reduced the failing query as much as possible to this:\n\n-- This is necessary to fail:\nSET enable_nestloop=off;\n\nSELECT * FROM\n (SELECT start_time, t1.site_id\n FROM pgw_kpi_view t1\n -- Apparently the where clause is necessary to fail...\n WHERE (start_time>='2019-10-10' AND start_time<'2019-10-11')\n -- The group by MAY be necessary to fail...\n GROUP BY 1,2\n ) AS data\nJOIN sites ON ( sites.site_location='' OR sites.site_office=data.site_id)\n\nThe view is actually a join of two relkind=p partitioned tables (which I\nwill acknowledge probably performs poorly).\n\n(gdb) bt\n#0 errfinish (dummy=dummy@entry=0) at elog.c:411\n#1 0x000000000087a959 in elog_finish (elevel=elevel@entry=20, fmt=fmt@entry=0x9d93d8 \"could not find pathkey item to sort\") at elog.c:1365\n#2 0x00000000006a587f in prepare_sort_from_pathkeys (lefttree=0x7f7cb84492e8, pathkeys=<optimized out>, relids=0x7f7cb8410700, reqColIdx=reqColIdx@entry=0x0, adjust_tlist_in_place=<optimized out>, \n adjust_tlist_in_place@entry=false, p_numsortkeys=p_numsortkeys@entry=0x7ffc4b2e10c4, p_sortColIdx=p_sortColIdx@entry=0x7ffc4b2e10c8, p_sortOperators=p_sortOperators@entry=0x7ffc4b2e10d0, \n p_collations=p_collations@entry=0x7ffc4b2e10d8, p_nullsFirst=p_nullsFirst@entry=0x7ffc4b2e10e0) at createplan.c:5893\n#3 0x00000000006a5a6a in make_sort_from_pathkeys (lefttree=<optimized out>, pathkeys=<optimized out>, relids=<optimized out>) at createplan.c:6020\n#4 0x00000000006a6e30 in create_sort_plan (flags=4, best_path=0x7f7cb8410cc8, root=0x7f7fdc3ac6b0) at createplan.c:1985\n#5 create_plan_recurse (root=root@entry=0x7f7fdc3ac6b0, best_path=0x7f7cb8410cc8, flags=flags@entry=4) at createplan.c:459\n#6 0x00000000006a6e4e in create_group_plan (best_path=0x7f7cb8410d58, root=0x7f7fdc3ac6b0) at createplan.c:2012\n#7 create_plan_recurse (root=root@entry=0x7f7fdc3ac6b0, best_path=best_path@entry=0x7f7cb8410d58, flags=flags@entry=1) at createplan.c:464\n#8 0x00000000006a8278 in 
create_merge_append_plan (flags=4, best_path=0x7f7cb8446cd8, root=0x7f7fdc3ac6b0) at createplan.c:1333\n#9 create_plan_recurse (root=root@entry=0x7f7fdc3ac6b0, best_path=0x7f7cb8446cd8, flags=flags@entry=4) at createplan.c:402\n#10 0x00000000006a6e4e in create_group_plan (best_path=0x7f7cb84486c8, root=0x7f7fdc3ac6b0) at createplan.c:2012\n#11 create_plan_recurse (root=root@entry=0x7f7fdc3ac6b0, best_path=0x7f7cb84486c8, flags=flags@entry=1) at createplan.c:464\n#12 0x00000000006a9739 in create_plan (root=0x7f7fdc3ac6b0, best_path=<optimized out>) at createplan.c:325\n#13 0x00000000006aa988 in create_subqueryscan_plan (scan_clauses=0x0, tlist=0x7f7cb8450820, best_path=0x7f7cb8448db8, root=0x7f7fdc34b948) at createplan.c:3385\n#14 create_scan_plan (root=root@entry=0x7f7fdc34b948, best_path=best_path@entry=0x7f7cb8448db8, flags=<optimized out>, flags@entry=0) at createplan.c:670\n#15 0x00000000006a6d31 in create_plan_recurse (root=root@entry=0x7f7fdc34b948, best_path=0x7f7cb8448db8, flags=flags@entry=0) at createplan.c:427\n#16 0x00000000006a983a in create_nestloop_plan (best_path=0x7f7cb844fb80, root=0x7f7fdc34b948) at createplan.c:4008\n#17 create_join_plan (root=root@entry=0x7f7fdc34b948, best_path=best_path@entry=0x7f7cb844fb80) at createplan.c:1020\n#18 0x00000000006a6d75 in create_plan_recurse (root=root@entry=0x7f7fdc34b948, best_path=0x7f7cb844fb80, flags=flags@entry=1) at createplan.c:393\n#19 0x00000000006a9739 in create_plan (root=root@entry=0x7f7fdc34b948, best_path=<optimized out>) at createplan.c:325\n#20 0x00000000006b5a04 in standard_planner (parse=0x1bd2308, cursorOptions=256, boundParams=0x0) at planner.c:413\n#21 0x000000000075fb2e in pg_plan_query (querytree=querytree@entry=0x1bd2308, cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:878\n#22 0x000000000075fbee in pg_plan_queries (querytrees=<optimized out>, cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0) at postgres.c:968\n#23 
0x000000000076007a in exec_simple_query (\n query_string=0x1ba36e0 \"SELECT * FROM\\n\\t(SELECT start_time, t1.site_id\\n\\tFROM pgw_kpi_view t1\\n\\t\\n\\tWHERE (start_time>='2019-10-10' AND start_time<'2019-10-11')\\n\\t\\n\\tGROUP BY 1,2\\n\\t) AS data\\nJOIN sites ON ( sites.site_location='' OR\"...) at postgres.c:1143\n#24 0x0000000000761212 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x1bd8e70, dbname=0x1bd8d10 \"ts\", username=<optimized out>) at postgres.c:4236\n#25 0x0000000000483d02 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4431\n#26 BackendStartup (port=0x1bd5190) at postmaster.c:4122\n#27 ServerLoop () at postmaster.c:1704\n#28 0x00000000006f0b1f in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1b9e280) at postmaster.c:1377\n#29 0x0000000000484c93 in main (argc=3, argv=0x1b9e280) at main.c:228\n\n\nbt f:\n\n#2 0x00000000006a587f in prepare_sort_from_pathkeys (lefttree=0x7f7cb84492e8, pathkeys=<optimized out>, relids=0x7f7cb8410700, reqColIdx=reqColIdx@entry=0x0, adjust_tlist_in_place=<optimized out>, \n adjust_tlist_in_place@entry=false, p_numsortkeys=p_numsortkeys@entry=0x7ffc4b2e10c4, p_sortColIdx=p_sortColIdx@entry=0x7ffc4b2e10c8, p_sortOperators=p_sortOperators@entry=0x7ffc4b2e10d0, \n p_collations=p_collations@entry=0x7ffc4b2e10d8, p_nullsFirst=p_nullsFirst@entry=0x7ffc4b2e10e0) at createplan.c:5893\n sortexpr = <optimized out>\n ec = 0x7f7cb8edbe28\n em = <optimized out>\n tle = <optimized out>\n pathkey = <optimized out>\n pk_datatype = <optimized out>\n sortop = <optimized out>\n j = <optimized out>\n tlist = 0x7f7cb8451bb8\n i = 0x7f7cb8edc2d8\n numsortkeys = 0\n sortColIdx = 0x7f7cb8451c58\n sortOperators = 0x7f7cb8451c70\n collations = 0x7f7cb8451c88\n nullsFirst = 0x7f7cb8451ca0\n __func__ = \"prepare_sort_from_pathkeys\"\n#3 0x00000000006a5a6a in make_sort_from_pathkeys (lefttree=<optimized out>, pathkeys=<optimized out>, relids=<optimized out>) at createplan.c:6020\n numsortkeys = 32636\n 
sortColIdx = 0x7f7cb8447468\n sortOperators = 0x7f7cb83fa278\n collations = 0x0\n nullsFirst = 0x7f7cb8edc2f8\n\n\n", "msg_date": "Fri, 11 Oct 2019 09:37:03 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I've reduced the failing query as much as possible to this:\n> -- This is necessary to fail:\n> SET enable_nestloop=off;\n\n> SELECT * FROM\n> (SELECT start_time, t1.site_id\n> FROM pgw_kpi_view t1\n> -- Apparently the where clause is necessary to fail...\n> WHERE (start_time>='2019-10-10' AND start_time<'2019-10-11')\n> -- The group by MAY be necessary to fail...\n> GROUP BY 1,2\n> ) AS data\n> JOIN sites ON ( sites.site_location='' OR sites.site_office=data.site_id)\n\n> The view is actually a join of two relkind=p partitioned tables (which I\n> will acknowledge probably performs poorly).\n\nCould you provide a self-contained test case please?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Oct 2019 10:48:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "On Fri, Oct 11, 2019 at 10:48:37AM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > The view is actually a join of two relkind=p partitioned tables (which I\n> > will acknowledge probably performs poorly).\n> \n> Could you provide a self-contained test case please?\n\nWorking on it. 
FWIW explain for a v11 customer looks like this:\n\n Nested Loop (cost=10000011818.10..10000076508.23 rows=734500 width=159)\n Join Filter: ((s.site_location = ''::text) OR (s.site_office = (COALESCE(huawei_ggsn_201610.ne_name, huawei_ggsn_gw_201610.ne_name))))\n -> Group (cost=11818.10..11946.31 rows=2937 width=40)\n Group Key: (COALESCE(huawei_ggsn_201610.start_time, huawei_ggsn_gw_201610.start_time)), (COALESCE(huawei_ggsn_201610.ne_name, huawei_ggsn_gw_201610.ne_name))\n -> Merge Append (cost=11818.10..11931.59 rows=2944 width=40)\n Sort Key: (COALESCE(huawei_ggsn_201610.start_time, huawei_ggsn_gw_201610.start_time)), (COALESCE(huawei_ggsn_201610.ne_name, huawei_ggsn_gw_201610.ne_name))\n -> Group (cost=332.48..333.10 rows=83 width=40)\n Group Key: (COALESCE(huawei_ggsn_201610.start_time, huawei_ggsn_gw_201610.start_time)), (COALESCE(huawei_ggsn_201610.ne_name, huawei_ggsn_gw_201610.ne_name))\n -> Sort (cost=332.48..332.69 rows=83 width=40)\n Sort Key: (COALESCE(huawei_ggsn_201610.start_time, huawei_ggsn_gw_201610.start_time)), (COALESCE(huawei_ggsn_201610.ne_name, huawei_ggsn_gw_201610.ne_name))\n -> Hash Full Join (cost=46.48..329.84 rows=83 width=40)\n Hash Cond: ((huawei_ggsn_201610.ne_name = huawei_ggsn_gw_201610.ne_name) AND (huawei_ggsn_201610.ggsn_function = huawei_ggsn_gw_201610.ggsn_function) AND (huawei_ggsn_201610.start_time = huawei_\nggsn_gw_201610.start_time) AND (huawei_ggsn_201610.interval_seconds = huawei_ggsn_gw_201610.interval_seconds) AND (huawei_ggsn_201610.device_id = huawei_ggsn_gw_201610.device_id) AND (huawei_ggsn_201610.c_134710251 = huawei_ggs\nn_gw_201610.c_134710251) AND (huawei_ggsn_201610.c_134710252 = huawei_ggsn_gw_201610.c_134710252) AND (huawei_ggsn_201610.c_134710253 = huawei_ggsn_gw_201610.c_134710253) AND (huawei_ggsn_201610.ne_id = huawei_ggsn_gw_201610.ne\n_id) AND (huawei_ggsn_201610.ugw_function = huawei_ggsn_gw_201610.ugw_function))\n Filter: ((COALESCE(huawei_ggsn_201610.start_time, huawei_ggsn_gw_201610.start_time) 
>= '2019-10-01 00:00:00-11'::timestamp with time zone) AND (COALESCE(huawei_ggsn_201610.start_time, huawei_ggs\nn_gw_201610.start_time) < '2019-10-02 00:00:00-11'::timestamp with time zone))\n -> Seq Scan on huawei_ggsn_201610 (cost=0.00..255.44 rows=744 width=94)\n -> Hash (cost=20.44..20.44 rows=744 width=94)\n -> Seq Scan on huawei_ggsn_gw_201610 (cost=0.00..20.44 rows=744 width=94)\n[...]\n\nI'm suspecting this; is it useful to test with this commit reverted ?\n\ncommit 8edd0e79460b414b1d971895312e549e95e12e4f\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Mon Mar 25 15:42:35 2019 -0400\n\n Suppress Append and MergeAppend plan nodes that have a single child.\n\n\n\n", "msg_date": "Fri, 11 Oct 2019 12:59:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Oct 11, 2019 at 10:48:37AM -0400, Tom Lane wrote:\n>> Could you provide a self-contained test case please?\n\n> I'm suspecting this; is it useful to test with this commit reverted ?\n\nI wouldn't bother; we'd still need a test case to find out what the\nproblem is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Oct 2019 15:13:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "On Fri, Oct 11, 2019 at 10:48:37AM -0400, Tom Lane wrote:\n> Could you provide a self-contained test case please?\n\nSET enable_partitionwise_aggregate = 'on';\nSET enable_partitionwise_join = 'on';\nSET max_parallel_workers_per_gather=0;\n-- maybe not important but explain(settings) suggests I should include them for completeness:\nSET effective_io_concurrency = '0';\nSET work_mem = '512MB';\nSET jit = 'off';\n\nCREATE TABLE s(site_id int, site_location text, site_office text);\nINSERT INTO s SELECT generate_series(1,99),'','';\n\nCREATE 
TABLE t(start_time timestamp, site_id text, i int)PARTITION BY RANGE(start_time);\nCREATE TABLE t1 PARTITION OF t FOR VALUES FROM ('2019-10-01')TO('2019-10-02');\nINSERT INTO t1 SELECT a,b FROM generate_series( '2019-10-01'::timestamp, '2019-10-01 23:45'::timestamp, '15 minutes')a, generate_series(1,99)b, generate_series(1,99)c;\nCREATE TABLE t2 PARTITION OF t FOR VALUES FROM ('2019-10-02')TO('2019-10-03');\nINSERT INTO t2 SELECT a,b FROM generate_series( '2019-10-02'::timestamp, '2019-10-02 23:45'::timestamp, '15 minutes')a, generate_series(1,99)b, generate_series(1,99)c;\n\nANALYZE s,t;\n\nexplain\nSELECT s.* FROM\n (SELECT start_time, site_id::int\n FROM t t1 FULL JOIN t t2 USING(start_time,site_id)\n WHERE (start_time>='2019-10-01' AND start_time<'2019-10-01 01:00')\n GROUP BY 1,2) AS data\nJOIN s ON (s.site_location='' OR s.site_office::int=data.site_id)\n\nJustin\n\n\n", "msg_date": "Sat, 12 Oct 2019 17:23:46 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Oct 11, 2019 at 10:48:37AM -0400, Tom Lane wrote:\n>> Could you provide a self-contained test case please?\n\n> [ test case ]\n\nYup, fails for me too. 
Will look shortly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Oct 2019 20:09:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Oct 11, 2019 at 10:48:37AM -0400, Tom Lane wrote:\n>> Could you provide a self-contained test case please?\n\n> [ test case ]\n\nOh, this is the same issue Amit described in \n\nhttps://www.postgresql.org/message-id/flat/CA%2BHiwqG2WVUGmLJqtR0tPFhniO%3DH%3D9qQ%2BZ3L_ZC%2BY3-EVQHFGg%40mail.gmail.com\n\nnamely that we're not generating EquivalenceClass members corresponding\nto sub-joins of a partitionwise join.\n\nAre you interested in helping to test the patches proposed there?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Oct 2019 14:06:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "On Sun, Oct 13, 2019 at 02:06:02PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Fri, Oct 11, 2019 at 10:48:37AM -0400, Tom Lane wrote:\n> >> Could you provide a self-contained test case please?\n> \n> > [ test case ]\n> \n> Oh, this is the same issue Amit described in \n> \n> https://www.postgresql.org/message-id/flat/CA%2BHiwqG2WVUGmLJqtR0tPFhniO%3DH%3D9qQ%2BZ3L_ZC%2BY3-EVQHFGg%40mail.gmail.com\n> \n> namely that we're not generating EquivalenceClass members corresponding\n> to sub-joins of a partitionwise join.\n> \n> Are you interested in helping to test the patches proposed there?\n\nSure. 
Any requests other than testing that our original query works correctly\nand maybe endeavoring to read the patch ?\n\nBTW it probably should've been documented as an \"Open Item\" for v12.\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n\n", "msg_date": "Sun, 13 Oct 2019 13:30:29 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "On Sun, Oct 13, 2019 at 01:30:29PM -0500, Justin Pryzby wrote:\n> BTW it probably should've been documented as an \"Open Item\" for v12.\n\nhttps://commitfest.postgresql.org/25/2278/\nI realized possibly people were thinking of that as a \"feature\" and not a\nbugfix for backpatch (?)\n\nBut, my issue is a query which worked under v11 PWJ but fails under v12\n(apparently broken by d25ea01275).\n\nIn my mind, if the planner doesn't support that query with PWJ, I think it\nshould run without PWJ rather than fail.\n\nJustin\n\n\n", "msg_date": "Mon, 14 Oct 2019 09:37:25 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Oct 13, 2019 at 01:30:29PM -0500, Justin Pryzby wrote:\n>> BTW it probably should've been documented as an \"Open Item\" for v12.\n\n> https://commitfest.postgresql.org/25/2278/\n> I realized possibly people were thinking of that as a \"feature\" and not a\n> bugfix for backpatch (?)\n> But, my issue is a query which worked under v11 PWJ but fails under v12\n> (apparently broken by d25ea01275).\n\nYeah, this should have been dealt with as an open item, but it\nslipped through the cracks. We'll make sure to get it fixed,\none way or another, for 12.1.\n\nIn view of the proposed patches being dependent on some other\n13-only changes, I wonder if we should fix v12 by reverting\nd25ea0127. 
The potential planner performance loss for large\npartition sets could be nasty, but failing to plan at all is worse.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Oct 2019 10:54:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Sorry about the late reply.\n\nOn Mon, Oct 14, 2019 at 11:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Sun, Oct 13, 2019 at 01:30:29PM -0500, Justin Pryzby wrote:\n> >> BTW it probably should've been documented as an \"Open Item\" for v12.\n>\n> > https://commitfest.postgresql.org/25/2278/\n> > I realized possibly people were thinking of that as a \"feature\" and not a\n> > bugfix for backpatch (?)\n> > But, my issue is a query which worked under v11 PWJ but fails under v12\n> > (apparently broken by d25ea01275).\n>\n> Yeah, this should have been dealt with as an open item, but it\n> slipped through the cracks. We'll make sure to get it fixed,\n> one way or another, for 12.1.\n>\n> In view of the proposed patches being dependent on some other\n> 13-only changes, I wonder if we should fix v12 by reverting\n> d25ea0127. The potential planner performance loss for large\n> partition sets could be nasty, but failing to plan at all is worse.\n\nActually, the patch I proposed to fix equivalence code can be applied\non its own. 
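To illustrate the failure mechanism being fixed: prepare_sort_from_pathkeys() scans a pathkey's equivalence class for a member whose relids are all supplied by the relation being sorted, and raises "could not find pathkey item to sort" when none qualifies. A toy Python model of just that relids check (frozensets standing in for Relids bitmapsets; all names and relid numbers here are hypothetical, this is not the planner code):

```python
# Toy model of the pathkey lookup that raises the error in this thread.
# Relids are modeled as frozensets of integers; an EquivalenceClass is a
# list of (expression, relids) members.

class PathkeyError(Exception):
    pass

def find_ec_member(ec_members, input_relids):
    # Roughly what prepare_sort_from_pathkeys() does: take the first
    # member whose relids are entirely supplied by the input relation.
    for expr, relids in ec_members:
        if relids <= input_relids:
            return expr
    raise PathkeyError("could not find pathkey item to sort")

# EC with members for parent rels 1 and 2 *and* for their children
# 101 and 102 (what add_child_rel_equivalences provides for baserels):
ec = [
    ("t1.site_id", frozenset({1})),
    ("t2.site_id", frozenset({2})),
    ("t1_child.site_id", frozenset({101})),
    ("t2_child.site_id", frozenset({102})),
]
assert find_ec_member(ec, frozenset({101, 102})) == "t1_child.site_id"

# A member spanning both parents (e.g. the COALESCE produced by a FULL
# JOIN) exists only in parent form, so sorting the child join rel fails:
ec_join = [("COALESCE(t1.x, t2.x)", frozenset({1, 2}))]
try:
    find_ec_member(ec_join, frozenset({101, 102}))
    missing_member = False
except PathkeyError:
    missing_member = True
assert missing_member

# The fix under discussion adds the translated child-join member:
ec_join.append(("COALESCE(t1_child.x, t2_child.x)", frozenset({101, 102})))
assert find_ec_member(ec_join, frozenset({101, 102})) \
    == "COALESCE(t1_child.x, t2_child.x)"
```

In the real planner the lookup also matches expressions against the target list and checks sort operators and so on; the sketch only captures the relids subset test that goes wrong here.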
The example I gave on that thread needs us to fix\npartitionwise code to even work, which is perhaps a 13-only change,\nbut we have an example here that is broken due to d25ea01275.\nPerhaps, considering applying my patch seems better than alternatives\nwhich are either reverting d25ea01275 or avoiding doing partitionwise\njoin for such queries.\n\nSince we've got 3373c71553 (\"Speed up finding EquivalenceClasses for a\ngiven set of rels\") in HEAD, need two versions of the patch; please\nsee attached.\n\nThanks,\nAmit", "msg_date": "Thu, 24 Oct 2019 19:01:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Mon, Oct 14, 2019 at 11:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In view of the proposed patches being dependent on some other\n>> 13-only changes, I wonder if we should fix v12 by reverting\n>> d25ea0127. The potential planner performance loss for large\n>> partition sets could be nasty, but failing to plan at all is worse.\n\n> Actually, the patch I proposed to fix equivalence code can be applied\n> on its own. The example I gave on that thread needs us to fix\n> partitionwise code to even work, which is perhaps a 13-only change,\n> but we have an example here that is broken due to d25ea01275.\n> Perhaps, considering applying my patch seems better than alternatives\n> which are either reverting d25ea01275 or avoiding doing partitionwise\n> join for such queries.\n\nI looked at this a bit, and I see that the core idea is to generate\nthe missing EC members by applying add_child_rel_equivalences when\nbuilding child join rels. Perhaps we can make that work, but this\npatch fails to, because you've ignored the comment pointing out\nthat the nullable_relids fixup logic only works for baserels:\n\n * And likewise for nullable_relids. 
Note this code assumes\n * parent and child relids are singletons.\n\nWe could improve that to work for joinrels, I think, but it would become\nvery much more complicated (and slower). AFAICS the logic would have\nto be \"iterate through top_parent_relids, see which ones are in\nem_nullable_relids, and for each one that is, find the corresponding\nchild_relid and substitute that in new_nullable_relids\". This is a\nbit of a problem because we don't have any really convenient way to\nmap individual top parent relids to child relids. I wonder if we\nshould extend AppendRelInfo to include the top parent relid as well as\nthe immediate parent. (Perhaps, while we're at it, we could make\nadjust_appendrel_attrs_multilevel less of an inefficient and\nunderdocumented mess.)\n\nAlso, I'm pretty sure this addition is wrong/broken:\n\n+\n+ /*\n+ * There aren't going to be more expressions to translate in\n+ * the same EC.\n+ */\n+ break;\n\nWhat makes you think that an EC can only contain one entry per rel?\n\nMore generally, as long as this patch requires changing\nadd_child_rel_equivalences' API anyway, I wonder if we should\nrethink that altogether. Looking at the code now, I realize that\nd25ea01275 resulted in horribly bastardizing that function's API,\nbecause the parent_rel and appinfo arguments are only consulted in\nsome cases, while in other cases we disregard them and rely on\nchild_rel->top_parent_relids to figure out how to translate stuff.\nIt would be possible to make the argument list be just (root, child_rel)\nand always rely on child_rel->top_parent_relids. 
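The substitution described above, walking through top_parent_relids and swapping each parent relid found in em_nullable_relids for its child relid, can be sketched with ordinary Python sets standing in for Relids bitmapsets. This is only an illustration of the set manipulation, not the C implementation; the mapping dict stands in for lookups that would really have to go through AppendRelInfos:

```python
def translate_nullable_relids(em_nullable_relids, parent_to_child):
    """For each top-parent relid present in em_nullable_relids,
    substitute the corresponding child relid; relids of unrelated
    rels pass through untouched."""
    result = set(em_nullable_relids)
    for parent, child in parent_to_child.items():
        if parent in result:
            result.discard(parent)
            result.add(child)
    return frozenset(result)

# Hypothetical numbering: parents 1 and 2 translate to children
# 101 and 102; relid 7 belongs to an unrelated rel.
mapping = {1: 101, 2: 102}

assert translate_nullable_relids({1, 7}, mapping) == frozenset({101, 7})
assert translate_nullable_relids({7}, mapping) == frozenset({7})
```

The awkward part flagged above is exactly that the backend has no ready-made equivalent of this dict for multi-member parent relid sets, which is where the suggestion to record the top parent relid in AppendRelInfo comes from.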
At the very least,\nif we keep the extra arguments, we should document them as being just\noptimizations.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Oct 2019 12:51:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Thanks for taking a look and sorry about the delay in replying.\n\nOn Fri, Oct 25, 2019 at 1:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Mon, Oct 14, 2019 at 11:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> In view of the proposed patches being dependent on some other\n> >> 13-only changes, I wonder if we should fix v12 by reverting\n> >> d25ea0127. The potential planner performance loss for large\n> >> partition sets could be nasty, but failing to plan at all is worse.\n>\n> > Actually, the patch I proposed to fix equivalence code can be applied\n> > on its own. The example I gave on that thread needs us to fix\n> > partitionwise code to even work, which is perhaps a 13-only change,\n> > but we have an example here that is broken due to d25ea01275.\n> > Perhaps, considering applying my patch seems better than alternatives\n> > which are either reverting d25ea01275 or avoiding doing partitionwise\n> > join for such queries.\n>\n> I looked at this a bit, and I see that the core idea is to generate\n> the missing EC members by applying add_child_rel_equivalences when\n> building child join rels. Perhaps we can make that work, but this\n> patch fails to, because you've ignored the comment pointing out\n> that the nullable_relids fixup logic only works for baserels:\n>\n> * And likewise for nullable_relids. Note this code assumes\n> * parent and child relids are singletons.\n>\n> We could improve that to work for joinrels, I think, but it would become\n> very much more complicated (and slower). 
AFAICS the logic would have\n> to be \"iterate through top_parent_relids, see which ones are in\n> em_nullable_relids, and for each one that is, find the corresponding\n> child_relid and substitute that in new_nullable_relids\". This is a\n> bit of a problem because we don't have any really convenient way to\n> map individual top parent relids to child relids.\n\nActually, there is adjust_child_relids_multilevel() which translates\nthe top parent relids in em_nullable_relids to child relids.\n\nI have updated the patches that way.\n\n> I wonder if we\n> should extend AppendRelInfo to include the top parent relid as well as\n> the immediate parent. (Perhaps, while we're at it, we could make\n> adjust_appendrel_attrs_multilevel less of an inefficient and\n> underdocumented mess.)\n\nHmm, I agree we should try to fix that situation somehow. Even better\nif we could do away with child expressions in ECs, because they cause\nEC related code to show up in profiles when executing complex queries\nwith thousands of partitions.\n\n> Also, I'm pretty sure this addition is wrong/broken:\n>\n> +\n> + /*\n> + * There aren't going to be more expressions to translate in\n> + * the same EC.\n> + */\n> + break;\n>\n> What makes you think that an EC can only contain one entry per rel?\n\nI was wrong about that. Fixed.\n\n> More generally, as long as this patch requires changing\n> add_child_rel_equivalences' API anyway, I wonder if we should\n> rethink that altogether. 
Looking at the code now, I realize that\n> d25ea01275 resulted in horribly bastardizing that function's API,\n> because the parent_rel and appinfo arguments are only consulted in\n> some cases, while in other cases we disregard them and rely on\n> child_rel->top_parent_relids to figure out how to translate stuff.\n> It would be possible to make the argument list be just (root, child_rel)\n> and always rely on child_rel->top_parent_relids.\n\nActually, as of 3373c71553, add_child_rel_equivalences() looks at\nparent_rel->eclass_indexes to look up ECs, so maybe we can't take out\nparent_rel.\n\n> At the very least,\n> if we keep the extra arguments, we should document them as being just\n> optimizations.\n\nFor common cases that doesn't involve multi-level partitioning, it\nreally helps to have the appinfos be supplied by the caller because\nthey're already available. I've added a comment at the top about\nthat.\n\nAttached updated patches.\n\nThanks,\nAmit", "msg_date": "Wed, 30 Oct 2019 19:03:41 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Attached updated patches.\n\n[ looks at that... ] I seriously, seriously dislike what you did\nin build_join_rel, ie adding the new joinrel to the global data\nstructures before it's fully filled in. That's inevitably going\nto bite us on the ass someday, and you couldn't even be bothered\nwith a comment.\n\nWorse, the reason you did that seems to be so that\ngenerate_join_implied_equalities can find and modify the joinrel,\nwhich is an undocumented and not very well thought out side-effect.\nThere are other call sites for that where the joinrel may or may not\nalready exist, meaning that it might or might not add more members into\nthe joinrel's eclass_indexes. 
At best that's wasted work, and at\nworst it costs efficiency, since we could in principle get different\nsets of common relids depending on which join input pair we're\nconsidering. They're equally valid sets, but unioning them is\njust going to add more relids than we need.\n\nAlso, the existing logic around eclass_indexes is that it's only\nset for baserels and we know it is valid after we've finished\nEC merging. I don't much like modifying add_child_rel_equivalences\nto have some different opinions about that for joinrels.\n\nIt'd probably be better to just eat the cost of doing\nget_common_eclass_indexes over again when it's time to do\nadd_child_rel_equivalences for a joinrel, since we aren't (I hope)\ngoing to do that more than once per joinrel anyway. This would\nprobably require refactoring things so that there are separate\nentry points to add child equivalences for base rels and join rels.\nBut that seems cleaner anyway than what you've got here.\n\nDavid --- much of the complexity here comes from the addition of\nthe eclass_indexes infrastructure, so do you have any thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Oct 2019 12:09:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "On Thu, 31 Oct 2019 at 05:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David --- much of the complexity here comes from the addition of\n> the eclass_indexes infrastructure, so do you have any thoughts?\n\nHindsight makes me think I should have mentioned in the comment for\neclass_indexes that it's only used for simple rels and remains NULL\nfor anything else.\n\nAll the code in equivclass.c either uses get_common_eclass_indexes()\nand get_eclass_indexes_for_relids() which go down to the simple rel\nlevel to obtain their eclass_indexes. 
When calling\nget_eclass_indexes_for_relids() we'll build a union Bitmapset with the\nindexes from each simple rel that the join rel is made from. We only\never directly use the eclass_indexes field when we're certain we're\ndealing with a simple rel already. get_eclass_indexes_for_relids()\nwould do the same job, but using the field directly saves a bit of\nneedless effort and memory allocations. So, in short, I don't really\nsee why we need to set eclass_indexes for anything other than simple\nrels.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Thu, 31 Oct 2019 10:47:11 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Thanks for checking.\n\nOn Thu, Oct 31, 2019 at 1:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Also, the existing logic around eclass_indexes is that it's only\n> set for baserels and we know it is valid after we've finished\n> EC merging. I don't much like modifying add_child_rel_equivalences\n> to have some different opinions about that for joinrels.\n>\n> It'd probably be better to just eat the cost of doing\n> get_common_eclass_indexes over again when it's time to do\n> add_child_rel_equivalences for a joinrel, since we aren't (I hope)\n> going to do that more than once per joinrel anyway.\n\nIf you mean once per child joinrel, then yes. Implemented this\napproach in the attached updated patch.\n\nISTM, get_common_eclass_indexes() returns the same value for all the\nchild joinrels (or really for its outer and inner component rels) as\nit does for the parent joinrel. So, it might be better to set\neclass_indexes in parent joinrel once and use the same value for all\nits descendant joinrels. 
Although, we'd need to export\nget_common_eclass_indexes() out of equivclass.c to call it from\nbuild_join_rel() such that it doesn't require messing with where the\njoinrel is added to the global data structure. Maybe that complicates\neclass_indexes infrastructure though.\n\n> This would\n> probably require refactoring things so that there are separate\n> entry points to add child equivalences for base rels and join rels.\n> But that seems cleaner anyway than what you've got here.\n\nSeparate entry points sounds better, but only in HEAD? Should we have\nseparate entry points in PG 12 too?\n\nAttached updated patch only for HEAD.\n\nThanks,\nAmit", "msg_date": "Thu, 31 Oct 2019 15:45:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n>> This would\n>> probably require refactoring things so that there are separate\n>> entry points to add child equivalences for base rels and join rels.\n>> But that seems cleaner anyway than what you've got here.\n\n> Separate entry points sounds better, but only in HEAD?\n\nI had actually had in mind that we might have two wrappers around a\ncommon search-and-replace routine, but after studying the code I see that\nthere's just enough differences to make it probably not worth the trouble\nto do it like that. I did spend a bit of time removing cosmetic\ndifferences between the two versions, though, mostly by creating\ncommon local variables.\n\nI think the way you did the matching_ecs computation is actually wrong:\nwe need to find ECs that reference any rel of the join, not only those\nthat reference both inputs. 
If nothing else, the way you have it here\nmakes the results dependent on which pair of input rels gets considered\nfirst, and that's certainly bad for multiway joins.\n\nAlso, I thought we should try to put more conditions on whether we\ninvoke add_child_join_rel_equivalences at all. In the attached I did\n\n if ((enable_partitionwise_join || enable_partitionwise_aggregate) &&\n (joinrel->has_eclass_joins ||\n has_useful_pathkeys(root, parent_joinrel)))\n\nbut I wonder if we could do more, like checking to see if the parent\njoinrel is partitioned. Alternatively, maybe it's unnecessary because\nwe won't try to build child joinrels unless these conditions are true?\n\nI did not like the test case either. Creating a whole new (and rather\nlarge) test table just for this one case is unreasonably expensive ---\nit about doubles the runtime of the equivclass test for me. There's\nalready a perfectly good test table in partition_join.sql, which seems\nlike a more natural home for this anyhow. After a bit of finagling\nI was able to adapt the test query to fail on that table.\n\nPatch v4 attached. 
I've not looked at what we need to do to make this\nwork in v12.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 02 Nov 2019 15:43:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "On Sun, Nov 3, 2019 at 4:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> >> This would\n> >> probably require refactoring things so that there are separate\n> >> entry points to add child equivalences for base rels and join rels.\n> >> But that seems cleaner anyway than what you've got here.\n>\n> > Separate entry points sounds better, but only in HEAD?\n>\n> I had actually had in mind that we might have two wrappers around a\n> common search-and-replace routine, but after studying the code I see that\n> there's just enough differences to make it probably not worth the trouble\n> to do it like that. I did spend a bit of time removing cosmetic\n> differences between the two versions, though, mostly by creating\n> common local variables.\n\nAgree that having two totally separate routines is better.\n\n> I think the way you did the matching_ecs computation is actually wrong:\n> we need to find ECs that reference any rel of the join, not only those\n> that reference both inputs. If nothing else, the way you have it here\n> makes the results dependent on which pair of input rels gets considered\n> first, and that's certainly bad for multiway joins.\n\nI'm not sure I fully understand the problems, but maybe you're right.\n\n> Also, I thought we should try to put more conditions on whether we\n> invoke add_child_join_rel_equivalences at all. In the attached I did\n>\n> if ((enable_partitionwise_join || enable_partitionwise_aggregate) &&\n> (joinrel->has_eclass_joins ||\n> has_useful_pathkeys(root, parent_joinrel)))\n>\n> but I wonder if we could do more, like checking to see if the parent\n> joinrel is partitioned. 
Alternatively, maybe it's unnecessary because
> we won't try to build child joinrels unless these conditions are true?

Actually, I think we can assert in add_child_rel_equivalences() that
enable_partitionwise_join is true. Then checking
enable_partitionwise_aggregate is unnecessary.

> I did not like the test case either. Creating a whole new (and rather
> large) test table just for this one case is unreasonably expensive ---
> it about doubles the runtime of the equivclass test for me. There's
> already a perfectly good test table in partition_join.sql, which seems
> like a more natural home for this anyhow. After a bit of finagling
> I was able to adapt the test query to fail on that table.

That's great. I tried but I can only finagle so much when it comes to
twisting around plan shapes to what I need. :)

> Patch v4 attached. I've not looked at what we need to do to make this
> work in v12.

Thanks a lot for the revised patch.

Maybe the only difference between HEAD and v12 is that the former has
eclass_indexes infrastructure, whereas the latter doesn't? I have
attached a version of your patch adapted for v12.

Also, looking at this in the patched code:

+ /*
+ * We may ignore expressions that reference a single baserel,
+ * because add_child_rel_equivalences should have handled them.
+ */
+ if (bms_membership(cur_em->em_relids) != BMS_MULTIPLE)
+ continue;

I have been thinking maybe add_child_rel_equivalences() doesn't need
to translate EC members that reference multiple appendrels, because
there, top_parent_relids is always a singleton set, whereas em_relids
of such expressions is not? Those half-translated expressions are
useless, only adding to the overhead of scanning ec_members. I'm
I'm\nthinking that we should apply this diff:\n\ndiff --git a/src/backend/optimizer/path/equivclass.c\nb/src/backend/optimizer/path/equivclass.c\nindex e8e9e9a314..d4d80c8101 100644\n--- a/src/backend/optimizer/path/equivclass.c\n+++ b/src/backend/optimizer/path/equivclass.c\n@@ -2169,7 +2169,7 @@ add_child_rel_equivalences(PlannerInfo *root,\n continue; /* ignore children here */\n\n /* Does this member reference child's topmost parent rel? */\n- if (bms_overlap(cur_em->em_relids, top_parent_relids))\n+ if (bms_is_subset(cur_em->em_relids, top_parent_relids))\n {\n /* Yes, generate transformed child version */\n Expr *child_expr;\n\nThoughts?\n\nThanks,\nAmit", "msg_date": "Tue, 5 Nov 2019 14:36:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Sun, Nov 3, 2019 at 4:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Also, I thought we should try to put more conditions on whether we\n>> invoke add_child_join_rel_equivalences at all. In the attached I did\n>> if ((enable_partitionwise_join || enable_partitionwise_aggregate) &&\n>> (joinrel->has_eclass_joins ||\n>> has_useful_pathkeys(root, parent_joinrel)))\n>> but I wonder if we could do more, like checking to see if the parent\n>> joinrel is partitioned. Alternatively, maybe it's unnecessary because\n>> we won't try to build child joinrels unless these conditions are true?\n\n> Actually, I think we can assert in add_child_rel_equivalences() that\n> enable_partitionwise_join is true. Then checking\n> enable_partitionwise_aggregate is unnecessary.\n\nAfter tracing the call sites back a bit further, I agree that we won't\nbe here in the first place unless enable_partitionwise_join is true,\nso the extra tests I proposed are unnecessary. 
I took them out again.\n\n> I have been thinking maybe add_child_rel_equivalences() doesn't need\n> to translate EC members that reference multiple appendrels, because\n> there top_parent_relids is always a singleton set, whereas em_relids\n> of such expressions is not? Those half-translated expressions are\n> useless, only adding to the overhead of scanning ec_members. I'm\n> thinking that we should apply this diff:\n> - if (bms_overlap(cur_em->em_relids, top_parent_relids))\n> + if (bms_is_subset(cur_em->em_relids, top_parent_relids))\n\nMeh, I'm not really convinced. The case where this would be relevant\nis an EC generated from something like \"WHERE (a.x + b.y) = c.z\"\nwhere \"a\" is partitioned. It's possible that we'd never have a use\nfor a sort key corresponding to \"a_child.x + b.y\", but I think that's\nnot obvious, and probably short-sighted. Anyway such EC members are\npretty rare in the first place, so we're not going to win much\nperformance by trying to optimize them.\n\nAnyway, I've pushed the fix for Justin's problem to v12 and HEAD.\nThe problem with poor planning of multiway joins that you mentioned\nin the other thread remains open, but I imagine the patches you\nposted there are going to need rebasing over this commit, so I\nset that CF entry to Waiting On Author.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Nov 2019 11:51:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" }, { "msg_contents": "On Wed, Nov 6, 2019 at 1:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > I have been thinking maybe add_child_rel_equivalences() doesn't need\n> > to translate EC members that reference multiple appendrels, because\n> > there top_parent_relids is always a singleton set, whereas em_relids\n> > of such expressions is not? 
Those half-translated expressions are\n> > useless, only adding to the overhead of scanning ec_members. I'm\n> > thinking that we should apply this diff:\n> > - if (bms_overlap(cur_em->em_relids, top_parent_relids))\n> > + if (bms_is_subset(cur_em->em_relids, top_parent_relids))\n>\n> Meh, I'm not really convinced. The case where this would be relevant\n> is an EC generated from something like \"WHERE (a.x + b.y) = c.z\"\n> where \"a\" is partitioned. It's possible that we'd never have a use\n> for a sort key corresponding to \"a_child.x + b.y\", but I think that's\n> not obvious, and probably short-sighted. Anyway such EC members are\n> pretty rare in the first place, so we're not going to win much\n> performance by trying to optimize them.\n\nOK.\n\n> Anyway, I've pushed the fix for Justin's problem to v12 and HEAD.\n> The problem with poor planning of multiway joins that you mentioned\n> in the other thread remains open, but I imagine the patches you\n> posted there are going to need rebasing over this commit, so I\n> set that CF entry to Waiting On Author.\n\nThank you. I will send rebased patches on that thread.\n\nRegards,\nAmit\n\n\n", "msg_date": "Wed, 6 Nov 2019 10:31:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: ERROR: could not find pathkey item to sort" } ]
[ { "msg_contents": "I have some JS middleware that needs to securely connect to the \npostgresql back end.  Any number of different users may connect via \nwebsocket to this middleware to manage their connection to the \ndatabase.  I want the JS process to have a client certificate \nauthorizing it to connect to the database.\n\nI have this line in my pg_hba.conf:\n\nhostssl        all    +users        all        cert\n\nSo the idea is, I should be able to connect as any user that is a member \nof the role \"users.\"\n\nUnder this configuration, I can currently connect as the user \"users\" \nbut not as \"joe\" who is a member of the role \"users.\"  I get:\n\nFATAL:  certificate authentication failed for user \"joe\"\n\nThis makes sense as the commonName on the certificate is \"users\" and not \n\"joe.\"  But the documentation for pg_hba.conf states that prefixing the \nusername with a \"+\" should allow me to connect as any role who is a \nmember of the stated role.\n\nIs there a way to do this via client certificate authorization?  I have \nno way of knowing the specific usernames ahead of time, as new users may \nbe created in the database (thousands) and I can't really be creating \nseparate certificates for every different user.\n\n\n\n", "msg_date": "Fri, 11 Oct 2019 11:58:50 -0600", "msg_from": "Kyle Bateman <kyle@batemans.org>", "msg_from_op": true, "msg_subject": "Connect as multiple users using single client certificate" }, { "msg_contents": "\nOn 10/11/19 1:58 PM, Kyle Bateman wrote:\n> I have some JS middleware that needs to securely connect to the\n> postgresql back end.  Any number of different users may connect via\n> websocket to this middleware to manage their connection to the\n> database.  
I want the JS process to have a client certificate\n> authorizing it to connect to the database.\n>\n> I have this line in my pg_hba.conf:\n>\n> hostssl        all    +users        all        cert\n>\n> So the idea is, I should be able to connect as any user that is a\n> member of the role \"users.\"\n>\n> Under this configuration, I can currently connect as the user \"users\"\n> but not as \"joe\" who is a member of the role \"users.\"  I get:\n>\n> FATAL:  certificate authentication failed for user \"joe\"\n>\n> This makes sense as the commonName on the certificate is \"users\" and\n> not \"joe.\"  But the documentation for pg_hba.conf states that\n> prefixing the username with a \"+\" should allow me to connect as any\n> role who is a member of the stated role.\n>\n> Is there a way to do this via client certificate authorization?  I\n> have no way of knowing the specific usernames ahead of time, as new\n> users may be created in the database (thousands) and I can't really be\n> creating separate certificates for every different user.\n>\n>\n\n\nI think the short answer is: No. The client certificate should match the\nusername and nothing else. If you don't want to generate certificates\nfor all your users I suggest using some other form of auth (e.g.\nscram-sha-256).\n\n\nThe long answer is that you can use maps, but it's probably not a good\nidea. e.g. you have a map allowing foo to connect as both bar and baz,\nand give both bar and baz a certificate with a CN of foo. 
But then bar\ncan connect as baz and vice versa, which isn't a good thing.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 11 Oct 2019 14:12:12 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Connect as multiple users using single client certificate" }, { "msg_contents": "On 10/11/19 12:12 PM, Andrew Dunstan wrote:\n> On 10/11/19 1:58 PM, Kyle Bateman wrote:\n>> I have some JS middleware that needs to securely connect to the\n>> postgresql back end.  Any number of different users may connect via\n>> websocket to this middleware to manage their connection to the\n>> database.  I want the JS process to have a client certificate\n>> authorizing it to connect to the database.\n>>\n>> I have this line in my pg_hba.conf:\n>>\n>> hostssl        all    +users        all        cert\n>>\n>> So the idea is, I should be able to connect as any user that is a\n>> member of the role \"users.\"\n>>\n>> Under this configuration, I can currently connect as the user \"users\"\n>> but not as \"joe\" who is a member of the role \"users.\"  I get:\n>>\n>> FATAL:  certificate authentication failed for user \"joe\"\n>>\n>> This makes sense as the commonName on the certificate is \"users\" and\n>> not \"joe.\"  But the documentation for pg_hba.conf states that\n>> prefixing the username with a \"+\" should allow me to connect as any\n>> role who is a member of the stated role.\n>>\n>> Is there a way to do this via client certificate authorization?  I\n>> have no way of knowing the specific usernames ahead of time, as new\n>> users may be created in the database (thousands) and I can't really be\n>> creating separate certificates for every different user.\n>>\n>>\n>\n> I think the short answer is: No. The client certificate should match the\n> username and nothing else. 
If you don't want to generate certificates\n> for all your users I suggest using some other form of auth (e.g.\n> scram-sha-256).\n>\n>\n> The long answer is that you can use maps, but it's probably not a good\n> idea. e.g. you have a map allowing foo to connect as both bar and baz,\n> and give both bar and baz a certificate with a CN of foo. But then bar\n> can connect as baz and vice versa, which isn't a good thing.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\nHmmm, too bad.  It would be nice to be able to generate a certificate, \nsay with a commonName of \"+users\" (or some other setting) which matches \nwhat is specified in pg_hba.conf, allowing connections from anyone \nwithin the specified group.  Seems like that is the intent of the \"+\" \nsyntax in the first place.\n\nIn my case, the middleware is validating end-users using distributed \nkeys, so no username/passwords are needed.  I was hoping to avoid all \nthat and just rely on SSL.\n\nAny idea if this is a viable feature enhancement?\n\nKyle\n\n\nKyle\n\n\n\n", "msg_date": "Fri, 11 Oct 2019 12:46:32 -0600", "msg_from": "Kyle Bateman <kyle@batemans.org>", "msg_from_op": true, "msg_subject": "Re: Connect as multiple users using single client certificate" }, { "msg_contents": "Kyle Bateman <kyle@batemans.org> writes:\n> On 10/11/19 12:12 PM, Andrew Dunstan wrote:\n>> I think the short answer is: No. The client certificate should match the\n>> username and nothing else. If you don't want to generate certificates\n>> for all your users I suggest using some other form of auth (e.g.\n>> scram-sha-256).\n>> The long answer is that you can use maps, but it's probably not a good\n>> idea. e.g. you have a map allowing foo to connect as both bar and baz,\n>> and give both bar and baz a certificate with a CN of foo. But then bar\n>> can connect as baz and vice versa, which isn't a good thing.\n\n> Hmmm, too bad.  
It would be nice to be able to generate a certificate, \n> say with a commonName of \"+users\" (or some other setting) which matches \n> what is specified in pg_hba.conf, allowing connections from anyone \n> within the specified group.  Seems like that is the intent of the \"+\" \n> syntax in the first place.\n\nNo, it's not. The point of the +syntax is to let a collection of users\nlog in without having to adjust pg_hba.conf anytime you add a new user.\nIt's not meant to bypass the requirement that the users authenticate\nproperly. Would you expect that if you used +users with a password-\nbased auth method, then all the users would have the same password?\n\n> In my case, the middleware is validating end-users using distributed \n> keys, so no username/passwords are needed.  I was hoping to avoid all \n> that and just rely on SSL.\n> Any idea if this is a viable feature enhancement?\n\nI agree with Andrew that that's just silly. If you give all your users\nthe same cert then any of them can masquerade as any other. You might\nas well just tell them to share the same login id.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Oct 2019 15:05:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Connect as multiple users using single client certificate" }, { "msg_contents": "On 10/11/19 1:05 PM, Tom Lane wrote:\n> Kyle Bateman <kyle@batemans.org> writes:\n>> On 10/11/19 12:12 PM, Andrew Dunstan wrote:\n>>> I think the short answer is: No. The client certificate should match the\n>>> username and nothing else. If you don't want to generate certificates\n>>> for all your users I suggest using some other form of auth (e.g.\n>>> scram-sha-256).\n>>> The long answer is that you can use maps, but it's probably not a good\n>>> idea. e.g. you have a map allowing foo to connect as both bar and baz,\n>>> and give both bar and baz a certificate with a CN of foo. 
But then bar\n>>> can connect as baz and vice versa, which isn't a good thing.\n>> Hmmm, too bad.  It would be nice to be able to generate a certificate,\n>> say with a commonName of \"+users\" (or some other setting) which matches\n>> what is specified in pg_hba.conf, allowing connections from anyone\n>> within the specified group.  Seems like that is the intent of the \"+\"\n>> syntax in the first place.\n> No, it's not. The point of the +syntax is to let a collection of users\n> log in without having to adjust pg_hba.conf anytime you add a new user.\n> It's not meant to bypass the requirement that the users authenticate\n> properly. Would you expect that if you used +users with a password-\n> based auth method, then all the users would have the same password?\n>\n>> In my case, the middleware is validating end-users using distributed\n>> keys, so no username/passwords are needed.  I was hoping to avoid all\n>> that and just rely on SSL.\n>> Any idea if this is a viable feature enhancement?\n> I agree with Andrew that that's just silly. If you give all your users\n> the same cert then any of them can masquerade as any other. You might\n> as well just tell them to share the same login id.\nIn my implementation, I'm not giving the cert to all my users.  I'm only \ngiving it to the middleware server that manages connections.\n\nWhat I hope to accomplish is: Establish a secure, encrypted connection \nto Postgresql from a trusted process, possibly running on another \nmachine, whom I trust to tell me which user (within a limited set, \ndefined by a role) it would like to connect as.  That process does it's \nown robust authentication of users before letting them through to the \ndatabase by the username they claim.  
However, it is still useful to \nconnect as different users because my views and functions operate \ndifferently depending on which user is on the other end of the connection.\n\nIs there a way I can accomplish this using the existing authentication \nmethods (other than trust)?\n\n\n\n", "msg_date": "Fri, 11 Oct 2019 13:28:45 -0600", "msg_from": "Kyle Bateman <kyle@batemans.org>", "msg_from_op": true, "msg_subject": "Re: Connect as multiple users using single client certificate" }, { "msg_contents": "Kyle Bateman <kyle@batemans.org> writes:\n> On 10/11/19 1:05 PM, Tom Lane wrote:\n>> I agree with Andrew that that's just silly. If you give all your users\n>> the same cert then any of them can masquerade as any other. You might\n>> as well just tell them to share the same login id.\n\n> In my implementation, I'm not giving the cert to all my users.  I'm only \n> giving it to the middleware server that manages connections.\n\n> What I hope to accomplish is: Establish a secure, encrypted connection \n> to Postgresql from a trusted process, possibly running on another \n> machine, whom I trust to tell me which user (within a limited set, \n> defined by a role) it would like to connect as.  That process does it's \n> own robust authentication of users before letting them through to the \n> database by the username they claim.  However, it is still useful to \n> connect as different users because my views and functions operate \n> differently depending on which user is on the other end of the connection.\n\nWell, you can do that, it's just not cert authentication.\n\nWhat you might consider is (1) set up an ssl_ca_file, so that the\nserver only believes client certs traceable to that CA, and (2) require\nSSL connections (use \"hostssl\" entries in pg_hba.conf). Then you\nexpect that possession of a cert issued by your CA is enough to\nauthorize connections to the database. 
But don't use the cert\nauth method --- based on what you said here, you might even just\nuse \"trust\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Oct 2019 15:48:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Connect as multiple users using single client certificate" }, { "msg_contents": "Greetings,\n\n* Kyle Bateman (kyle@batemans.org) wrote:\n> What I hope to accomplish is: Establish a secure, encrypted connection to\n> Postgresql from a trusted process, possibly running on another machine, whom\n> I trust to tell me which user (within a limited set, defined by a role) it\n> would like to connect as.  That process does it's own robust authentication\n> of users before letting them through to the database by the username they\n> claim.  However, it is still useful to connect as different users because my\n> views and functions operate differently depending on which user is on the\n> other end of the connection.\n> \n> Is there a way I can accomplish this using the existing authentication\n> methods (other than trust)?\n\nHave you considered just having a regular client-side cert for the\nmiddleware that logs in as a common user to the PG database, and then\nperforms a SET ROLE to whichever user the middleware has authenticated\nthe user as? That seems to match pretty closely what you're looking for\nand has the advantage that it'll also allow you to work through\nconnection poolers.\n\nThanks,\n\nStephen", "msg_date": "Wed, 16 Oct 2019 18:53:04 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Connect as multiple users using single client certificate" } ]
[ { "msg_contents": "I'm not sure why we have that index, and my script probably should have known\nto choose a better one to cluster on, but still..\n\nts=# CLUSTER huawei_m2000_config_enodebcell_enodeb USING huawei_m2000_config_enodebcell_enodeb_coalesce_idx ;\nDEBUG: 00000: building index \"pg_toast_1840151315_index\" on table \"pg_toast_1840151315\" serially\nLOCATION: index_build, index.c:2791\nDEBUG: 00000: clustering \"public.huawei_m2000_config_enodebcell_enodeb\" using sequential scan and sort\nLOCATION: copy_table_data, cluster.c:907\nERROR: XX000: trying to store a heap tuple into wrong type of slot\nLOCATION: ExecStoreHeapTuple, execTuples.c:1328\n\nts=# \\dt+ huawei_m2000_config_enodebcell_enodeb\n public | huawei_m2000_config_enodebcell_enodeb | table | telsasoft | 3480 kB | \n\nts=# \\d+ huawei_m2000_config_enodebcell_enodeb_coalesce_idx\n Index \"public.huawei_m2000_config_enodebcell_enodeb_coalesce_idx\"\n Column | Type | Key? | Definition | Storage | Stats target \n----------+------+------+---------------------------------------+----------+--------------\n coalesce | text | yes | COALESCE(enodebfunctionname, ne_name) | extended | \nbtree, for table \"public.huawei_m2000_config_enodebcell_enodeb\"\n\nts=# \\d+ huawei_m2000_config_enodebcell_enodeb\n...\nIndexes:\n \"huawei_m2000_config_enodebcell_enodeb_unique_idx\" UNIQUE, btree (ne_name, tsoft_fake_key, device_name)\n \"huawei_m2000_config_enodebcell_enodeb_cellid_idx\" btree (cellid) CLUSTER\n \"huawei_m2000_config_enodebcell_enodeb_coalesce_cellid_idx\" btree (COALESCE(enodebfunctionname, ne_name), cellid)\n \"huawei_m2000_config_enodebcell_enodeb_coalesce_idx\" btree (COALESCE(enodebfunctionname, ne_name))\nStatistics objects:\n \"public\".\"huawei_m2000_config_enodebcell_enodeb\" (ndistinct) ON ne_name, tsoft_fake_key, device_name FROM huawei_m2000_config_enodebcell_\nenodeb\nAccess method: heap\n\n(gdb) bt\n#0 errfinish (dummy=dummy@entry=0) at elog.c:411\n#1 0x000000000087a959 in 
elog_finish (elevel=elevel@entry=20,\n fmt=fmt@entry=0x9c4d70 \"trying to store a heap tuple into wrong type of slot\") at elog.c:1365\n#2 0x000000000061eea8 in ExecStoreHeapTuple (tuple=tuple@entry=0x1e06950, slot=slot@entry=0x1e05080, shouldFree=shouldFree@entry=false)\n at execTuples.c:1328\n#3 0x00000000008a7a06 in comparetup_cluster (a=<optimized out>, b=<optimized out>, state=0x1e04940) at tuplesort.c:3795\n#4 0x00000000008a5895 in qsort_tuple (a=0x2254b08, n=7699, cmp_tuple=0x8a7960 <comparetup_cluster>, state=state@entry=0x1e04940)\n at qsort_tuple.c:112\n#5 0x00000000008a98bb in tuplesort_sort_memtuples (state=state@entry=0x1e04940) at tuplesort.c:3320\n#6 0x00000000008ab434 in tuplesort_performsort (state=state@entry=0x1e04940) at tuplesort.c:1811\n#7 0x00000000004c9404 in heapam_relation_copy_for_cluster (OldHeap=0x7f21606695d8, NewHeap=0x7f2160585048, OldIndex=<optimized out>,\n use_sort=<optimized out>, OldestXmin=288843233, xid_cutoff=<optimized out>, multi_cutoff=0x7ffc05e6ba04, num_tuples=0x7ffc05e6ba08,\n tups_vacuumed=0x7ffc05e6ba10, tups_recently_dead=0x7ffc05e6ba18) at heapam_handler.c:944\n#8 0x000000000059cf07 in table_relation_copy_for_cluster (tups_recently_dead=0x7ffc05e6ba18, tups_vacuumed=0x7ffc05e6ba10,\n num_tuples=0x7ffc05e6ba08, multi_cutoff=0x7ffc05e6ba04, xid_cutoff=0x7ffc05e6ba00, OldestXmin=<optimized out>, use_sort=true,\n OldIndex=0x7f2160585f38, NewTable=0x7f2160585048, OldTable=0x7f21606695d8) at ../../../src/include/access/tableam.h:1410\n#9 copy_table_data (pCutoffMulti=<synthetic pointer>, pFreezeXid=<synthetic pointer>, pSwapToastByContent=<synthetic pointer>,\n verbose=<optimized out>, OIDOldIndex=13, OIDOldHeap=1499600032, OIDNewHeap=1840150111) at cluster.c:920\n#10 rebuild_relation (verbose=<optimized out>, indexOid=13, OldHeap=<optimized out>) at cluster.c:616\n#11 cluster_rel (tableOid=tableOid@entry=1499600032, indexOid=indexOid@entry=3081287757, options=<optimized out>) at cluster.c:429\n#12 0x000000000059d35e in 
cluster (stmt=stmt@entry=0x1d051f8, isTopLevel=isTopLevel@entry=true) at cluster.c:186\n#13 0x000000000076547f in standard_ProcessUtility (pstmt=pstmt@entry=0x1d05518,\n queryString=queryString@entry=0x1d046e0 \"CLUSTER huawei_m2000_config_enodebcell_enodeb USING huawei_m2000_config_enodebcell_enodeb_coalesce_idx ;\", context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x1d055f8,\n completionTag=completionTag@entry=0x7ffc05e6c0a0 \"\") at utility.c:659\n#14 0x00007f21517204ef in pgss_ProcessUtility (pstmt=0x1d05518,\n queryString=0x1d046e0 \"CLUSTER huawei_m2000_config_enodebcell_enodeb USING huawei_m2000_config_enodebcell_enodeb_coalesce_idx ;\",\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1d055f8, completionTag=0x7ffc05e6c0a0 \"\")\n at pg_stat_statements.c:1006\n#15 0x0000000000762816 in PortalRunUtility (portal=0x1d7a4e0, pstmt=0x1d05518, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>,\n dest=0x1d055f8, completionTag=0x7ffc05e6c0a0 \"\") at pquery.c:1175\n#16 0x0000000000763267 in PortalRunMulti (portal=portal@entry=0x1d7a4e0, isTopLevel=isTopLevel@entry=true,\n setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x1d055f8, altdest=altdest@entry=0x1d055f8,\n completionTag=completionTag@entry=0x7ffc05e6c0a0 \"\") at pquery.c:1328\n---Type <return> to continue, or q <return> to quit---\n#17 0x0000000000763e45 in PortalRun (portal=<optimized out>, count=9223372036854775807, isTopLevel=<optimized out>,\n run_once=<optimized out>, dest=0x1d055f8, altdest=0x1d055f8, completionTag=0x7ffc05e6c0a0 \"\") at pquery.c:796\n#18 0x000000000075ff45 in exec_simple_query (query_string=<optimized out>) at postgres.c:1215\n#19 0x0000000000761212 in PostgresMain (argc=<optimized out>, argv=<optimized out>, dbname=<optimized out>, username=<optimized out>)\n at postgres.c:4236\n#20 0x0000000000483d02 in BackendRun (port=<optimized out>, port=<optimized out>) at 
postmaster.c:4431\n#21 BackendStartup (port=0x1d2b340) at postmaster.c:4122\n#22 ServerLoop () at postmaster.c:1704\n#23 0x00000000006f0b1f in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1cff280) at postmaster.c:1377\n#24 0x0000000000484c93 in main (argc=3, argv=0x1cff280) at main.c:228\n(gdb)\n\n#2 0x000000000061eea8 in ExecStoreHeapTuple (tuple=tuple@entry=0x1e06950, slot=slot@entry=0x1e05080, shouldFree=shouldFree@entry=false)\n at execTuples.c:1328\n tuple = 0x1e06950\n slot = 0x1e05080\n shouldFree = 64\n#3 0x00000000008a7a06 in comparetup_cluster (a=<optimized out>, b=<optimized out>, state=0x1e04940) at tuplesort.c:3795\n l_index_values = {140720407492095, 139781285162339, 206158430232, 8920320, 20, 140720407492304, 0, 0, 31, 4294967295, 0, \n 139781288589152, 22, 540258, 139781288589152, 1, 13947976, 7541298, 139774490219072, 139774490219072, 31371296, 1, 433, 7550010, \n 2040109468721, 18435224211311474416, 0, 0, 140720407492471, 112, 140720407492335, 139779728308152}\n r_index_values = {70729521432191, 1840042467, 433, 2097168, 139781288589240, 139781285165739, 2097152, 17520358731135336960, 32768, \n 139781327984088, 0, 433, 433, 139778931751520, 31578680, 7552416, 140720407492471, 540257, 139774490219008, 7547611, \n 72197375326517088, 17520358731135336960, 0, 19, 31578680, 4970402, 31360768, 98368, 0, 540258, 568, 540258}\n ecxt_scantuple = 0x1e05080\n l_index_isnull = {false <repeats 16 times>, 48, false, false, false, 91, false, false, false, 86, 101, 224, 93, 33, 127, false, \n false}\n r_index_isnull = {96, 119, 13, 94, 33, 127, false, false, 64, false, 3, false, false, false, false, false, 192, 74, 37, 2, false, \n false, false, false, 89, 155, 217, 93, 33, 127, false, false}\n sortKey = 0x1e05490\n ltup = 0x1e06950\n rtup = 0x1e06b60\n tupDesc = <optimized out>\n nkey = 0\n compare = <optimized out>\n\n datum1 = <optimized out>\n datum2 = <optimized out>\n isnull1 = false\n isnull2 = false\n leading = <optimized out>\n#4 
0x00000000008a5895 in qsort_tuple (a=0x2254b08, n=7699, cmp_tuple=0x8a7960 <comparetup_cluster>, state=state@entry=0x1e04940)\n at qsort_tuple.c:112\n pa = <optimized out>\n pb = <optimized out>\n pc = <optimized out>\n pd = <optimized out>\n pl = <optimized out>\n pm = 0x2254b20\n pn = <optimized out>\n d1 = <optimized out>\n d2 = <optimized out>\n r = <optimized out>\n presorted = 1\n#5 0x00000000008a98bb in tuplesort_sort_memtuples (state=state@entry=0x1e04940) at tuplesort.c:3320\nNo locals.\n#6 0x00000000008ab434 in tuplesort_performsort (state=state@entry=0x1e04940) at tuplesort.c:1811\n oldcontext = 0x1de8700\n __func__ = \"tuplesort_performsort\"\n\n#7 0x00000000004c9404 in heapam_relation_copy_for_cluster (OldHeap=0x7f21606695d8, NewHeap=0x7f2160585048, OldIndex=<optimized out>,\n use_sort=<optimized out>, OldestXmin=288843233, xid_cutoff=<optimized out>, multi_cutoff=0x7ffc05e6ba04, num_tuples=0x7ffc05e6ba08,\n tups_vacuumed=0x7ffc05e6ba10, tups_recently_dead=0x7ffc05e6ba18) at heapam_handler.c:944\n n_tuples = 0\n rwstate = 0x1dfe910\n indexScan = 0x0\n tableScan = 0x1e1da38\n heapScan = 0x1e1da38\n use_wal = <optimized out>\n is_system_catalog = false\n tuplesort = <optimized out>\n\n oldTupDesc = <optimized out>\n newTupDesc = <optimized out>\n slot = 0x1e1de48\n values = 0x1e1c848\n isnull = 0x1e1d9e8\n hslot = 0x1e1de48\n __func__ = \"heapam_relation_copy_for_cluster\"\n#8 0x000000000059cf07 in table_relation_copy_for_cluster (tups_recently_dead=0x7ffc05e6ba18, tups_vacuumed=0x7ffc05e6ba10, \n num_tuples=0x7ffc05e6ba08, multi_cutoff=0x7ffc05e6ba04, xid_cutoff=0x7ffc05e6ba00, OldestXmin=<optimized out>, use_sort=true, \n OldIndex=0x7f2160585f38, NewTable=0x7f2160585048, OldTable=0x7f21606695d8) at ../../../src/include/access/tableam.h:1410\nNo locals.\n#9 copy_table_data (pCutoffMulti=<synthetic pointer>, pFreezeXid=<synthetic pointer>, pSwapToastByContent=<synthetic pointer>, \n verbose=<optimized out>, OIDOldIndex=13, OIDOldHeap=1499600032, 
OIDNewHeap=1840150111) at cluster.c:920\n relRelation = <optimized out>\n relform = <optimized out>\n newTupDesc = <optimized out>\n MultiXactCutoff = 22262\n num_tuples = 7699\n tups_recently_dead = 0\n ru0 = {tv = {tv_sec = 1570827033, tv_usec = 106788}, ru = {ru_utime = {tv_sec = 0, tv_usec = 339198}, ru_stime = {tv_sec = 0, \n tv_usec = 77498}, {ru_maxrss = 111588, __ru_maxrss_word = 111588}, {ru_ixrss = 0, __ru_ixrss_word = 0}, {ru_idrss = 0, \n __ru_idrss_word = 0}, {ru_isrss = 0, __ru_isrss_word = 0}, {ru_minflt = 32814, __ru_minflt_word = 32814}, {ru_majflt = 0, \n __ru_majflt_word = 0}, {ru_nswap = 0, __ru_nswap_word = 0}, {ru_inblock = 0, __ru_inblock_word = 0}, {ru_oublock = 296, \n __ru_oublock_word = 296}, {ru_msgsnd = 0, __ru_msgsnd_word = 0}, {ru_msgrcv = 0, __ru_msgrcv_word = 0}, {ru_nsignals = 0, \n __ru_nsignals_word = 0}, {ru_nvcsw = 41, __ru_nvcsw_word = 41}, {ru_nivcsw = 46, __ru_nivcsw_word = 46}}}\n oldTupDesc = <optimized out>\n OldHeap = 0x7f21606695d8\n reltup = <optimized out>\n use_sort = <optimized out>\n tups_vacuumed = 0\n num_pages = <optimized out>\n NewHeap = 0x7f2160585048\n OldIndex = 0x7f2160585f38\n OldestXmin = 288843233\n\n FreezeXid = 288843233\n elevel = 13\n#10 rebuild_relation (verbose=<optimized out>, indexOid=13, OldHeap=<optimized out>) at cluster.c:616\n tableOid = 1499600032\n tableSpace = <optimized out>\n OIDNewHeap = 1840150111\n relpersistence = 112 'p'\n swap_toast_by_content = true\n frozenXid = <optimized out>\n is_system_catalog = false\n cutoffMulti = <optimized out>\n#11 cluster_rel (tableOid=tableOid@entry=1499600032, indexOid=indexOid@entry=3081287757, options=<optimized out>) at cluster.c:429\n OldHeap = <optimized out>\n verbose = <optimized out>\n recheck = <optimized out>\n __func__ = \"cluster_rel\"\n#12 0x000000000059d35e in cluster (stmt=stmt@entry=0x1d051f8, isTopLevel=isTopLevel@entry=true) at cluster.c:186\n tableOid = 1499600032\n indexOid = 3081287757\n rel = 0x7f21606695d8\n __func__ = 
\"cluster\"\n#13 0x000000000076547f in standard_ProcessUtility (pstmt=pstmt@entry=0x1d05518, \n queryString=queryString@entry=0x1d046e0 \"CLUSTER huawei_m2000_config_enodebcell_enodeb USING huawei_m2000_config_enodebcell_enodeb_coalesce_idx ;\", context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x1d055f8, \n completionTag=completionTag@entry=0x7ffc05e6c0a0 \"\") at utility.c:659\n parsetree = 0x1d051f8\n isTopLevel = true\n isAtomicContext = false\n pstate = 0x1de8810\n __func__ = \"standard_ProcessUtility\"\n\n\n", "msg_date": "Fri, 11 Oct 2019 16:03:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "v12.0 ERROR: trying to store a heap tuple into wrong type of slot" }, { "msg_contents": "Hi,\n\nOn 2019-10-11 16:03:20 -0500, Justin Pryzby wrote:\n> I'm not sure why we have that index, and my script probably should have known\n> to choose a better one to cluster on, but still..\n>\n> ts=# CLUSTER huawei_m2000_config_enodebcell_enodeb USING huawei_m2000_config_enodebcell_enodeb_coalesce_idx ;\n> DEBUG: 00000: building index \"pg_toast_1840151315_index\" on table \"pg_toast_1840151315\" serially\n> LOCATION: index_build, index.c:2791\n> DEBUG: 00000: clustering \"public.huawei_m2000_config_enodebcell_enodeb\" using sequential scan and sort\n> LOCATION: copy_table_data, cluster.c:907\n> ERROR: XX000: trying to store a heap tuple into wrong type of slot\n> LOCATION: ExecStoreHeapTuple, execTuples.c:1328\n\nWell, that's annoying. There apparently is not a single test covering\ncluster on expression indexes, that' really ought to not be the\ncase. Equally annoying that I just broke this without noticing at all\n:(.\n\nThe cause of the error is that, while that sounds like it should be the\ncase, a virtual slot isn't sufficient for tuplesort_begin_cluster(). So\nthe fix is pretty trivial. 
Will fix.\n\nI started a separate thread about test coverage of tuplesort at\nhttps://www.postgresql.org/message-id/20191013144153.ooxrfglvnaocsrx2%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 13 Oct 2019 07:51:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: v12.0 ERROR: trying to store a heap tuple into wrong type of\n slot" }, { "msg_contents": "Hi,\n\nOn 2019-10-13 07:51:06 -0700, Andres Freund wrote:\n> On 2019-10-11 16:03:20 -0500, Justin Pryzby wrote:\n> > I'm not sure why we have that index, and my script probably should have known\n> > to choose a better one to cluster on, but still..\n> >\n> > ts=# CLUSTER huawei_m2000_config_enodebcell_enodeb USING huawei_m2000_config_enodebcell_enodeb_coalesce_idx ;\n> > DEBUG: 00000: building index \"pg_toast_1840151315_index\" on table \"pg_toast_1840151315\" serially\n> > LOCATION: index_build, index.c:2791\n> > DEBUG: 00000: clustering \"public.huawei_m2000_config_enodebcell_enodeb\" using sequential scan and sort\n> > LOCATION: copy_table_data, cluster.c:907\n> > ERROR: XX000: trying to store a heap tuple into wrong type of slot\n> > LOCATION: ExecStoreHeapTuple, execTuples.c:1328\n> \n> Well, that's annoying. There apparently is not a single test covering\n> cluster on expression indexes, that' really ought to not be the\n> case. Equally annoying that I just broke this without noticing at all\n> :(.\n> \n> The cause of the error is that, while that sounds like it should be the\n> case, a virtual slot isn't sufficient for tuplesort_begin_cluster(). So\n> the fix is pretty trivial. Will fix.\n\nI pushed the fix, including a few tests, a few hours ago. 
I hope that\nfixes the issue for you?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Oct 2019 13:50:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: v12.0 ERROR: trying to store a heap tuple into wrong type of\n slot" }, { "msg_contents": "On Tue, Oct 15, 2019 at 01:50:09PM -0700, Andres Freund wrote:\n> On 2019-10-13 07:51:06 -0700, Andres Freund wrote:\n> > On 2019-10-11 16:03:20 -0500, Justin Pryzby wrote:\n> > > ts=# CLUSTER huawei_m2000_config_enodebcell_enodeb USING huawei_m2000_config_enodebcell_enodeb_coalesce_idx ;\n> > The cause of the error is that, while that sounds like it should be the\n> > case, a virtual slot isn't sufficient for tuplesort_begin_cluster(). So\n> > the fix is pretty trivial. Will fix.\n> \n> I pushed the fix, including a few tests, a few hours ago. I hope that\n> fixes the issue for you?\n\nOn another server already running REL_12_STABLE, I created index with same\ndefinition, which didn't previously exist. I failed before pulling and\nworks after.\n\nts=# CLUSTER huawei_m2000_config_enodebcell_enodeb USING huawei_m2000_config_enodebcell_enodeb_coalesce_idx ;\nCLUSTER\n\nThanks,\nJustin\n\n\n", "msg_date": "Tue, 15 Oct 2019 17:26:59 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0 ERROR: trying to store a heap tuple into wrong type of\n slot" } ]
[ { "msg_contents": "One of our servers crashed last night like this:\n\n< 2019-10-10 22:31:02.186 EDT postgres >STATEMENT: REINDEX INDEX CONCURRENTLY child.eric_umts_rnc_utrancell_hsdsch_eul_201910_site_idx\n< 2019-10-10 22:31:02.399 EDT >LOG: server process (PID 29857) was terminated by signal 11: Segmentation fault\n< 2019-10-10 22:31:02.399 EDT >DETAIL: Failed process was running: REINDEX INDEX CONCURRENTLY child.eric_umts_rnc_utrancell_hsdsch_eul_201910_site_idx\n< 2019-10-10 22:31:02.399 EDT >LOG: terminating any other active server processes\n\nts=# \\d+ child.eric_umts_rnc_utrancell_hsdsch_eul_201910_site_idx\nIndex \"child.eric_umts_rnc_utrancell_hsdsch_eul_201910_site_idx\"\n Column | Type | Key? | Definition | Storage | Stats target\n---------+---------+------+------------+---------+--------------\n site_id | integer | yes | site_id | plain |\nbtree, for table \"child.eric_umts_rnc_utrancell_hsdsch_eul_201910\"\n\nThat's an index on a table partition, but not itself a child of a relkind=I\nindex.\n\nUnfortunately, there was no core file, and I'm still trying to reproduce it.\n\nI can't see that the table was INSERTed into during the reindex...\nBut looks like it was SELECTed from, and the report finished within 1sec of the\ncrash:\n\n(2019-10-10 22:30:50,485 - p1604 t140325365622592 - INFO): PID 1604 finished running report; est=None rows=552; cols=83; [...] 
duration:12\n\npostgres=# SELECT log_time, pid, session_id, left(message,99), detail FROM postgres_log_2019_10_10_2200 WHERE pid=29857 OR (log_time BETWEEN '2019-10-10 22:31:02.18' AND '2019-10-10 22:31:02.4' AND NOT message~'crash of another') ORDER BY log_time LIMIT 9;\n 2019-10-10 22:30:24.441-04 | 29857 | 5d9fe93f.74a1 | temporary file: path \"base/pgsql_tmp/pgsql_tmp29857.0.sharedfileset/0.0\", size 3096576 | \n 2019-10-10 22:30:24.442-04 | 29857 | 5d9fe93f.74a1 | temporary file: path \"base/pgsql_tmp/pgsql_tmp29857.0.sharedfileset/1.0\", size 2809856 | \n 2019-10-10 22:30:24.907-04 | 29857 | 5d9fe93f.74a1 | process 29857 still waiting for ShareLock on virtual transaction 30/103010 after 333.078 ms | Process holding the lock: 29671. Wait queue: 29857.\n 2019-10-10 22:31:02.186-04 | 29857 | 5d9fe93f.74a1 | process 29857 acquired ShareLock on virtual transaction 30/103010 after 37611.995 ms | \n 2019-10-10 22:31:02.186-04 | 29671 | 5d9fe92a.73e7 | duration: 50044.778 ms statement: SELECT fn, sz FROM +| \n | | | (SELECT file_name fn, file_size_bytes sz, +| \n | | | | \n 2019-10-10 22:31:02.399-04 | 1161 | 5d9cad9e.489 | terminating any other active server processes | \n 2019-10-10 22:31:02.399-04 | 1161 | 5d9cad9e.489 | server process (PID 29857) was terminated by signal 11: Segmentation fault | Failed process was running: REINDEX INDEX CONCURRENTLY child.eric_umts_rnc_utrancell_hsdsch_eul_201910_site_idx\n\nJustin\n\n\n", "msg_date": "Fri, 11 Oct 2019 19:44:46 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Fri, Oct 11, 2019 at 07:44:46PM -0500, Justin Pryzby wrote:\n> That's an index on a table partition, but not itself a child of a relkind=I\n> index.\n\nInteresting. Testing with a partition tree, and indexes on leaves\nwhich do not have dependencies with a parent I cannot reproduce\nanything. 
Perhaps you have some concurrent operations going on?\n\n> Unfortunately, there was no core file, and I'm still trying to reproduce it.\n\nForgot to set ulimit -c? Having a backtrace would surely help.\n--\nMichael", "msg_date": "Sun, 13 Oct 2019 18:06:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Sun, Oct 13, 2019 at 06:06:43PM +0900, Michael Paquier wrote:\n> On Fri, Oct 11, 2019 at 07:44:46PM -0500, Justin Pryzby wrote:\n> > Unfortunately, there was no core file, and I'm still trying to reproduce it.\n> \n> Forgot to set ulimit -c? Having a backtrace would surely help.\n\nFortunately (?) another server hit crashed last night.\n(Doesn't appear to be relevant, but this table has no inheritence/partition-ness).\n\nLooks like it's a race condition and dereferencing *holder=NULL. The first\ncrash was probably the same bug, due to report query running during \"reindex\nCONCURRENTLY\", and probably finished at nearly the same time as another locker.\n\nRelevant code introduced here:\n\ncommit ab0dfc961b6a821f23d9c40c723d11380ce195a6\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Tue Apr 2 15:18:08 2019 -0300\n\n Report progress of CREATE INDEX operations\n\nNeeds to be conditionalized (as anticipated by the comment)\n\n+\t\t\tif (holder)\n pgstat_progress_update_param(PROGRESS_WAITFOR_CURRENT_PID,\n holder->pid);\n\n\nCore was generated by `postgres: postgres ts [local] REINDEX '.\nProgram terminated with signal 11, Segmentation fault.\n\n#0 WaitForLockersMultiple (locktags=locktags@entry=0x1d30548, lockmode=lockmode@entry=5, progress=progress@entry=true) at lmgr.c:911\n#1 0x00000000005c2ac8 in ReindexRelationConcurrently (relationOid=relationOid@entry=17618, options=options@entry=0) at indexcmds.c:3090\n#2 0x00000000005c328a in ReindexIndex (indexRelation=<optimized out>, options=0, concurrent=<optimized out>) at indexcmds.c:2352\n#3 
0x00000000007657fe in standard_ProcessUtility (pstmt=pstmt@entry=0x1d05468, queryString=queryString@entry=0x1d046e0 \"REINDEX INDEX CONCURRENTLY loaded_cdr_files_filename\",\n context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x1d05548,\n completionTag=completionTag@entry=0x7ffc05e6c0a0 \"\") at utility.c:787\n#4 0x00007f21517204ef in pgss_ProcessUtility (pstmt=0x1d05468, queryString=0x1d046e0 \"REINDEX INDEX CONCURRENTLY loaded_cdr_files_filename\", context=PROCESS_UTILITY_TOPLEVEL,\n params=0x0, queryEnv=0x0, dest=0x1d05548, completionTag=0x7ffc05e6c0a0 \"\") at pg_stat_statements.c:1006\n#5 0x0000000000762816 in PortalRunUtility (portal=0x1d7a4e0, pstmt=0x1d05468, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=0x1d05548,\n completionTag=0x7ffc05e6c0a0 \"\") at pquery.c:1175\n#6 0x0000000000763267 in PortalRunMulti (portal=portal@entry=0x1d7a4e0, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x1d05548,\n altdest=altdest@entry=0x1d05548, completionTag=completionTag@entry=0x7ffc05e6c0a0 \"\") at pquery.c:1328\n#7 0x0000000000763e45 in PortalRun (portal=<optimized out>, count=9223372036854775807, isTopLevel=<optimized out>, run_once=<optimized out>, dest=0x1d05548, altdest=0x1d05548,\n completionTag=0x7ffc05e6c0a0 \"\") at pquery.c:796\n#8 0x000000000075ff45 in exec_simple_query (query_string=<optimized out>) at postgres.c:1215\n#9 0x0000000000761212 in PostgresMain (argc=<optimized out>, argv=<optimized out>, dbname=<optimized out>, username=<optimized out>) at postgres.c:4236\n#10 0x0000000000483d02 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4431\n#11 BackendStartup (port=0x1d2b340) at postmaster.c:4122\n#12 ServerLoop () at postmaster.c:1704\n#13 0x00000000006f0b1f in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1cff280) at postmaster.c:1377\n#14 0x0000000000484c93 in main (argc=3, 
argv=0x1cff280) at main.c:228\n\nbt f\n\n#0 WaitForLockersMultiple (locktags=locktags@entry=0x1d30548, lockmode=lockmode@entry=5, progress=progress@entry=true) at lmgr.c:911\n holder = 0x0\n lockholders = 0x1d9b778\n holders = <optimized out>\n lc = 0x1d9bf80\n total = <optimized out>\n done = 1\n#1 0x00000000005c2ac8 in ReindexRelationConcurrently (relationOid=relationOid@entry=17618, options=options@entry=0) at indexcmds.c:3090\n heapRelationIds = 0x1d30360\n indexIds = 0x1d303b0\n newIndexIds = <optimized out>\n relationLocks = <optimized out>\n lockTags = <optimized out>\n lc = 0x0\n lc2 = 0x0\n private_context = <optimized out>\n oldcontext = <optimized out>\n relkind = 105 'i'\n relationName = 0x0\n relationNamespace = 0x0\n ru0 = {tv = {tv_sec = 30592544, tv_usec = 7232025}, ru = {ru_utime = {tv_sec = 281483566645394, tv_usec = 75668733820930}, ru_stime = {tv_sec = 0, tv_usec = 30592272}, {\n ru_maxrss = 0, __ru_maxrss_word = 0}, {ru_ixrss = 0, __ru_ixrss_word = 0}, {ru_idrss = 105, __ru_idrss_word = 105}, {ru_isrss = -926385342574214656, \n __ru_isrss_word = -926385342574214656}, {ru_minflt = 8924839, __ru_minflt_word = 8924839}, {ru_majflt = 0, __ru_majflt_word = 0}, {ru_nswap = 17618, \n __ru_nswap_word = 17618}, {ru_inblock = 139781327898864, __ru_inblock_word = 139781327898864}, {ru_oublock = 30430312, __ru_oublock_word = 30430312}, {\n ru_msgsnd = 139781327898864, __ru_msgsnd_word = 139781327898864}, {ru_msgrcv = 139781327898864, __ru_msgrcv_word = 139781327898864}, {ru_nsignals = 139781327898864, \n __ru_nsignals_word = 139781327898864}, {ru_nvcsw = 139781327898864, __ru_nvcsw_word = 139781327898864}, {ru_nivcsw = 139781327898864, \n __ru_nivcsw_word = 139781327898864}}}\n __func__ = \"ReindexRelationConcurrently\"\n#2 0x00000000005c328a in ReindexIndex (indexRelation=<optimized out>, options=0, concurrent=<optimized out>) at indexcmds.c:2352\n state = {concurrent = true, locked_table_oid = 17608}\n indOid = 17618\n irel = <optimized out>\n 
persistence = 112 'p'\n...\n\n\n", "msg_date": "Sun, 13 Oct 2019 08:03:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "Resending this message, which didn't make it to the list when I sent it\nearlier. (And, notified -www).\n\nOn Sun, Oct 13, 2019 at 06:06:43PM +0900, Michael Paquier wrote:\n> On Fri, Oct 11, 2019 at 07:44:46PM -0500, Justin Pryzby wrote:\n> > Unfortunately, there was no core file, and I'm still trying to reproduce it.\n> \n> Forgot to set ulimit -c? Having a backtrace would surely help.\n\nFortunately (?) another server hit crashed last night.\n(Doesn't appear to be relevant, but this table has no inheritence/partition-ness).\n\nLooks like it's a race condition and dereferencing *holder=NULL. The first\ncrash was probably the same bug, due to report query running during \"reindex\nCONCURRENTLY\", and probably finished at nearly the same time as another locker.\n\nRelevant code introduced here:\n\ncommit ab0dfc961b6a821f23d9c40c723d11380ce195a6\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Tue Apr 2 15:18:08 2019 -0300\n\n Report progress of CREATE INDEX operations\n\nNeeds to be conditionalized (as anticipated by the comment)\n\n+\t\t\tif (holder)\n pgstat_progress_update_param(PROGRESS_WAITFOR_CURRENT_PID,\n holder->pid);\n\n\nCore was generated by `postgres: postgres ts [local] REINDEX '.\nProgram terminated with signal 11, Segmentation fault.\n\n#0 WaitForLockersMultiple (locktags=locktags@entry=0x1d30548, lockmode=lockmode@entry=5, progress=progress@entry=true) at lmgr.c:911\n#1 0x00000000005c2ac8 in ReindexRelationConcurrently (relationOid=relationOid@entry=17618, options=options@entry=0) at indexcmds.c:3090\n#2 0x00000000005c328a in ReindexIndex (indexRelation=<optimized out>, options=0, concurrent=<optimized out>) at indexcmds.c:2352\n#3 0x00000000007657fe in standard_ProcessUtility (pstmt=pstmt@entry=0x1d05468, 
queryString=queryString@entry=0x1d046e0 \"REINDEX INDEX CONCURRENTLY loaded_cdr_files_filename\",\n context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x1d05548,\n completionTag=completionTag@entry=0x7ffc05e6c0a0 \"\") at utility.c:787\n#4 0x00007f21517204ef in pgss_ProcessUtility (pstmt=0x1d05468, queryString=0x1d046e0 \"REINDEX INDEX CONCURRENTLY loaded_cdr_files_filename\", context=PROCESS_UTILITY_TOPLEVEL,\n params=0x0, queryEnv=0x0, dest=0x1d05548, completionTag=0x7ffc05e6c0a0 \"\") at pg_stat_statements.c:1006\n#5 0x0000000000762816 in PortalRunUtility (portal=0x1d7a4e0, pstmt=0x1d05468, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=0x1d05548,\n completionTag=0x7ffc05e6c0a0 \"\") at pquery.c:1175\n#6 0x0000000000763267 in PortalRunMulti (portal=portal@entry=0x1d7a4e0, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x1d05548,\n altdest=altdest@entry=0x1d05548, completionTag=completionTag@entry=0x7ffc05e6c0a0 \"\") at pquery.c:1328\n#7 0x0000000000763e45 in PortalRun (portal=<optimized out>, count=9223372036854775807, isTopLevel=<optimized out>, run_once=<optimized out>, dest=0x1d05548, altdest=0x1d05548,\n completionTag=0x7ffc05e6c0a0 \"\") at pquery.c:796\n#8 0x000000000075ff45 in exec_simple_query (query_string=<optimized out>) at postgres.c:1215\n#9 0x0000000000761212 in PostgresMain (argc=<optimized out>, argv=<optimized out>, dbname=<optimized out>, username=<optimized out>) at postgres.c:4236\n#10 0x0000000000483d02 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4431\n#11 BackendStartup (port=0x1d2b340) at postmaster.c:4122\n#12 ServerLoop () at postmaster.c:1704\n#13 0x00000000006f0b1f in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1cff280) at postmaster.c:1377\n#14 0x0000000000484c93 in main (argc=3, argv=0x1cff280) at main.c:228\n\nbt f\n\n#0 WaitForLockersMultiple 
(locktags=locktags@entry=0x1d30548, lockmode=lockmode@entry=5, progress=progress@entry=true) at lmgr.c:911\n holder = 0x0\n lockholders = 0x1d9b778\n holders = <optimized out>\n lc = 0x1d9bf80\n total = <optimized out>\n done = 1\n#1 0x00000000005c2ac8 in ReindexRelationConcurrently (relationOid=relationOid@entry=17618, options=options@entry=0) at indexcmds.c:3090\n heapRelationIds = 0x1d30360\n indexIds = 0x1d303b0\n newIndexIds = <optimized out>\n relationLocks = <optimized out>\n lockTags = <optimized out>\n lc = 0x0\n lc2 = 0x0\n private_context = <optimized out>\n oldcontext = <optimized out>\n relkind = 105 'i'\n relationName = 0x0\n relationNamespace = 0x0\n ru0 = {tv = {tv_sec = 30592544, tv_usec = 7232025}, ru = {ru_utime = {tv_sec = 281483566645394, tv_usec = 75668733820930}, ru_stime = {tv_sec = 0, tv_usec = 30592272}, {\n ru_maxrss = 0, __ru_maxrss_word = 0}, {ru_ixrss = 0, __ru_ixrss_word = 0}, {ru_idrss = 105, __ru_idrss_word = 105}, {ru_isrss = -926385342574214656, \n __ru_isrss_word = -926385342574214656}, {ru_minflt = 8924839, __ru_minflt_word = 8924839}, {ru_majflt = 0, __ru_majflt_word = 0}, {ru_nswap = 17618, \n __ru_nswap_word = 17618}, {ru_inblock = 139781327898864, __ru_inblock_word = 139781327898864}, {ru_oublock = 30430312, __ru_oublock_word = 30430312}, {\n ru_msgsnd = 139781327898864, __ru_msgsnd_word = 139781327898864}, {ru_msgrcv = 139781327898864, __ru_msgrcv_word = 139781327898864}, {ru_nsignals = 139781327898864, \n __ru_nsignals_word = 139781327898864}, {ru_nvcsw = 139781327898864, __ru_nvcsw_word = 139781327898864}, {ru_nivcsw = 139781327898864, \n __ru_nivcsw_word = 139781327898864}}}\n __func__ = \"ReindexRelationConcurrently\"\n#2 0x00000000005c328a in ReindexIndex (indexRelation=<optimized out>, options=0, concurrent=<optimized out>) at indexcmds.c:2352\n state = {concurrent = true, locked_table_oid = 17608}\n indOid = 17618\n irel = <optimized out>\n persistence = 112 'p'\n...\n\n\n", "msg_date": "Sun, 13 Oct 2019 11:24:26 
-0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On 2019-Oct-13, Justin Pryzby wrote:\n\n> Looks like it's a race condition and dereferencing *holder=NULL. The first\n> crash was probably the same bug, due to report query running during \"reindex\n> CONCURRENTLY\", and probably finished at nearly the same time as another locker.\n\nOoh, right, makes sense. There's another spot with the same mistake ...\nthis patch should fix it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 13 Oct 2019 15:10:21 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Sun, Oct 13, 2019 at 03:10:21PM -0300, Alvaro Herrera wrote:\n> On 2019-Oct-13, Justin Pryzby wrote:\n> \n> > Looks like it's a race condition and dereferencing *holder=NULL. The first\n> > crash was probably the same bug, due to report query running during \"reindex\n> > CONCURRENTLY\", and probably finished at nearly the same time as another locker.\n> \n> Ooh, right, makes sense. There's another spot with the same mistake ...\n> this patch should fix it.\n\nI would maybe chop off the 2nd sentence, since conditionalizing indicates that\nwe do actually care.\n\n+ * If requested, publish who we're going to wait for. 
This is not\n+ * 100% accurate if they're already gone, but we don't care.\n\nJustin\n\n\n", "msg_date": "Sun, 13 Oct 2019 13:14:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On 2019-Oct-13, Justin Pryzby wrote:\n\n> On Sun, Oct 13, 2019 at 03:10:21PM -0300, Alvaro Herrera wrote:\n> > On 2019-Oct-13, Justin Pryzby wrote:\n> > \n> > > Looks like it's a race condition and dereferencing *holder=NULL. The first\n> > > crash was probably the same bug, due to report query running during \"reindex\n> > > CONCURRENTLY\", and probably finished at nearly the same time as another locker.\n> > \n> > Ooh, right, makes sense. There's another spot with the same mistake ...\n> > this patch should fix it.\n> \n> I would maybe chop off the 2nd sentence, since conditionalizing indicates that\n> we do actually care.\n> \n> + * If requested, publish who we're going to wait for. This is not\n> + * 100% accurate if they're already gone, but we don't care.\n\nTrue. And we can copy the resulting comment to the other spot.\n\n(FWIW I expect the crash is possible not just in reindex but also in\nCREATE INDEX CONCURRENTLY.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 13 Oct 2019 16:18:34 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Sun, Oct 13, 2019 at 04:18:34PM -0300, Alvaro Herrera wrote:\n> True. And we can copy the resulting comment to the other spot.\n> \n> (FWIW I expect the crash is possible not just in reindex but also in\n> CREATE INDEX CONCURRENTLY.)\n\nI need to think about that, but shouldn't we have a way to reproduce\nthat case rather reliably with an isolation test? 
The patch looks\ngood to me, these are also the two places I spotted yesterday after a\nquick lookup. The only other caller is isTempNamespaceInUse() which\ndoes its thing correctly.\n--\nMichael", "msg_date": "Mon, 14 Oct 2019 08:57:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Mon, Oct 14, 2019 at 08:57:16AM +0900, Michael Paquier wrote:\n> I need to think about that, but shouldn't we have a way to reproduce\n> that case rather reliably with an isolation test? The patch looks\n> good to me, these are also the two places I spotted yesterday after a\n> quick lookup. The only other caller is isTempNamespaceInUse() which\n> does its thing correctly.\n\nActually, reindex-concurrently.spec stresses that, except that in\norder to reproduce the failure we need to close the connection exactly\nin the waiting loop before sending the progress report but after\nlooking at VirtualTransactionIdIsValid. Using a debugger and a simple\ncheckpoint I can easily reproduce the crash, but we'd need more to\nmake that test case deterministic, like a termination with the correct\ntiming.\n\nSo, Alvaro, your patch looks good to me. Could you apply it?\n--\nMichael", "msg_date": "Tue, 15 Oct 2019 15:35:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On 2019-Oct-15, Michael Paquier wrote:\n\n> So, Alvaro, your patch looks good to me. 
Could you apply it?\n\nThanks, pushed.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 16 Oct 2019 09:53:56 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Sun, Oct 13, 2019 at 04:18:34PM -0300, Alvaro Herrera wrote:\n> (FWIW I expect the crash is possible not just in reindex but also in\n> CREATE INDEX CONCURRENTLY.)\n\nFWIW, for sake of list archives, and for anyone running v12 hoping to avoid\ncrashing, I believe we hit this for DROP INDEX CONCURRENTLY, although I don't\nhave the backtrace to prove it.\n\nJustin\n\n\n", "msg_date": "Wed, 16 Oct 2019 16:11:46 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Wed, Oct 16, 2019 at 09:53:56AM -0300, Alvaro Herrera wrote:\n> Thanks, pushed.\n\nThanks, Alvaro.\n--\nMichael", "msg_date": "Thu, 17 Oct 2019 09:49:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Wed, Oct 16, 2019 at 04:11:46PM -0500, Justin Pryzby wrote:\n> On Sun, Oct 13, 2019 at 04:18:34PM -0300, Alvaro Herrera wrote:\n>> (FWIW I expect the crash is possible not just in reindex but also in\n>> CREATE INDEX CONCURRENTLY.)\n> \n> FWIW, for sake of list archives, and for anyone running v12 hoping to avoid\n> crashing, I believe we hit this for DROP INDEX CONCURRENTLY, although I don't\n> have the backtrace to prove it.\n\nYou may not have a backtrace, but I think that you are right:\nWaitForLockers() gets called in index_drop() with progress reporting\nenabled. 
index_drop() would also be taken by REINDEX CONCURRENTLY\nthrough performMultipleDeletions() but we cannot know if it gets used\nfor REINDEX CONCURRENTLY or for DROP INDEX CONCURRENTLY as it goes\nthrough the central deletion machinery, so we have to mark progress\nreporting as true anyway. Maybe that's worth a comment in index_drop\nwhen calling WaitForLockers() because it is not actually that obvious,\nsay like that:\n@@ -2157,7 +2157,10 @@ index_drop(Oid indexId, bool concurrent, bool\nconcurrent_lock_mode)\n\n /*\n * Wait till every transaction that saw the old index state has\n- * finished.\n+ * finished. Progress reporting is enabled here for REINDEX\n+ * CONCURRENTLY, but not for DROP INDEX CONCURRENTLY. Track\n+ * the progress through WaitForLockers() anyway, the information\n+ * will not show up if using DROP INDEX CONCURRENTLY.\n */\n WaitForLockers(heaplocktag, AccessExclusiveLock, true);\n\nThoughts?\n--\nMichael", "msg_date": "Thu, 17 Oct 2019 10:04:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On 2019-Oct-17, Michael Paquier wrote:\n\n> You may not have a backtrace, but I think that you are right:\n> WaitForLockers() gets called in index_drop() with progress reporting\n> enabled. index_drop() would also be taken by REINDEX CONCURRENTLY\n> through performMultipleDeletions() but we cannot know if it gets used\n> for REINDEX CONCURRENTLY or for DROP INDEX CONCURRENTLY as it goes\n> through the central deletion machinery, so we have to mark progress\n> reporting as true anyway. Maybe that's worth a comment in index_drop\n> when calling WaitForLockers() because it is not actually that obvious,\n> say like that:\n\nHmm, I wonder if it isn't the right solution to set 'progress' to false\nin that spot, instead. 
index_drop says it must only be called by the\ndependency machinery; are we depending on that to pass-through the need\nto update progress status? I'm going over that code now.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 17 Oct 2019 05:33:22 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Thu, Oct 17, 2019 at 05:33:22AM -0300, Alvaro Herrera wrote:\n> Hmm, I wonder if it isn't the right solution to set 'progress' to false\n> in that spot, instead. index_drop says it must only be called by the\n> dependency machinery; are we depending on that to pass-through the need\n> to update progress status? I'm going over that code now.\n\npgstat_progress_end_command() is done for REINDEX CONCURRENTLY after\nthe concurrent drop, so it made sense to me to still report any PID\nREINDEX CONC is waiting for at this stage.\n--\nMichael", "msg_date": "Thu, 17 Oct 2019 17:50:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On 2019-Oct-17, Michael Paquier wrote:\n\n> On Thu, Oct 17, 2019 at 05:33:22AM -0300, Alvaro Herrera wrote:\n> > Hmm, I wonder if it isn't the right solution to set 'progress' to false\n> > in that spot, instead. index_drop says it must only be called by the\n> > dependency machinery; are we depending on that to pass-through the need\n> > to update progress status? I'm going over that code now.\n> \n> pgstat_progress_end_command() is done for REINDEX CONCURRENTLY after\n> the concurrent drop, so it made sense to me to still report any PID\n> REINDEX CONC is waiting for at this stage.\n\nYeah, okay. So let's talk about your proposed new comment. 
First,\nthere are two spots where WaitForLockers is called in index_drop and\nyou're proposing to patch the second one. I think we should patch the\nfirst one and reference that one from the second one. I propose\nsomething like this (sorry for crude pasting):\n\n\t * Note: the reason we use actual lock acquisition here, rather than\n\t * just checking the ProcArray and sleeping, is that deadlock is\n\t * possible if one of the transactions in question is blocked trying\n\t * to acquire an exclusive lock on our table. The lock code will\n\t * detect deadlock and error out properly.\n\t * \n\t * Note: we report progress through WaitForLockers() unconditionally\n\t * here, even though it will only be used by REINDEX CONCURRENTLY and\n\t * not DROP INDEX CONCURRENTLY.\n\t */\n\nand then\n\n /*\n * Wait till every transaction that saw the old index state has\n- * finished.\n+ * finished. See above about progress reporting.\n */\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 17 Oct 2019 06:56:48 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Thu, Oct 17, 2019 at 06:56:48AM -0300, Alvaro Herrera wrote:\n> On 2019-Oct-17, Michael Paquier wrote:\n>> pgstat_progress_end_command() is done for REINDEX CONCURRENTLY after\n>> the concurrent drop, so it made sense to me to still report any PID\n>> REINDEX CONC is waiting for at this stage.\n> \n> Yeah, okay. So let's talk about your proposed new comment. First,\n> there are two spots where WaitForLockers is called in index_drop and\n> you're proposing to patch the second one. I think we should patch the\n> first one and reference that one from the second one. I propose\n> something like this (sorry for crude pasting):\n>\n> <comments>\n\nWhat you are proposing here sounds fine to me. 
Perhaps you would\nprefer to adjust the code yourself?\n--\nMichael", "msg_date": "Fri, 18 Oct 2019 10:23:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On 2019-Oct-18, Michael Paquier wrote:\n\n> What you are proposing here sounds fine to me. Perhaps you would\n> prefer to adjust the code yourself?\n\nSure thing, thanks, done :-)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 18 Oct 2019 07:30:37 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" }, { "msg_contents": "On Fri, Oct 18, 2019 at 07:30:37AM -0300, Alvaro Herrera wrote:\n> Sure thing, thanks, done :-)\n\nThanks, Alvaro.\n--\nMichael", "msg_date": "Sat, 19 Oct 2019 11:14:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: segfault in reindex CONCURRENTLY" } ]
[ { "msg_contents": "Hi,\n\nover in pgsql-bugs [1] we got a report about CREATE TEXT SEARCH\nDICTIONARY causing segfaults on 12.0. Simply running\n\n CREATE TEXT SEARCH DICTIONARY hunspell_num (Template=ispell,\n DictFile=hunspell_sample_num, AffFile=hunspell_sample_long);\n\ndoes trigger a crash, 100% of the time. The crash was reported on 12.0,\nbut it's in fact present since 9.6.\n\nOn 9.5 the example does not work, because that version does not (a)\ninclude the hunspell dictionaries used in the example, and (b) it does\nnot support long flags. So even after copying the dictionaries and\ntweaking them a bit it still passes without a crash.\n\nLooking at the commit history of spell.c, there seems to be a bunch of\ncommits in 2016 (e.g. f4ceed6ceba3) touching exactly this part of the\ncode (hunspell), and it also correlates quite nicely with the affected\nbranches (9.6+). So my best guess is it's a bug in those changes.\n\nA complete backtrace looks like this:\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x00000000008fca10 in getCompoundAffixFlagValue (Conf=0x20dd3b8, s=0x7f7f7f7f7f7f7f7f <error: Cannot access memory at address 0x7f7f7f7f7f7f7f7f>) at spell.c:1126\n1126\t\twhile (*flagcur)\n(gdb) bt\n#0 0x00000000008fca10 in getCompoundAffixFlagValue (Conf=0x20dd3b8, s=0x7f7f7f7f7f7f7f7f <error: Cannot access memory at address 0x7f7f7f7f7f7f7f7f>) at spell.c:1126\n#1 0x00000000008fdd1c in makeCompoundFlags (Conf=0x20dd3b8, affix=303) at spell.c:1608\n#2 0x00000000008fe04e in mkSPNode (Conf=0x20dd3b8, low=0, high=1, level=3) at spell.c:1680\n#3 0x00000000008fe113 in mkSPNode (Conf=0x20dd3b8, low=0, high=1, level=2) at spell.c:1692\n#4 0x00000000008fde89 in mkSPNode (Conf=0x20dd3b8, low=0, high=4, level=1) at spell.c:1652\n#5 0x00000000008fde89 in mkSPNode (Conf=0x20dd3b8, low=0, high=9, level=0) at spell.c:1652\n#6 0x00000000008fe50b in NISortDictionary (Conf=0x20dd3b8) at spell.c:1785\n#7 0x00000000008f9e14 in dispell_init (fcinfo=0x7ffdda6abc90) at 
dict_ispell.c:89\n#8 0x0000000000a6210a in FunctionCall1Coll (flinfo=0x7ffdda6abcf0, collation=0, arg1=34478896) at fmgr.c:1140\n#9 0x0000000000a62c72 in OidFunctionCall1Coll (functionId=3731, collation=0, arg1=34478896) at fmgr.c:1418\n#10 0x00000000006c2dcb in verify_dictoptions (tmplId=3733, dictoptions=0x20e1b30) at tsearchcmds.c:402\n#11 0x00000000006c2f4c in DefineTSDictionary (names=0x20ba278, parameters=0x20ba458) at tsearchcmds.c:463\n#12 0x00000000008eb274 in ProcessUtilitySlow (pstate=0x20db518, pstmt=0x20bab88, queryString=0x20b97a8 \"CREATE TEXT SEARCH DICTIONARY hunspell_num (Template=ispell,\\nDictFile=hunspell_sample_num, AffFile=hunspell_sample_long);\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x20bac80, \n completionTag=0x7ffdda6ac540 \"\") at utility.c:1272\n#13 0x00000000008ea7e5 in standard_ProcessUtility (pstmt=0x20bab88, queryString=0x20b97a8 \"CREATE TEXT SEARCH DICTIONARY hunspell_num (Template=ispell,\\nDictFile=hunspell_sample_num, AffFile=hunspell_sample_long);\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x20bac80, \n completionTag=0x7ffdda6ac540 \"\") at utility.c:927\n#14 0x00000000008e991a in ProcessUtility (pstmt=0x20bab88, queryString=0x20b97a8 \"CREATE TEXT SEARCH DICTIONARY hunspell_num (Template=ispell,\\nDictFile=hunspell_sample_num, AffFile=hunspell_sample_long);\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x20bac80, completionTag=0x7ffdda6ac540 \"\")\n at utility.c:360\n#15 0x00000000008e88e1 in PortalRunUtility (portal=0x2121368, pstmt=0x20bab88, isTopLevel=true, setHoldSnapshot=false, dest=0x20bac80, completionTag=0x7ffdda6ac540 \"\") at pquery.c:1175\n#16 0x00000000008e8afe in PortalRunMulti (portal=0x2121368, isTopLevel=true, setHoldSnapshot=false, dest=0x20bac80, altdest=0x20bac80, completionTag=0x7ffdda6ac540 \"\") at pquery.c:1321\n#17 0x00000000008e8032 in PortalRun (portal=0x2121368, count=9223372036854775807, isTopLevel=true, run_once=true, 
dest=0x20bac80, altdest=0x20bac80, completionTag=0x7ffdda6ac540 \"\") at pquery.c:796\n#18 0x00000000008e1f51 in exec_simple_query (query_string=0x20b97a8 \"CREATE TEXT SEARCH DICTIONARY hunspell_num (Template=ispell,\\nDictFile=hunspell_sample_num, AffFile=hunspell_sample_long);\") at postgres.c:1215\n#19 0x00000000008e6243 in PostgresMain (argc=1, argv=0x20e54f8, dbname=0x20e5340 \"test\", username=0x20b53e8 \"user\") at postgres.c:4236\n#20 0x000000000083c5e2 in BackendRun (port=0x20dd980) at postmaster.c:4437\n#21 0x000000000083bdb3 in BackendStartup (port=0x20dd980) at postmaster.c:4128\n#22 0x00000000008381d7 in ServerLoop () at postmaster.c:1704\n#23 0x0000000000837a83 in PostmasterMain (argc=3, argv=0x20b3350) at postmaster.c:1377\n#24 0x0000000000759507 in main (argc=3, argv=0x20b3350) at main.c:228\n(gdb) up\n#1 0x00000000008fdd1c in makeCompoundFlags (Conf=0x20dd3b8, affix=303) at spell.c:1608\n1608\t\treturn (getCompoundAffixFlagValue(Conf, str) & FF_COMPOUNDFLAGMASK);\n(gdb) p *Conf\n$1 = {maffixes = 16, naffixes = 10, Affix = 0x2181fd0, Suffix = 0x0, Prefix = 0x0, Dictionary = 0x0, AffixData = 0x20e1fa8, lenAffixData = 12, nAffixData = 12, useFlagAliases = true, CompoundAffix = 0x0, usecompound = true, flagMode = FM_LONG, CompoundAffixFlags = 0x217d328, nCompoundAffixFlag = 6, \n mCompoundAffixFlag = 10, buildCxt = 0x217cf20, Spell = 0x7bd99b4f6050, nspell = 9, mspell = 20480, firstfree = 0x217f1b8 \"\", avail = 7608}\n(gdb) p affix\n$2 = 303\n\nSo the affix value is rather strange, because it's clearly outside the\nset of flags in Conf (it only has 12 items, so 303 is waaaay too high).\n\nI don't have time to investigate this further and I'm getting lost in\nspell.c, so I'm adding Teodor who committed f4ceed6ceba3 in 2016. 
One\ninteresting fact is that this is likely due to some discrepancy between\nthe dictfile and afffile - the segfaulting command appears to mix\nhunspell_sample_num and hunspell_sample_long:\n\n CREATE TEXT SEARCH DICTIONARY hunspell_num (Template=ispell,\n DictFile=hunspell_sample_num, AffFile=hunspell_sample_long);\n\nBut when using the \"same\" group for both dictfile and afffile, it seems\nto work just fine.\n\n[1] https://www.postgresql.org/message-id/flat/16050-024ae722464ab604%40postgresql.org\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 13 Oct 2019 03:26:10 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "CREATE TEXT SEARCH DICTIONARY segfaulting on 9.6+" }, { "msg_contents": "I spent a bit of time investigating this, and it seems the new code is\nsomewhat too trusting when it comes to data from the affix/dict files.\nIn this particular case, it boils down to this code in NISortDictionary:\n\n if (Conf->useFlagAliases)\n {\n for (i = 0; i < Conf->nspell; i++)\n {\n char *end;\n\n if (*Conf->Spell[i]->p.flag != '\\0')\n {\n curaffix = strtol(Conf->Spell[i]->p.flag, &end, 10);\n if (Conf->Spell[i]->p.flag == end || errno == ERANGE)\n ereport(ERROR,\n (errcode(ERRCODE_CONFIG_FILE_ERROR),\n errmsg(\"invalid affix alias \\\"%s\\\"\",\n Conf->Spell[i]->p.flag)));\n }\n ...\n Conf->Spell[i]->p.d.affix = curaffix;\n ...\n }\n ...\n }\n\nSo it simply grabs whatever it finds in the dict file, parses it and\nthen (later) we use it as index to access the AffixData array, even if\nthe value is way out of bounds.\n\nFor example, in this case hunspell_sample_long.affix contains about\n10 affixes, but then we parse the hunspell_sample_num.dict file, and we\nstumble upon\n\n book/302,301,202,303\n\nand we parse the flags as integers, and interpret them as indexes in the\nAffixData array. 
Clearly, 303 is waaaay out of bounds, triggering the\nsegfault crash.\n\nSo I think we need some sort of cross-check here. We certainly need to\nmake NISortDictionary() check the affix value is within AffixData\nbounds, and error out when the index is non-sensical (maybe negative\nand/or exceeding nAffixData). Maybe there's a simple way to check if the\naffix/dict files match. The failing affix has\n\n FLAG num\n\nwhile with\n\n FLAG long\n\nit works just fine. But I'm not sure that's actually possible, because I\ndon't see anything in hunspell_sample_num.dict that would allow us to\ndecide that it expects \"FLAG num\" and not \"FLAG long\". Furthermore, we\ncertainly can't rely on this - we still need to check the range.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 13 Oct 2019 23:38:08 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: CREATE TEXT SEARCH DICTIONARY segfaulting on 9.6+" }, { "msg_contents": "Hello Tomas,\n\nOn 2019/10/13 10:26, Tomas Vondra wrote:\n> over in pgsql-bugs [1] we got a report about CREATE TEXT SEARCH\n> DICTIONARY causing segfaults on 12.0. Simply running\n> \n>    CREATE TEXT SEARCH DICTIONARY hunspell_num (Template=ispell,\n>    DictFile=hunspell_sample_num, AffFile=hunspell_sample_long);\n> \n> does trigger a crash, 100% of the time. The crash was reported on 12.0,\n> but it's in fact present since 9.6.\n> \n> On 9.5 the example does not work, because that version does not (a)\n> include the hunspell dictionaries used in the example, and (b) it does\n> not support long flags. So even after copying the dictionaries and\n> tweaking them a bit it still passes without a crash.\n\nThis crash is not because of long flags, but because of aliases (more \nthoughts below).\n\n> Looking at the commit history of spell.c, there seems to be a bunch of\n> commits in 2016 (e.g. 
f4ceed6ceba3) touching exactly this part of the\n> code (hunspell), and it also correlates quite nicely with the affected\n> branches (9.6+). So my best guess is it's a bug in those changes.\n\nYeah, there were a lot of changes.\n\n> So it simply grabs whatever it finds in the dict file, parses it and\n> then (later) we use it as index to access the AffixData array, even if\n> the value is way out of bounds.\n\nYes, we enter this code if an affix file defines aliases (AF parameter). \nThe AffixData array is used to store those aliases.\n\nYou can find more about the hunspell format here:\nhttps://linux.die.net/man/4/hunspell\n\nIn the example we have the following aliases:\nAF 11\nAF cZ\t\t#1\nAF cL\t\t#2\n...\nAF sB\t\t#11\n\nAnd in the dictionary file we should use their indexes (from 1 to 11). \nThese aliases define sets of flags, and in the dict file we can use only a \nsingle index:\nbook/3\nbook/11\n\nbut not:\nbook/3,4\nbook/2,11\n\nI added checking of this last case in the attached patch. PostgreSQL \nwill raise an error if it sees a non-numeric, non-whitespace character \nafter the index.\n\nAliases can be used with all flag types: 'default' (i.e. FM_CHAR), \n'long', and if I'm not mistaken 'num'.\n\n> So I think we need some sort of cross-check here. We certainly need to\n> make NISortDictionary() check the affix value is within AffixData\n> bounds, and error out when the index is non-sensical (maybe negative\n> and/or exceeding nAffixData).\n\nI agree, I attached the patch which do this. I also added couple \nasserts, tests and fixed condition in getAffixFlagSet():\n\n-\t\tif (curaffix > 0 && curaffix <= Conf->nAffixData)\n+\t\tif (curaffix > 0 && curaffix < Conf->nAffixData)\n\nI think it could be a bug, because curaffix can't be equal to \nConf->nAffixData.\n\n> Maybe there's a simple way to check if the affix/dict files match.\n\nI'm not sure how to properly fix this either. 
The only thing we could \ncheck is commas in affix flags in a dict file:\n\nbook/302,301,202,303\n\nFM_CHAR and FM_LONG dictionaries can't have commas. They should have the \nfollowing affix flags:\n\nbook/sGsJpUsS\t# 4 affixes for FM_LONG\nbook/GJUS\t# 4 affixes for FM_CHAR\n\nBut I guess they could have numbers in flags (as help says \"Set flag \ntype. Default type is the extended ASCII (8-bit) character.\") and other \nnon alphanumeric characters (as some language dictionaries have):\n\nbook/s1s2s3s4\t# 4 affixes for FM_LONG\n\n-- \nArtur", "msg_date": "Mon, 28 Oct 2019 11:59:01 +0900", "msg_from": "Arthur Zakirov <zaartur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TEXT SEARCH DICTIONARY segfaulting on 9.6+" }, { "msg_contents": "Arthur Zakirov <zaartur@gmail.com> writes:\n> On 2019/10/13 10:26, Tomas Vondra wrote:\n>> So I think we need some sort of cross-check here. We certainly need to\n>> make NISortDictionary() check the affix value is within AffixData\n>> bounds, and error out when the index is non-sensical (maybe negative\n>> and/or exceeding nAffixData).\n\n> I agree, I attached the patch which do this. I also added couple \n> asserts, tests and fixed condition in getAffixFlagSet():\n\n> -\t\tif (curaffix > 0 && curaffix <= Conf->nAffixData)\n> +\t\tif (curaffix > 0 && curaffix < Conf->nAffixData)\n\nLooks reasonable to me, and we need to get something done before\nthe upcoming releases, so I pushed this. Perhaps there's more\nthat could be done later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Nov 2019 16:48:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATE TEXT SEARCH DICTIONARY segfaulting on 9.6+" }, { "msg_contents": "On Sun, Nov 3, 2019 at 5:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Arthur Zakirov <zaartur@gmail.com> writes:\n> > On 2019/10/13 10:26, Tomas Vondra wrote:\n> >> So I think we need some sort of cross-check here. 
We certainly need to\n> >> make NISortDictionary() check the affix value is within AffixData\n> >> bounds, and error out when the index is non-sensical (maybe negative\n> >> and/or exceeding nAffixData).\n>\n> > I agree, I attached the patch which do this. I also added couple\n> > asserts, tests and fixed condition in getAffixFlagSet():\n>\n> > - if (curaffix > 0 && curaffix <= Conf->nAffixData)\n> > + if (curaffix > 0 && curaffix < Conf->nAffixData)\n>\n> Looks reasonable to me, and we need to get something done before\n> the upcoming releases, so I pushed this. Perhaps there's more\n> that could be done later.\n\nGreat, thank you!\n\n-- \nArtur\n\n\n", "msg_date": "Sun, 3 Nov 2019 12:50:45 +0900", "msg_from": "Artur Zakirov <zaartur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE TEXT SEARCH DICTIONARY segfaulting on 9.6+" } ]
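The fix discussed in the thread above boils down to validating the numeric alias parsed out of the dict file before using it as an index into the AffixData array. Below is a minimal, self-contained sketch of that kind of validation. The function name, negative error codes, and standalone shape are illustrative only, not PostgreSQL's actual spell.c API: the real code reports failures through ereport() and, per the committed patch, tolerates trailing whitespace rather than rejecting it.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/*
 * Hypothetical stand-in for the alias validation discussed above.
 * "n_affix_data" plays the role of Conf->nAffixData; valid alias
 * indexes run from 1 to n_affix_data - 1.  Returns the parsed index,
 * or a negative code describing why the flag was rejected.
 */
static long
parse_affix_alias(const char *flag, long n_affix_data)
{
    char   *end;
    long    curaffix;

    errno = 0;
    curaffix = strtol(flag, &end, 10);
    if (flag == end || errno == ERANGE)
        return -1;              /* not a number at all */
    if (*end != '\0')
        return -3;              /* trailing junk, e.g. "302,301,202,303" */
    if (curaffix <= 0 || curaffix >= n_affix_data)
        return -2;              /* would index past the AffixData array */
    return curaffix;
}
```

With nAffixData at 12, as in the crash report's `p *Conf` output, an alias of 303 is rejected by the range check instead of being used to read far past the end of the array.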
[ { "msg_contents": "I ran into this while trying to trigger the previously-reported segfault. \n\nCREATE TABLE t(i) AS SELECT * FROM generate_series(1,9);\nCREATE INDEX ON t(i);\n\n[pryzbyj@database ~]$ for i in `seq 1 9`; do PGOPTIONS='-cstatement_timeout=9' psql postgres --host /tmp --port 5678 -c \"REINDEX INDEX CONCURRENTLY t_i_idx\" ; done\nERROR: canceling statement due to statement timeout\nERROR: lock ShareUpdateExclusiveLock on object 14185/47287/0 is already held\n[...]\n\nVariations on this seem to leave the locks table (?) or something else in a\nReal Bad state, such that I cannot truncate the table or drop it; or at least\ncommands are unreasonably delayed for minutes, on this otherwise-empty test\ncluster.\n\nJustin\n\n\n", "msg_date": "Sat, 12 Oct 2019 21:51:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "v12.0: reindex CONCURRENTLY: lock ShareUpdateExclusiveLock on object\n 14185/39327/0 is already held" }, { "msg_contents": "On Sat, Oct 12, 2019 at 09:51:45PM -0500, Justin Pryzby wrote:\n> Variations on this seem to leave the locks table (?) 
or something else in a\n> Real Bad state, such that I cannot truncate the table or drop it; or at least\n> commands are unreasonably delayed for minutes, on this otherwise-empty test\n> cluster.\n\nI got an assertion failure on that:\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at\n../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f417a283535 in __GI_abort () at abort.c:79\n#2 0x0000564c351f0f4f in ExceptionalCondition\n(conditionName=0x564c353d0ac8\n\"SHMQueueEmpty(&(MyProc->myProcLocks[i]))\", errorType=0x564c353d09de\n\"FailedAssertion\",\nfileName=0x564c353d09d7 \"proc.c\", lineNumber=832) at assert.c:54\n#3 0x0000564c3504debe in ProcKill (code=0, arg=0) at proc.c:832\n#4 0x0000564c3503430e in shmem_exit (code=0) at ipc.c:272\n#5 0x0000564c3503413d in proc_exit_prepare (code=0) at ipc.c:194\n#6 0x0000564c3503409c in proc_exit (code=0) at ipc.c:107\n#7 0x0000564c3506a629 in PostgresMain (argc=1,\nargv=0x564c35c12ae0, dbname=0x564c35c129d0 \"ioltas\",\nusername=0x564c35c129b0 \"ioltas\") at postgres.c:4464\n#8 0x0000564c34fb94ed in BackendRun (port=0x564c35c0c6b0) at\npostmaster.c:4465\n#9 0x0000564c34fb8c59 in BackendStartup (port=0x564c35c0c6b0) at\npostmaster.c:4156\n#10 0x0000564c34fb4c7f in ServerLoop () at postmaster.c:1718\n#11 0x0000564c34fb44ad in PostmasterMain (argc=3,\nargv=0x564c35bdefd0) at postmaster.c:1391\n#12 0x0000564c34ec0d3d in main (argc=3, argv=0x564c35bdefd0) at main.c:210\n\nThis means that all the locks held have not actually been released\nwhen the timeout has kicked in. 
Not sure that this is only an issue\nrelated to REINDEX CONCURRENTLY, but if that's the case then we are\nmissing a cleanup step.\n--\nMichael", "msg_date": "Sun, 13 Oct 2019 18:21:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: reindex CONCURRENTLY: lock ShareUpdateExclusiveLock on\n object 14185/39327/0 is already held" }, { "msg_contents": "On a badly-overloaded VM, we hit the previously-reported segfault in progress\nreporting. This left around some *ccold indices. I tried to drop them but:\n\nsentinel=# DROP INDEX child.alarms_null_alarm_id_idx1_ccold; -- child.alarms_null_alarm_time_idx_ccold; -- alarms_null_alarm_id_idx_ccold;\nERROR: could not find tuple for parent of relation 41351896\n\nThose are children of relkind=I index on relkind=p table.\n\npostgres=# CREATE TABLE t(i int)PARTITION BY RANGE(i);\npostgres=# CREATE TABLE t1 PARTITION OF t FOR VALUES FROM (1)TO(100);\npostgres=# INSERT INTO t1 SELECT 1 FROM generate_series(1,99999);\npostgres=# CREATE INDEX ON t(i);\n\npostgres=# begin; SELECT * FROM t; -- DO THIS IN ANOTHER SESSION\n\npostgres=# REINDEX INDEX CONCURRENTLY t1_i_idx; -- cancel this one\n^CCancel request sent\nERROR: canceling statement due to user request\n\npostgres=# \\d t1\n...\n \"t1_i_idx\" btree (i)\n \"t1_i_idx_ccold\" btree (i) INVALID\n\npostgres=# SELECT inhrelid::regclass FROM pg_inherits WHERE inhparent='t_i_idx'::regclass;\ninhrelid\nt1_i_idx\n(1 row)\n\nNot only can't I DROP the _ccold indexes, but also dropping the table doesn't\ncause them to be dropped, and then I can't even slash dee them anymore:\n\njtp=# DROP INDEX t1_i_idx_ccold;\nERROR: could not find tuple for parent of relation 290818869\n\njtp=# DROP TABLE t; -- does not fail, but ..\n\njtp=# \\d t1_i_idx_ccold\nERROR: cache lookup failed for relation 290818865\n\njtp=# SELECT indrelid::regclass, * FROM pg_index WHERE indexrelid='t1_i_idx_ccold'::regclass;\nindrelid | 290818865\nindexrelid | 
290818869\nindrelid | 290818865\n[...]\n\nJustin\n\n\n", "msg_date": "Tue, 15 Oct 2019 11:40:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "v12.0: interrupt reindex CONCURRENTLY: ccold: ERROR: could not find\n tuple for parent of relation ..." }, { "msg_contents": "Checking if anybody is working on either of these\nhttps://www.postgresql.org/message-id/20191013025145.GC4475%40telsasoft.com\nhttps://www.postgresql.org/message-id/20191015164047.GA22729%40telsasoft.com\n\nOn Sat, Oct 12, 2019 at 09:51:45PM -0500, Justin Pryzby wrote:\n> I ran into this while trying to trigger the previously-reported segfault. \n> \n> CREATE TABLE t(i) AS SELECT * FROM generate_series(1,9);\n> CREATE INDEX ON t(i);\n> \n> [pryzbyj@database ~]$ for i in `seq 1 9`; do PGOPTIONS='-cstatement_timeout=9' psql postgres --host /tmp --port 5678 -c \"REINDEX INDEX CONCURRENTLY t_i_idx\" ; done\n> ERROR: canceling statement due to statement timeout\n> ERROR: lock ShareUpdateExclusiveLock on object 14185/47287/0 is already held\n> [...]\n> \n> Variations on this seem to leave the locks table (?) or something else in a\n> Real Bad state, such that I cannot truncate the table or drop it; or at least\n> commands are unreasonably delayed for minutes, on this otherwise-empty test\n> cluster.\n\nOn Tue, Oct 15, 2019 at 11:40:47AM -0500, Justin Pryzby wrote:\n> On a badly-overloaded VM, we hit the previously-reported segfault in progress\n> reporting. This left around some *ccold indices. 
I tried to drop them but:\n> \n> sentinel=# DROP INDEX child.alarms_null_alarm_id_idx1_ccold; -- child.alarms_null_alarm_time_idx_ccold; -- alarms_null_alarm_id_idx_ccold;\n> ERROR: could not find tuple for parent of relation 41351896\n> \n> Those are children of relkind=I index on relkind=p table.\n> \n> postgres=# CREATE TABLE t(i int)PARTITION BY RANGE(i);\n> postgres=# CREATE TABLE t1 PARTITION OF t FOR VALUES FROM (1)TO(100);\n> postgres=# INSERT INTO t1 SELECT 1 FROM generate_series(1,99999);\n> postgres=# CREATE INDEX ON t(i);\n> \n> postgres=# begin; SELECT * FROM t; -- DO THIS IN ANOTHER SESSION\n> \n> postgres=# REINDEX INDEX CONCURRENTLY t1_i_idx; -- cancel this one\n> ^CCancel request sent\n> ERROR: canceling statement due to user request\n> \n> postgres=# \\d t1\n> ...\n> \"t1_i_idx\" btree (i)\n> \"t1_i_idx_ccold\" btree (i) INVALID\n> \n> postgres=# SELECT inhrelid::regclass FROM pg_inherits WHERE inhparent='t_i_idx'::regclass;\n> inhrelid\n> t1_i_idx\n> (1 row)\n> \n> Not only can't I DROP the _ccold indexes, but also dropping the table doesn't\n> cause them to be dropped, and then I can't even slash dee them anymore:\n> \n> jtp=# DROP INDEX t1_i_idx_ccold;\n> ERROR: could not find tuple for parent of relation 290818869\n> \n> jtp=# DROP TABLE t; -- does not fail, but ..\n> \n> jtp=# \\d t1_i_idx_ccold\n> ERROR: cache lookup failed for relation 290818865\n> \n> jtp=# SELECT indrelid::regclass, * FROM pg_index WHERE indexrelid='t1_i_idx_ccold'::regclass;\n> indrelid | 290818865\n> indexrelid | 290818869\n> indrelid | 290818865\n> [...]\n\n\n", "msg_date": "Fri, 18 Oct 2019 13:26:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0: reindex CONCURRENTLY: lock ShareUpdateExclusiveLock on\n object 14185/39327/0 is already held" }, { "msg_contents": "On Fri, Oct 18, 2019 at 01:26:27PM -0500, Justin Pryzby wrote:\n> Checking if anybody is working on either of these\n> 
https://www.postgresql.org/message-id/20191013025145.GC4475%40telsasoft.com\n> https://www.postgresql.org/message-id/20191015164047.GA22729%40telsasoft.com\n\nFWIW, I have spent an hour or two poking at this issue the last couple\nof days so I am not ignoring both, not as much as I would have liked\nbut well... My initial lookup leads me to think that something is\ngoing wrong with the cleanup of the session-level lock on the parent\ntable taken in the first transaction doing the REINDEX CONCURRENTLY.\n--\nMichael", "msg_date": "Sat, 19 Oct 2019 11:41:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: reindex CONCURRENTLY: lock ShareUpdateExclusiveLock on\n object 14185/39327/0 is already held" }, { "msg_contents": "On Sat, Oct 19, 2019 at 11:41:06AM +0900, Michael Paquier wrote:\n> FWIW, I have spent an hour or two poking at this issue the last couple\n> of days so I am not ignoring both, not as much as I would have liked\n> but well... My initial lookup leads me to think that something is\n> going wrong with the cleanup of the session-level lock on the parent\n> table taken in the first transaction doing the REINDEX CONCURRENTLY.\n\nI can confirm that this is an issue related to session locks which are\nnot cleaned up when in an out-of-transaction state, state that can be\nreached between a transaction commit or start while holding at least\none session lock within one single command of VACUUM, CIC or REINDEX\nCONCURRENTLY. The failure is actually pretty easy to reproduce if you\nadd an elog(ERROR) after a CommitTransactionCommand() call and then\nshut down the connection. I am starting a new thread about that. 
The\nproblem is larger than it looks, and exists for a long time.\n--\nMichael", "msg_date": "Wed, 23 Oct 2019 19:18:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: reindex CONCURRENTLY: lock ShareUpdateExclusiveLock on\n object 14185/39327/0 is already held" }, { "msg_contents": "On Wed, Oct 23, 2019 at 07:18:33PM +0900, Michael Paquier wrote:\n> I can confirm that this is an issue related to session locks which are\n> not cleaned up when in an out-of-transaction state, state that can be\n> reached between a transaction commit or start while holding at least\n> one session lock within one single command of VACUUM, CIC or REINDEX\n> CONCURRENTLY.\n\nPlease let me back-pedal a bit on this one after sleeping on it.\nActually, if you look at CIC and VACUUM, those code paths are much\nmore careful regarding the position of CHECK_FOR_INTERRUPTS() than\nREINDEX CONCURRENTLY is in the fact that they happen only within a\ntransaction context. In the case of REINDEX CONCURRENTLY and the\nfailure reported here, the current code is careless: it depends of\ncourse on the timing of statement_timeout, but session locks would\nremain behind when hitting an interruption at the beginning of phase 2\nor 3 in indexcmds.c. So the answer is simple: by moving the interrupt\nchecks within a transaction context, the problem gets solved. This\nalso fixes a second issue as the original code would cause xact.c to\ngenerate some useless warnings.\n\nPlease see the attached. Justin, does it fix your problems regarding\nthe locks? For me it does.\n\n> The failure is actually pretty easy to reproduce if you\n> add an elog(ERROR) after a CommitTransactionCommand() call and then\n> shut down the connection. I am starting a new thread about that. 
The\n> problem is larger than it looks, and exists for a long time.\n\nI am still wondering if we could put more safeguards in this area\nthough...\n--\nMichael", "msg_date": "Thu, 24 Oct 2019 11:42:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: reindex CONCURRENTLY: lock ShareUpdateExclusiveLock on\n object 14185/39327/0 is already held" }, { "msg_contents": "On Thu, Oct 24, 2019 at 11:42:04AM +0900, Michael Paquier wrote:\n> Please see the attached. Justin, does it fix your problems regarding\n> the locks?\n\nConfirmed.\n\nThanks,\nJustin\n\n\n", "msg_date": "Wed, 23 Oct 2019 22:08:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: v12.0: reindex CONCURRENTLY: lock ShareUpdateExclusiveLock on\n object 14185/39327/0 is already held" }, { "msg_contents": "On Tue, Oct 15, 2019 at 11:40:47AM -0500, Justin Pryzby wrote:\n> Not only can't I DROP the _ccold indexes, but also dropping the table doesn't\n> cause them to be dropped, and then I can't even slash dee them anymore:\n\nYes, I can confirm the report. In the case of this scenario the\nreindex is waiting for the first transaction to finish before step 5,\nthe cancellation causing the follow-up process to not be done\n(set_dead & the next ones). So at this stage the swap has actually\nhappened. I am still analyzing the report in depths, but you don't\nhave any problems with a plain index when interrupting at this stage,\nand the old index can be cleanly dropped with the new one present, so\nmy first thoughts are that we are just missing some more dependency\ncleanup at the swap phase when dealing with a partition index.\n--\nMichael", "msg_date": "Thu, 24 Oct 2019 13:59:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: interrupt reindex CONCURRENTLY: ccold: ERROR: could not\n find tuple for parent of relation ..." 
}, { "msg_contents": "On Wed, Oct 23, 2019 at 10:08:21PM -0500, Justin Pryzby wrote:\n> On Thu, Oct 24, 2019 at 11:42:04AM +0900, Michael Paquier wrote:\n>> Please see the attached. Justin, does it fix your problems regarding\n>> the locks?\n> \n> Confirmed.\n\nOkay, committed and back-patched. I have checked manually all the\ninterruptions for plain indexes and it is possible to clean up the\ninvalid indexes properly (old or new depending on the phase).\nPartition indexes have other issues as you reported, but let's see\nabout that on the other thread. \n--\nMichael", "msg_date": "Fri, 25 Oct 2019 10:21:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: reindex CONCURRENTLY: lock ShareUpdateExclusiveLock on\n object 14185/39327/0 is already held" }, { "msg_contents": "On Thu, Oct 24, 2019 at 01:59:29PM +0900, Michael Paquier wrote:\n> Yes, I can confirm the report. In the case of this scenario the\n> reindex is waiting for the first transaction to finish before step 5,\n> the cancellation causing the follow-up process to not be done\n> (set_dead & the next ones). So at this stage the swap has actually\n> happened. I am still analyzing the report in depths, but you don't\n> have any problems with a plain index when interrupting at this stage,\n> and the old index can be cleanly dropped with the new one present, so\n> my first thoughts are that we are just missing some more dependency\n> cleanup at the swap phase when dealing with a partition index.\n\nOkay, I have found this one. The issue is that at the swap phase\npg_class.relispartition of the new index is updated to use the value\nof the old index (true for a partition index), however relispartition\nneeds to be updated as well for the old index or when trying to\ninteract with it we get failures as the old index is part of no\ninheritance trees. 
We could just use false, as the index created\nconcurrently is not attached to a partition with its inheritance links\nupdated until the swap phase, but it feels more natural to just swap\nrelispartition for the old and the new index, as per the attached.\n\nThis also brings up the point that you could just update pg_class to fix\nthings if you have a broken cluster.\n\nIn short, the attached fixes the issue for me, and that's the last bug\nI know of in what has been reported.\n--\nMichael", "msg_date": "Mon, 28 Oct 2019 16:14:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: interrupt reindex CONCURRENTLY: ccold: ERROR: could not\n find tuple for parent of relation ..." }, { "msg_contents": "On Mon, Oct 28, 2019 at 04:14:41PM +0900, Michael Paquier wrote:\n> This also brings up the point that you could just update pg_class to fix\n> things if you have a broken cluster.\n> \n> In short, the attached fixes the issue for me, and that's the last bug\n> I know of in what has been reported.\n\nThis one is now done. Justin has also confirmed to me offline that it\nfixed his problems.\n--\nMichael", "msg_date": "Tue, 29 Oct 2019 11:20:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12.0: interrupt reindex CONCURRENTLY: ccold: ERROR: could not\n find tuple for parent of relation ..." } ]
[ { "msg_contents": "Hi,\n\n[1] made me look at tuplesort's test coverage at\nhttps://coverage.postgresql.org/src/backend/utils/sort/tuplesort.c.gcov.html\nWe don't have coverage for quite a number of things:\n- cluster for expression indexes (line 935)\n- sorts exceeding INT_MAX / 2 memory (line 1337), but that seems hard to\n test realistically\n- aborted abbreviated keys (lines 1522, 1608, 1774, 3620, 3739, 3867, 4266)\n- in memory backwards scans (lines 1936, 3042)\n- *any* coverage for TSS_SORTEDONTAPE (line 1964)\n- disk sort skiptuples (line 2325)\n- mergeruns without abbrev key (line 2582)\n- disk sorts with more than one run (lines 2707, 2789)\n- any disk based tuplesort_begin_heap() (lines 3649, 3676)\n- Seems copytup_index currently is essentially dead, because\n tuplesort_putindextuplevalues() doesn't use COPYTUP (line 4142)\n- any disk based tuplesort_begin_datum (lines 4282, 4323)\n\nI'm pretty unhappy that tuplesort has been whacked around pretty heavily\nin the last few years, while *reducing* effective test coverage\nnoticeably, rather than increasing it. 
There's pretty substantial and\nnontrivial areas without any tests - do we actually have any\nconfidence that they work?\n\nThe largest culprits for that seem to be abbreviated keys, the tape\nlogic overhaul, and the increase of work mem.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 13 Oct 2019 07:41:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "tuplesort test coverage" }, { "msg_contents": "On Sun, Oct 13, 2019 at 3:41 PM Andres Freund <andres@anarazel.de> wrote:\n> - cluster for expression indexes (line 935)\n\nWe've never had coverage of this, but perhaps that can be added now.\n\n> - sorts exceeding INT_MAX / 2 memory (line 1337), but that seems hard to\n> test realistically\n\nI don't think that that can be tested, realistically.\n\n> - aborted abbreviated keys (lines 1522, 1608, 1774, 3620, 3739, 3867, 4266)\n\nAlso hard to test -- there was a bug here when abbreviated keys first\nwent in -- that was detected by amcheck.\n\nAll of the places where we abort are essentially the same, though.\n\n> - in memory backwards scans (lines 1936, 3042)\n> - *any* coverage for TSS_SORTEDONTAPE (line 1964)\n\nThat used to exist, but it went away when we killed replacement selection sort.\n\n> - disk sort skiptuples (line 2325)\n\nCouldn't hurt.\n\n> - mergeruns without abbrev key (line 2582)\n\nmergeruns() doesn't use abbreviated keys -- this code disables their\nuse in the standard way.\n\n> - disk sorts with more than one run (lines 2707, 2789)\n> - any disk based tuplesort_begin_heap() (lines 3649, 3676)\n\nI had to convince Tom to get the coverage of external sorts we have\nnow. Apparently there are buildfarm animals that are very sensitive to\nthat cost, that could have substantially increased test runtimes were\nwe to do more. 
Perhaps this could be revisited.\n\n> - Seems copytup_index currently is essentially dead, because\n> tuplesort_putindextuplevalues() doesn't use COPYTUP (line 4142)\n\nYeah, that looks like dead code -- it should just be a stub with a\n\"can't happen\" error.\n\n> I'm pretty unhappy that tuplesort has been whacked around pretty heavily\n> in the last few years, while *reducing* effective test coverage\n> noticeably, rather than increasing it.\n\nI don't think that that's true, on balance. There are only 1,000 extra\nlines of code in tuplesort.c in master compared to 9.4, even though we\nadded parallel sorts and abbreviated keys, two huge enhancements. In\nmany ways, tuplesort is now simpler than ever.\n\n> There's pretty substantial and\n> nontrivial areas without any tests - do we have actually have any\n> confidence that they work?\n\nEverything that you're talking about has existed since v11 came out a\nyear ago, and most of it is a year or two older than that. So yeah,\nI'm pretty confident that it works.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Oct 2019 13:05:32 +0100", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: tuplesort test coverage" }, { "msg_contents": "Hi,\n\nOn 2019-10-15 13:05:32 +0100, Peter Geoghegan wrote:\n> > - aborted abbreviated keys (lines 1522, 1608, 1774, 3620, 3739, 3867, 4266)\n> \n> Also hard to test -- there was a bug here when abbreviated keys first\n> went in -- that was detected by amcheck.\n> \n> All of the places where we abort are essentially the same, though.\n\nWhy is it that hard? Seems fairly easy to create cases that reliably\nabort.\n\nI really don't think it's ok to have as many abbrev abort related paths\nwithout any coverage - the relevant code isn't that trivial. And\nsomething like amcheck really doesn't strike me as sufficient. For one,\nit doesn't provide any coverage either. 
For another, plenty sorts don't\nend up in a form that amcheck sees.\n\nTests aren't just there to verify that the current behaviour isn't\nbroken, they're also there to allow to develop with some confidence. And\nI don't think tuplesort as is really allows that, and e.g. abbreviated\nkeys made that substantially worse. That happens, but I think it'd be\ngood if you could help improving the situation.\n\nE.g.\nSELECT * FROM (SELECT ('00000000-0000-0000-0000-'||to_char(g.i, '000000000000FM'))::uuid uuid FROM generate_series(15000, 0, -1) g(i)) d ORDER BY uuid\nreliably triggers abbreviated keys, and it looks to me like that should\nbe portable. With a few tweaks it'd be fairly easy to use that to\nprovide some OK coverage for most of the abbrev key cases.\n\n\n> > - in memory backwards scans (lines 1936, 3042) - *any* coverage for\n> > TSS_SORTEDONTAPE (line 1964)\n> \n> That used to exist, but it went away when we killed replacement selection sort.\n\nYes, that's kind of my point? Either that patch reduced coverage, or it\ncreated dead code. Neither is good.\n\n\n\n> > - mergeruns without abbrev key (line 2582)\n> \n> mergeruns() doesn't use abbreviated keys -- this code disables their\n> use in the standard way.\n\nWell, then reformulate the point that we should have coverage for\nmergeruns() when initially abbreviated keys were set up.\n\n\n> > - disk sorts with more than one run (lines 2707, 2789)\n> > - any disk based tuplesort_begin_heap() (lines 3649, 3676)\n> \n> I had to convince Tom to get the coverage of external sorts we have\n> now. Apparently there are buildfarm animals that are very sensitive to\n> that cost, that could have substantially increased test runtimes were\n> we to do more. Perhaps this could be revisited.\n\nHm. I'm a bit confused. Isn't all that's required to set a tiny amount\nof work_mem? Then it's easy to trigger many passes without a lot of IO?\n\n\n\n> > I'm pretty unhappy that tuplesort has been whacked around pretty heavily\n> > in the last few years, while *reducing* effective test coverage\n> > noticeably, rather than increasing it.\n> \n> I don't think that that's true, on balance. There are only 1,000 extra\n> lines of code in tuplesort.c in master compared to 9.4, even though we\n> added parallel sorts and abbreviated keys, two huge enhancements. In\n> many ways, tuplesort is now simpler than ever.\n\nI'm not saying that tuplesort has gotten worse or anything. Just that\nthere's been too much development without adding tests.\n\n\n> > There's pretty substantial and\n> > nontrivial areas without any tests - do we have actually have any\n> > confidence that they work?\n> \n> Everything that you're talking about has existed since v11 came out a\n> year ago, and most of it is a year or two older than that. So yeah,\n> I'm pretty confident that it works.\n\nThat may be true, but there's also basically no way to discover bugs\nexcept manual testing, and users encountering the bugs. That's not good\nenough.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Oct 2019 11:10:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: tuplesort test coverage" }, { "msg_contents": "Hi,\n\nOn 2019-10-24 11:10:34 -0700, Andres Freund wrote:\n> On 2019-10-15 13:05:32 +0100, Peter Geoghegan wrote:\n> > > - aborted abbreviated keys (lines 1522, 1608, 1774, 3620, 3739, 3867, 4266)\n> > \n> > Also hard to test -- there was a bug here when abbreviated keys first\n> > went in -- that was detected by amcheck.\n> > \n> > All of the places where we abort are essentially the same, though.\n> \n> Why is it that hard? 
Seems fairly easy to create cases that reliably\n> abort.\n> \n> I really don't think it's ok to have as many abbrev abort related paths\n> without any coverage - the relevant code isn't that trivial. And\n> something like amcheck really doesn't strike me as sufficient. For one,\n> it doesn't provide any coverage either. For another, plenty sorts don't\n> end up in a form that amcheck sees.\n> \n> Tests aren't just there to verify that the current behaviour isn't\n> broken, they're also there to allow to develop with some confidence. And\n> I don't think tuplesort as is really allows that, and e.g. abbreviated\n> keys made that substantially worse. That happens, but I think it'd be\n> good if you could help improving the situation.\n> \n> E.g.\n> SELECT * FROM (SELECT ('00000000-0000-0000-0000-'||to_char(g.i, '000000000000FM'))::uuid uuid FROM generate_series(15000, 0, -1) g(i)) d ORDER BY uuid\n> reliably triggers abbreviated keys, and it looks to me like that should\n> be portable. With a few tweaks it'd be fairly easy to use that to\n> provide some OK coverage for most the abbrev key cases.\n\nHere's a first stab at getting the coverage of tuplesort.c to a\nsatisfying level. There's still bits uncovered, but that's largely\neither a) trace_sort related b) hopefully unreachable stuff c) explain\nrelated. 
The largest actually missing thing is a disk-based\nmark/restore, which probably ought be covered.\n\nI think the test time of this would still be OK, but if not we could\nalso work a bit more on that angle.\n\nI'm pretty sure there's some minor copy & paste mistakes in the test,\nbut I want to get this out there and get some reactions before investing\nfurther time.\n\nPeter, Tom?\n\n- Andres", "msg_date": "Thu, 24 Oct 2019 14:10:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: tuplesort test coverage" }, { "msg_contents": "On Thu, Oct 24, 2019 at 7:10 PM Andres Freund <andres@anarazel.de> wrote:\n> I really don't think it's ok to have as many abbrev abort related paths\n> without any coverage - the relevant code isn't that trivial. And\n> something like amcheck really doesn't strike me as sufficient. For one,\n> it doesn't provide any coverage either. For another, plenty sorts don't\n> end up in a form that amcheck sees.\n\nI agree.\n\n> Tests aren't just there to verify that the current behaviour isn't\n> broken, they're also there to allow to develop with some confidence. And\n> I don't think tuplesort as is really allows that, and e.g. abbreviated\n> keys made that substantially worse. That happens, but I think it'd be\n> good if you could help improving the situation.\n\nI would like to improve this. I am mostly just pointing out that there\nhas been resistance to this historically. I am in favor of\nmechanically increasing test coverage of tuplesort.c along the lines\nyou describe. I'm just a bit concerned that Tom or others may see it\ndifferently.\n\n> E.g.\n> SELECT * FROM (SELECT ('00000000-0000-0000-0000-'||to_char(g.i, '000000000000FM'))::uuid uuid FROM generate_series(15000, 0, -1) g(i)) d ORDER BY uuid\n> reliably triggers abbreviated keys, and it looks to me like that should\n> be portable. 
With a few tweaks it'd be fairly easy to use that to\n> provide some OK coverage for most the abbrev key cases.\n\nI agree.\n\n> Yes, that's kind of my point? Either that patch reduced coverage, or it\n> created dead code. Neither is good.\n\nI agree.\n\n> > mergeruns() doesn't use abbreviated keys -- this code disables their\n> > use in the standard way.\n>\n> Well, then reformulate the point that we should have coverage for\n> mergeruns() when initially abbreviated keys were set up.\n\nThat doesn't seem essentially, but I'm okay with it.\n\n> > I had to convince Tom to get the coverage of external sorts we have\n> > now. Apparently there are buildfarm animals that are very sensitive to\n> > that cost, that could have substantially increased test runtimes were\n> > we to do more. Perhaps this could be revisited.\n>\n> Hm. I'm a bit confused. Isn't all that's required to set a tiny amount\n> of work_mem? Then it's easy to trigger many passes without a lot of IO?\n\nYes, but Tom felt that this might not be good enough when this was\ndiscussed in 2016. However, I seem to recall that he was pleasantly\nsurprised at how small the overhead turned out to be.\n\nIt's hard for me to test how much overhead this will have on a machine\nwith horribly slow I/O. Though I just bought a new Raspberry Pi, and\ncould test on that when I get back home from my trip to Europe -- it\nuses an SD card, which is pretty slow.\n\n> I'm not saying that tuplesort has gotten worse or anything. Just that\n> there's been too much development without adding tests.\n\nI agree.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 25 Oct 2019 12:30:26 +0100", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: tuplesort test coverage" }, { "msg_contents": "On Thu, Oct 24, 2019 at 10:10 PM Andres Freund <andres@anarazel.de> wrote:\n> Here's a first stab at getting the coverage of tuplesort.c to a\n> satisfying level. 
There's still bits uncovered, but that's largely\n> either a) trace_sort related b) hopefully unreachable stuff c) explain\n> related. The largest actually missing thing is a disk-based\n> mark/restore, which probably ought be covered.\n\nYeah. It looks like function coverage of logtape.c will be 100% once\nyou have coverage of mark and restore.\n\n> I think the the test time of this would still be OK, but if not we could\n> also work a bit more on that angle.\n\nThat's hard for me to test right now, but offhand this general\napproach looks good to me. I am pretty sure it's portable.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 25 Oct 2019 12:37:38 +0100", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: tuplesort test coverage" }, { "msg_contents": "Hi,\n\nOn 2019-10-25 12:37:38 +0100, Peter Geoghegan wrote:\n> On Thu, Oct 24, 2019 at 10:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > Here's a first stab at getting the coverage of tuplesort.c to a\n> > satisfying level. There's still bits uncovered, but that's largely\n> > either a) trace_sort related b) hopefully unreachable stuff c) explain\n> > related. The largest actually missing thing is a disk-based\n> > mark/restore, which probably ought be covered.\n> \n> Yeah. It looks like function coverage of logtape.c will be 100% once\n> you have coverage of mark and restore.\n\nYea, it's definitely better after.\n\n\n> > I think the the test time of this would still be OK, but if not we could\n> > also work a bit more on that angle.\n> \n> That's hard for me to test right now, but offhand this general\n> approach looks good to me. I am pretty sure it's portable.\n\nI pushed this now. We'll see what the slower buildfarm animals say. 
I'll\ntry to see how long they took in a few days.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 13 Nov 2019 16:25:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: tuplesort test coverage" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I pushed this now. We'll see what the slower buildfarm animals say. I'll\n> try to see how long they took in a few days.\n\nfriarbird (a CLOBBER_CACHE_ALWAYS animal) just showed a failure in this:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=friarbird&dt=2019-12-12%2006%3A20%3A02\n\n================== pgsql.build/src/test/regress/regression.diffs ===================\ndiff -U3 /pgbuild/root/HEAD/pgsql.build/../pgsql/src/test/regress/expected/tuplesort.out /pgbuild/root/HEAD/pgsql.build/src/test/regress/results/tuplesort.out\n--- /pgbuild/root/HEAD/pgsql.build/../pgsql/src/test/regress/expected/tuplesort.out\t2019-11-13 19:54:11.000000000 -0500\n+++ /pgbuild/root/HEAD/pgsql.build/src/test/regress/results/tuplesort.out\t2019-12-12 08:25:23.000000000 -0500\n@@ -625,13 +625,13 @@\n Group Key: a.col12\n Filter: (count(*) > 1)\n -> Merge Join\n- Merge Cond: (a.col12 = b.col12)\n- -> Sort\n- Sort Key: a.col12 DESC\n- -> Seq Scan on test_mark_restore a\n+ Merge Cond: (b.col12 = a.col12)\n -> Sort\n Sort Key: b.col12 DESC\n -> Seq Scan on test_mark_restore b\n+ -> Sort\n+ Sort Key: a.col12 DESC\n+ -> Seq Scan on test_mark_restore a\n (14 rows)\n \n :qry;\n\nSince a and b are exactly the same table, in principle it's a matter of\nchance which one the planner will put on the outside of the join.\nI think what happened here is that the test ran long enough for\nautovacuum/autoanalyze to come along and scan the table, changing its\nstats in between where the planner picked up the stats for a and those\nfor b, and we ended up making the opposite join order choice.\n\nI considered fixing this by adding some restriction clause on b so\nthat the join 
order choice isn't such a coin flip. But it's not\nclear that the problem couldn't recur anyway --- the table stats\nwould change significantly on auto-analyze, since the test script\nisn't bothering to create any stats itself.\n\nWhat seems like a simpler and more reliable fix is to make\ntest_mark_restore a temp table, thus keeping autovac away from it.\nIs there a reason in terms of the test's goals not to do that?\n\nAlso ... why in the world does the script drop its tables at the end\nwith IF EXISTS? They'd better exist at that point. I object\nto the DROP IF EXISTS up at the top, too. The regression tests\ndo not need to be designed to deal with an unpredictable start state,\nand coding them to do so can have no effect other than possibly\nmasking problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Dec 2019 09:27:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuplesort test coverage" }, { "msg_contents": "Hi,\n\nOn 2019-12-12 09:27:04 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I pushed this now. We'll see what the slower buildfarm animals say. 
I'll\n> > try to see how long they took in a few days.\n> \n> friarbird (a CLOBBER_CACHE_ALWAYS animal) just showed a failure in this:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=friarbird&dt=2019-12-12%2006%3A20%3A02\n> \n> ================== pgsql.build/src/test/regress/regression.diffs ===================\n> diff -U3 /pgbuild/root/HEAD/pgsql.build/../pgsql/src/test/regress/expected/tuplesort.out /pgbuild/root/HEAD/pgsql.build/src/test/regress/results/tuplesort.out\n> --- /pgbuild/root/HEAD/pgsql.build/../pgsql/src/test/regress/expected/tuplesort.out\t2019-11-13 19:54:11.000000000 -0500\n> +++ /pgbuild/root/HEAD/pgsql.build/src/test/regress/results/tuplesort.out\t2019-12-12 08:25:23.000000000 -0500\n> @@ -625,13 +625,13 @@\n> Group Key: a.col12\n> Filter: (count(*) > 1)\n> -> Merge Join\n> - Merge Cond: (a.col12 = b.col12)\n> - -> Sort\n> - Sort Key: a.col12 DESC\n> - -> Seq Scan on test_mark_restore a\n> + Merge Cond: (b.col12 = a.col12)\n> -> Sort\n> Sort Key: b.col12 DESC\n> -> Seq Scan on test_mark_restore b\n> + -> Sort\n> + Sort Key: a.col12 DESC\n> + -> Seq Scan on test_mark_restore a\n> (14 rows)\n> \n> :qry;\n> \n> Since a and b are exactly the same table, in principle it's a matter of\n> chance which one the planner will put on the outside of the join.\n\nYea.\n\n\n> I think what happened here is that the test ran long enough for\n> autovacuum/autoanalyze to come along and scan the table, changing its\n> stats in between where the planner picked up the stats for a and those\n> for b, and we ended up making the opposite join order choice.\n\nSounds reasonable.\n\n\n> What seems like a simpler and more reliable fix is to make\n> test_mark_restore a temp table, thus keeping autovac away from it.\n> Is there a reason in terms of the test's goals not to do that?\n\nI can't see any reason. The sorting code shouldn't care about the source\nof tuples. 
I guess there could at some point be tests for parallel\nsorting, but that'd just use a different table.\n\n\n> Also ... why in the world does the script drop its tables at the end\n> with IF EXISTS? They'd better exist at that point. I object\n> to the DROP IF EXISTS up at the top, too. The regression tests\n> do not need to be designed to deal with an unpredictable start state,\n> and coding them to do so can have no effect other than possibly\n> masking problems.\n\nWell, it makes it a heck of a lot easier to run tests in isolation while\nevolving them. While I personally think it's good to leave cleanup for\npartial states in for cases where it was helpful during development, I\nalso don't care about it strongly.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Dec 2019 15:25:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: tuplesort test coverage" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-12-12 09:27:04 -0500, Tom Lane wrote:\n>> What seems like a simpler and more reliable fix is to make\n>> test_mark_restore a temp table, thus keeping autovac away from it.\n>> Is there a reason in terms of the test's goals not to do that?\n\n> I can't see any reason. The sorting code shouldn't care about the source\n> of tuples. I guess there could at some point be tests for parallel\n> sorting, but that'd just use a different table.\n\nOK, done that way.\n\n>> Also ... why in the world does the script drop its tables at the end\n>> with IF EXISTS? They'd better exist at that point. I object\n>> to the DROP IF EXISTS up at the top, too. The regression tests\n>> do not need to be designed to deal with an unpredictable start state,\n>> and coding them to do so can have no effect other than possibly\n>> masking problems.\n\n> Well, it makes it a heck of a lot easier to run tests in isolation while\n> evolving them. 
While I personally think it's good to leave cleanup for\n> partial states in for cases where it was helpful during development, I\n> also don't care about it strongly.\n\nAs far as that goes, making the tables temp is an even better solution.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Dec 2019 15:03:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tuplesort test coverage" } ]
[ { "msg_contents": "During the cleanup of the _MSC_VER versions (commit\n38d8dce61fff09daae0edb6bcdd42b0c7f10ebcd), I found it useful to use\n-Wundef, but that resulted in a bunch of gratuitous warnings. Here is a\npatch to fix those. Most of these are just stylistic cleanups, but the\nchange in pg_bswap.h is potentially useful to avoid misuse by\nthird-party extensions.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 13 Oct 2019 21:25:59 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Fix most -Wundef warnings" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> During the cleanup of the _MSC_VER versions (commit\n> 38d8dce61fff09daae0edb6bcdd42b0c7f10ebcd), I found it useful to use\n> -Wundef, but that resulted in a bunch of gratuitous warnings. Here is a\n> patch to fix those. Most of these are just stylistic cleanups, but the\n> change in pg_bswap.h is potentially useful to avoid misuse by\n> third-party extensions.\n\nLooks reasonable offhand.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Oct 2019 15:56:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix most -Wundef warnings" }, { "msg_contents": "\n\nOn 10/13/19 12:25 PM, Peter Eisentraut wrote:\n> diff --git a/contrib/hstore/hstore_compat.c b/contrib/hstore/hstore_compat.c\n> index 1d4e7484e4..d75e9cb23f 100644\n> --- a/contrib/hstore/hstore_compat.c\n> +++ b/contrib/hstore/hstore_compat.c\n> @@ -299,7 +299,7 @@ hstoreUpgrade(Datum orig)\n> \n> \tif (valid_new)\n> \t{\n> -#if HSTORE_IS_HSTORE_NEW\n> +#ifdef HSTORE_IS_HSTORE_NEW\n> \t\telog(WARNING, \"ambiguous hstore value resolved as hstore-new\");\n\nChecking the current sources, git history, and various older commits, I \ndid not find where HSTORE_IS_HSTORE_NEW was ever defined. 
I expect it \nwas defined at some point, but I checked back as far as 9.0 (where the \ncurrent contrib/hstore was originally committed) and did not see it. \nWhere did you find this, and can we add a code comment? This one #ifdef \nis the only line in the entire repository where this label is used, \nmaking it hard to check if changing from #if was the right decision.\n\nThe check on HSTORE_IS_HSTORE_NEW goes back at least as far as 2006, \nsuggesting it was needed for migrating from some version pre-9.0, making \nme wonder if anybody would need this in the field. Should we drop \nsupport for this? I don't have a strong reason to advocate dropping \nsupport other than that this #define appears to be undocumented.\n\nmark\n\n\n", "msg_date": "Mon, 14 Oct 2019 08:12:29 -0700", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix most -Wundef warnings" }, { "msg_contents": ">>>>> \"Mark\" == Mark Dilger <hornschnorter@gmail.com> writes:\n\n >> +#ifdef HSTORE_IS_HSTORE_NEW\n\n Mark> Checking the current sources, git history, and various older\n Mark> commits, I did not find where HSTORE_IS_HSTORE_NEW was ever\n Mark> defined.\n\nIn contrib/hstore, it never was.\n\nThe current version of contrib/hstore had a brief life as a separate\nextension module called hstore-new, which existed to backport its\nfunctionality into 8.4. The data format for hstore-new was almost\nidentical to the new contrib/hstore one (and thus different from the old\ncontrib/hstore), and changed at one point before its final release, so\nthere were four possible upgrade paths as explained in the comments.\n\nThe block comment with the most pertinent explanation seems to have\nbeen a victim of pgindent, but the relevant part is this:\n\n * [...] 
So the upshot of all this\n * is that we can treat all the edge cases as \"new\" if we're being built\n * as hstore-new, and \"old\" if we're being built as contrib/hstore.\n\nSo, HSTORE_IS_HSTORE_NEW was defined if you were building a pgxs module\ncalled \"hstore-new\" (which was distributed separately on pgfoundry but\nthe C code was the same), and not if you're building \"hstore\" (whether\nan in or out of tree build).\n\n Mark> The check on HSTORE_IS_HSTORE_NEW goes back at least as far as\n Mark> 2006, suggesting it was needed for migrating from some version\n Mark> pre-9.0, making me wonder if anybody would need this in the\n Mark> field. Should we drop support for this? I don't have a strong\n Mark> reason to advocate dropping support other than that this #define\n Mark> appears to be undocumented.\n\nThe only reason not to remove most of hstore_compat.c is that there is\nno way to know what data survives in the wild in each of the three\npossible hstore formats (8.4 contrib, pre-final hstore-new, current). I\nthink it's most unlikely that any of the pre-final hstore-new data still\nexists, but how would anyone know?\n\n(The fact that there have been exactly zero field reports of either of\nthe WARNING messages unfortunately doesn't prove much. Almost all\npossible non-current hstore values are unambiguously in one or other of\nthe possible formats, the ambiguity is only possible because the old\ncode didn't always set the varlena length to the correct size, but left\nunused space at the end.)\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Tue, 15 Oct 2019 13:23:43 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Fix most -Wundef warnings" }, { "msg_contents": "On 2019-10-13 21:25, Peter Eisentraut wrote:\n> During the cleanup of the _MSC_VER versions (commit\n> 38d8dce61fff09daae0edb6bcdd42b0c7f10ebcd), I found it useful to use\n> -Wundef, but that resulted in a bunch of gratuitous warnings. 
Here is a\n> patch to fix those. Most of these are just stylistic cleanups, but the\n> change in pg_bswap.h is potentially useful to avoid misuse by\n> third-party extensions.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 19 Oct 2019 18:50:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix most -Wundef warnings" }, { "msg_contents": "On 2019-10-14 17:12, Mark Dilger wrote:\n> The check on HSTORE_IS_HSTORE_NEW goes back at least as far as 2006, \n> suggesting it was needed for migrating from some version pre-9.0, making \n> me wonder if anybody would need this in the field. Should we drop \n> support for this? I don't have a strong reason to advocate dropping \n> support other than that this #define appears to be undocumented.\n\nPer subsequent messages in this thread, this issue is outside the scope\nof my patch, so I proceeded with my patch as I had proposed it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 19 Oct 2019 18:51:33 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix most -Wundef warnings" }, { "msg_contents": "\n\nOn 10/15/19 5:23 AM, Andrew Gierth wrote:\n>>>>>> \"Mark\" == Mark Dilger <hornschnorter@gmail.com> writes:\n> \n> >> +#ifdef HSTORE_IS_HSTORE_NEW\n> \n> Mark> Checking the current sources, git history, and various older\n> Mark> commits, I did not find where HSTORE_IS_HSTORE_NEW was ever\n> Mark> defined.\n> \n> In contrib/hstore, it never was.\n> \n> The current version of contrib/hstore had a brief life as a separate\n> extension module called hstore-new, which existed to backport its\n> functionality into 8.4. 
The data format for hstore-new was almost\n> identical to the new contrib/hstore one (and thus different from the old\n> contrib/hstore), and changed at one point before its final release, so\n> there were four possible upgrade paths as explained in the comments.\n> \n> The block comment with the most pertinent explanation seems to have\n> been a victim of pgindent, but the relevant part is this:\n> \n> * [...] So the upshot of all this\n> * is that we can treat all the edge cases as \"new\" if we're being built\n> * as hstore-new, and \"old\" if we're being built as contrib/hstore.\n> \n> So, HSTORE_IS_HSTORE_NEW was defined if you were building a pgxs module\n> called \"hstore-new\" (which was distributed separately on pgfoundry but\n> the C code was the same), and not if you're building \"hstore\" (whether\n> an in or out of tree build).\n\nI don't really dispute your claim, but it doesn't unambiguously follow \nfrom the wording of the comment. The part that tripped me up while \nreviewing Peter's patch is that he changed the preprocessor logic to use \n#ifdef rather than #if, implying that he believes HSTORE_IS_HSTORE_NEW \nwill only be defined when true, and undefined when false, rather than \nsomething like:\n\n #if OLD_STUFF\n #define HSTORE_IS_HSTORE_NEW 0\n #else\n #define HSTORE_IS_HSTORE_NEW 1\n #endif\n\nwhich is admittedly a less common coding pattern than only defining it \nwhen true, but the choice of #if rather than #ifdef in the original \nsources might have been intentional.\n\nI tried briefly to download this project from pgfoundry without success. 
\n Do you have a copy of the relevant code where you can see how this \ngets defined, and can you include it in a reply?\n\nThanks,\n\nmark\n\n\n\n\n", "msg_date": "Sun, 20 Oct 2019 20:02:18 -0700", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix most -Wundef warnings" }, { "msg_contents": ">>>>> \"Mark\" == Mark Dilger <hornschnorter@gmail.com> writes:\n\n Mark> I tried briefly to download this project from pgfoundry without\n Mark> success. Do you have a copy of the relevant code where you can\n Mark> see how this gets defined, and can you include it in a reply?\n\nI have a backup of the CVS from the pgfoundry version, but the thing is\nso obsolete that I had never bothered converting it to git; it hasn't\nbeen touched in 10 years.\n\nThe Makefile had this:\n\nPG_CPPFLAGS = -DHSTORE_IS_HSTORE_NEW\n\nThe only possible use for this code is if someone were to discover an\nold 8.4 install with an old hstore-new module in use. I think the\nchances of this are small enough not to be of much concern.\n\nI have put up a CVS->Git conversion for the benefit of software\narchaeologists only at: https://github.com/RhodiumToad/hstore-ancient\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Mon, 21 Oct 2019 05:38:06 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Fix most -Wundef warnings" } ]
[ { "msg_contents": "Attached is a v1 patch to add a Glossary to the appendix of our current\ndocumentation.\n\nI believe that our documentation needs a glossary for a few reasons:\n\n1. It's hard to ask for help if you don't know the proper terminology of\nthe problem you're having.\n\n2. Readers who are new to databases may not understand a few of the terms\nthat are used casually both in the documentation and in forums. This helps\nto make our documentation a bit more useful as a teaching tool.\n\n3. Readers whose primary language is not English may struggle to find the\ncorrect search terms, and this glossary may help them grasp that a given\nterm has a usage in databases that is different from common English usage.\n\n3b. If we are not able to find the resources to translate all of the\ndocumentation into a given language, translating the glossary page would be\na good first step.\n\n4. The glossary would be web-searchable, and draw viewers to the official\ndocumentation.\n\n5. adding link anchors to each term would make them cite-able, useful in\nforum conversations.\n\n\nA few notes about this patch:\n\n1. It's obviously incomplete. There are more terms, a lot more, to add.\n\n2. The individual definitions supplied are off-the-cuff, and should be\nthoroughly reviewed.\n\n3. The definitions as a whole should be reviewed by an actual tech writer\n(one was initially involved but had to step back due to prior commitments),\nand the definitions should be normalized in terms of voice, tone, audience,\netc.\n\n4. My understanding of DocBook is not strong. The glossary vs glosslist tag\nissue is a bit confusing to me, and I'm not sure if the glossary tag is\neven appropriate for our needs.\n\n5. I've made no effort at making each term an anchor, nor have I done any\nCSS styling at all.\n\n6. I'm not quite sure how to handle terms that have different definitions\nin different contexts. 
Should that be two glossdefs following one\nglossterm, or two separate def/term pairs?\n\nPlease review and share your thoughts.", "msg_date": "Sun, 13 Oct 2019 16:52:05 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Add A Glossary" }, { "msg_contents": "Hello Corey,\n\nMy 0.02€:\n\nOn principle, I'm fine with having a glossary, i.e. word definitions, \nwhich are expected to be rather stable in the long run.\n\nI'm wondering whether the effort would not be made redundant by other \non-line effort such as wikipedia, wiktionary, stackoverflow, standards, \nwhatever.\n\nWhen explaining something, the teacher I am usually provides some level of \nexample. This may or may not be appropriate there.\n\nISTM that there should be pointers to relevant sections in the \ndocumentation, for instance \"Analytics\" provided definition suggests\npointing to windowing functions.\n\nThere is significant redundancy involved, because a lot of term would be \ndefined in other sections anyway.\n\nThere should be cross references, eg \"Column\" definition talks about \nAttribute, Table & View, which should be linked to.\n\nI'd consider making SQL keywords uppercase.\n\nDeveloping that is a significant undertaking. Do we have the available \nenergy?\n\nPatch generates a warning on \"git apply\".\n\n sh> git apply ...\n ... terms-and-definitions.patch:159: tab in indent. 
[...]\n warning: 1 line adds whitespace errors.\n\n\"Record\" def as nested <para> for some unclear reason.\n\nBasically the redacted definitions look pretty clear and well written to \nthe non-native English speaker I am.\n\nOn Sun, 13 Oct 2019, Corey Huinker wrote:\n\n> Date: Sun, 13 Oct 2019 16:52:05 -0400\n> From: Corey Huinker <corey.huinker@gmail.com>\n> To: pgsql-hackers@postgresql.org\n> Subject: Add A Glossary\n> \n> Attached is a v1 patch to add a Glossary to the appendix of our current\n> documentation.\n>\n> I believe that our documentation needs a glossary for a few reasons:\n>\n> 1. It's hard to ask for help if you don't know the proper terminology of\n> the problem you're having.\n>\n> 2. Readers who are new to databases may not understand a few of the terms\n> that are used casually both in the documentation and in forums. This helps\n> to make our documentation a bit more useful as a teaching tool.\n>\n> 3. Readers whose primary language is not English may struggle to find the\n> correct search terms, and this glossary may help them grasp that a given\n> term has a usage in databases that is different from common English usage.\n>\n> 3b. If we are not able to find the resources to translate all of the\n> documentation into a given language, translating the glossary page would be\n> a good first step.\n>\n> 4. The glossary would be web-searchable, and draw viewers to the official\n> documentation.\n>\n> 5. adding link anchors to each term would make them cite-able, useful in\n> forum conversations.\n>\n>\n> A few notes about this patch:\n>\n> 1. It's obviously incomplete. There are more terms, a lot more, to add.\n>\n> 2. The individual definitions supplied are off-the-cuff, and should be\n> thoroughly reviewed.\n>\n> 3. 
The definitions as a whole should be reviewed by an actual tech writer\n> (one was initially involved but had to step back due to prior commitments),\n> and the definitions should be normalized in terms of voice, tone, audience,\n> etc.\n>\n> 4. My understanding of DocBook is not strong. The glossary vs glosslist tag\n> issue is a bit confusing to me, and I'm not sure if the glossary tag is\n> even appropriate for our needs.\n>\n> 5. I've made no effort at making each term an anchor, nor have I done any\n> CSS styling at all.\n>\n> 6. I'm not quite sure how to handle terms that have different definitions\n> in different contexts. Should that be two glossdefs following one\n> glossterm, or two separate def/term pairs?\n>\n> Please review and share your thoughts.\n>\n\n-- \nFabien Coelho - CRI, MINES ParisTech", "msg_date": "Sat, 9 Nov 2019 09:19:16 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Sat, Nov 09, 2019 at 09:19:16AM +0100, Fabien COELHO wrote:\n> On principle, I'm fine with having a glossary, i.e. word definitions, which\n> are expected to be rather stable in the long run.\n> \n> I'm wondering whether the effort would not be made redundant by other\n> on-line effort such as wikipedia, wiktionary, stackoverflow, standards,\n> whatever.\n> \n> When explaining something, the teacher I am usually provides some level of\n> example. This may or may not be appropriate there.\n\nThat's exactly a good reason for being a reviewer here. You have\nquite some insight here.\n\n> I'd consider making SQL keywords uppercase.\n> \n> Developing that is a significant undertaking. 
Do we have the available\n> energy?\n\nIt seems like this could be a good idea, still the patch has been\nwaiting on his author for more than two weeks now, so I have marked it\nas returned with feedback.\n--\nMichael", "msg_date": "Mon, 25 Nov 2019 16:55:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": ">\n> It seems like this could be a good idea, still the patch has been\n> waiting on his author for more than two weeks now, so I have marked it\n> as returned with feedback.\n>\n\nIn light of feedback, I enlisted the help of an actual technical writer\n(Roger Harkavy, CCed) and we eventually found the time to take a second\npass at this.\n\nAttached is a revised patch.", "msg_date": "Tue, 11 Feb 2020 23:22:43 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "This latest version is an attempt at merging the work of Jürgen Purtz into\nwhat I had posted earlier. There was relatively little overlap in the terms\nwe had chosen to define.\n\nEach glossary definition now has a reference id (good idea Jürgen), the\nform of which is \"glossary-term\". So we can link to the glossary from\noutside if we so choose.\n\nI encourage everyone to read the definitions, and suggest fixes to any\ninaccuracies or awkward phrasings. 
Mostly, though, I'm seeking feedback on\nthe structure itself, and hoping to get that committed.\n\n\nOn Tue, Feb 11, 2020 at 11:22 PM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n> It seems like this could be a good idea, still the patch has been\n>> waiting on his author for more than two weeks now, so I have marked it\n>> as returned with feedback.\n>>\n>\n> In light of feedback, I enlisted the help of an actual technical writer\n> (Roger Harkavy, CCed) and we eventually found the time to take a second\n> pass at this.\n>\n> Attached is a revised patch.\n>\n>", "msg_date": "Tue, 10 Mar 2020 11:37:41 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "Hello, everyone, I'm Roger, the tech writer who worked with Corey on the\nglossary file. I just thought I'd announce that I am also on the list, and\nI'm looking forward to any questions or comments people may have. Thanks!\n\nOn Tue, Mar 10, 2020 at 11:37 AM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n> This latest version is an attempt at merging the work of Jürgen Purtz into\n> what I had posted earlier. There was relatively little overlap in the terms\n> we had chosen to define.\n>\n> Each glossary definition now has a reference id (good idea Jürgen), the\n> form of which is \"glossary-term\". So we can link to the glossary from\n> outside if we so choose.\n>\n> I encourage everyone to read the definitions, and suggest fixes to any\n> inaccuracies or awkward phrasings. 
Mostly, though, I'm seeking feedback on\n> the structure itself, and hoping to get that committed.\n>\n>\n> On Tue, Feb 11, 2020 at 11:22 PM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n>\n>> It seems like this could be a good idea, still the patch has been\n>>> waiting on his author for more than two weeks now, so I have marked it\n>>> as returned with feedback.\n>>>\n>>\n>> In light of feedback, I enlisted the help of an actual technical writer\n>> (Roger Harkavy, CCed) and we eventually found the time to take a second\n>> pass at this.\n>>\n>> Attached is a revised patch.\n>>\n>>\n>\n\n", "msg_date": "Wed, 11 Mar 2020 09:40:45 -0400", "msg_from": "Roger Harkavy <rogerharkavy@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "I made changes on top of 0001-add-glossary-page.patch which was supplied \nby C. Huinker. This affects not only terms proposed by me but also his \noriginal terms. If my changes are not obvious, please let me know and I \nwill describe my motivation.\n\nPlease note especially lines marked with question marks.\n\nIt will be helpful for diff-ing to restrict the length of lines in the \nSGML files to 71 characters (as usual).\n\nJ. Purtz", "msg_date": "Wed, 11 Mar 2020 22:50:28 +0600", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Wed, Mar 11, 2020 at 12:50 PM Jürgen Purtz <juergen@purtz.de> wrote:\n\n> I made changes on top of 0001-add-glossary-page.patch which was supplied\n> by C. Huinker. This affects not only terms proposed by me but also his\n> original terms. If my changes are not obvious, please let me know and I\n> will describe my motivation.\n>\n> Please note especially lines marked with question marks.\n>\n> It will be helpful for diff-ing to restrict the length of lines in the\n> SGML files to 71 characters (as usual).\n>\n> J. Purtz\n>\n\nA new person replied off-list with some suggested edits, all of which\nseemed pretty good. 
I'll incorporate them myself if that person chooses to\nremain off-list.\n\n", "msg_date": "Wed, 11 Mar 2020 12:56:55 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": ">\n> It will be helpful for diff-ing to restrict the length of lines in the\n> SGML files to 71 characters (as usual).\n\n\nI did it that way for the following reasons\n1. It aids grep-ability\n2. The committers seem to be moving towards that for SQL strings, mostly\nfor reason #1\n3. I recall that the code is put through a linter as one of the final steps\nbefore release, I assumed that the SGML gets the same.\n4. Even if #3 is false, its easy enough to do manually for me to do for\nthis one file once we've settled on the text of the definitions.\n\nAs for the changes, most things seem fine, I specifically like:\n* Checkpoint - looks good\n* yes, PGDATA should have been a literal\n* Partition - the a/b split works for me\n* Unlogged - it reads better\n\nI'm not so sure on / responses to your ???s:\n* The statement that names of schema objects are unique isn't *strictly* true,\njust *mostly* true. Take the case of a unique constraints. 
The constraint\nhas a name and the unique index has the same name, to the point where\nadding a unique constraint using an existing index renames that index to\nconform to the constraint name.\n* Serializable \"other way around\" question - It's both. Outside the\ntransaction you can't see changes made inside another transaction (though\nyou can be blocked by them), and inside serializable you can't see any\nchanges made since you started. Does that make sense? Were you asking a\ndifferent question?\n* Transaction - yes, all those things could be \"visible\" or they could be\n\"side effects\". It may be best to leave the over-simplified definition in\nplace, and add a \"For more information see <<linref to\ntutorial-transactions>>\n\n", "msg_date": "Wed, 11 Mar 2020 13:23:57 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": ">\n>\n> * Transaction - yes, all those things could be \"visible\" or they could be\n> \"side effects\". It may be best to leave the over-simplified definition in\n> place, and add a \"For more information see <<linref to\n> tutorial-transactions>>\n>\n\ntransaction-iso would be a better linkref in this case\n\n", "msg_date": "Wed, 11 Mar 2020 13:36:08 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "> The statement that names of schema objects are unique isn't \n> /strictly/ true, just /mostly/ true. Take the case of a unique \n> constraints. 
Constraints seems to be an exception:\n\n * Their name belongs to a schema, but are not necessarily unique\n within this context:\n https://www.postgresql.org/docs/current/catalog-pg-constraint.html.\n * There is a UNIQUE index within the system catalog pg_constraints:\n \"pg_constraint_conrelid_contypid_conname_index\" UNIQUE, btree\n (conrelid, contypid, conname), which expresses that names are unique\n within the context of a table/constraint-type. Nevertheless tests\n have shown that some stronger restrictions exists across\n table-boarders (,which seems to be implemented in CREATE statements\n - or as a consequence of your mentioned correlation between\n constraint and index ?).\n\nI hope that there are no more such exception to the global rule 'object \nnames in a schema are unique': \nhttps://www.postgresql.org/docs/current/sql-createschema.html\n\nThis facts must be mentioned as a short note in glossary and in more \ndetail in the later patch about the architecture.\n\nJ. Purtz", "msg_date": "Fri, 13 Mar 2020 10:18:40 +0600", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Fri, Mar 13, 2020 at 12:18 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\n>\n> The statement that names of schema objects are unique isn't *strictly* true,\n> just *mostly* true. Take the case of a unique constraints.\n>\n> Concerning CONSTRAINTS you are right. Constraints seems to be an exception:\n>\n> - Their name belongs to a schema, but are not necessarily unique\n> within this context:\n> https://www.postgresql.org/docs/current/catalog-pg-constraint.html.\n> - There is a UNIQUE index within the system catalog pg_constraints: \"pg_constraint_conrelid_contypid_conname_index\"\n> UNIQUE, btree (conrelid, contypid, conname), which expresses that\n> names are unique within the context of a table/constraint-type.\n> Nevertheless tests have shown that some stronger restrictions exists across\n> table-boarders (,which seems to be implemented in CREATE statements - or as\n> a consequence of your mentioned correlation between constraint and index ?).\n>\n> I hope that there are no more such exception to the global rule 'object\n> names in a schema are unique':\n> https://www.postgresql.org/docs/current/sql-createschema.html\n>\n> This facts must be mentioned as a short note in glossary and in more\n> detail in the later patch about the 
architecture.\n>\n>\n> I did what I could to address the near uniqueness, as well as incorporate\nyour earlier edits into this new, squashed patch attached.", "msg_date": "Wed, 18 Mar 2020 22:34:25 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "I gave this a look. I first reformatted it so I could read it; that's\n0001. Second I changed all the long <link> items into <xref>s, which\nare shorter and don't have to repeat the title of the referred to page.\n(Of course, this changes the link to be in the same style as every other\nlink in our documentation; some people don't like it. But it's our\nstyle.)\n\nThere are some mistakes. \"Tupple\" is most glaring one -- not just the\ntypo but also the fact that it goes to sql-revoke. A few definitions\nwe'll want to modify. Nothing too big. In general I like this work and\nI think we should have it in pg13.\n\nPlease bikeshed the definition of your favorite term, and suggest what\nother terms to add. No pointing out of mere typos yet, please.\n\nI think we should have the terms Consistency, Isolation, Durability.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 19 Mar 2020 21:11:22 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Thu, Mar 19, 2020 at 8:11 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> I gave this a look. I first reformatted it so I could read it; that's\n> 0001. Second I changed all the long <link> items into <xref>s, which\n>\n\nThanks! I didn't know about xrefs, that is a big improvement.\n\n\n> are shorter and don't have to repeat the title of the referred to page.\n> (Of course, this changes the link to be in the same style as every other\n> link in our documentation; some people don't like it. 
But it's our\n> style.)\n>\n> There are some mistakes. \"Tupple\" is most glaring one -- not just the\n> typo but also the fact that it goes to sql-revoke. A few definitions\n> we'll want to modify. Nothing too big. In general I like this work and\n> I think we should have it in pg13.\n>\n> Please bikeshed the definition of your favorite term, and suggest what\n> other terms to add. No pointing out of mere typos yet, please.\n>\n\nJürgen mentioned off-list that the man page doesn't build. I was going\nto look into that, but if anyone has more familiarity with that, I'm listening.\n\n\n> I think we should have the terms Consistency, Isolation, Durability.\n>\n\n+1\n\n", "msg_date": "Thu, 19 Mar 2020 21:41:54 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": ">\n> Jürgen mentioned off-list that the man page doesn't build. I was going to\n>> look into that, but if anyone has more familiarity with that, I'm listening.\n>>\n>\nLooking at this some more, I'm not sure anything needs to be done for man\npages. man1 is for executables, man3 seems to be dblink and SPI, and man7\nis all SQL commands. This isn't any of those. 
The only possible thing left\nwould be how to render the text of a <glossterm>foo</glossterm>, and so I\nlooked to see what we do in man pages for acronyms, and the answer appears\nto be \"nothing\":\n\npostgres/doc/src$ git grep acronym | grep -v '\\/acronym'\nsgml/filelist.sgml:<!ENTITY acronyms   SYSTEM \"acronyms.sgml\">\nsgml/postgres.sgml:  &acronyms;\nsgml/release.sgml:[A-Z][A-Z_ ]+[A-Z_]             <command>, <literal>, <envar>, <acronym>\nsgml/stylesheet.css:acronym		{ font-style: inherit; }\n\nfilelist.sgml, postgres.sgml, and stylesheet.css already have the\ncorresponding change, and the release.sgml is just an incidental mention of\nacronym.\n\nOf course I could be missing something.", "msg_date": "Fri, 20 Mar 2020 11:48:24 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Mar-20, Corey Huinker wrote:\n\n> > Jürgen mentioned off-list that the man page doesn't build. I was going to\n> > look into that, but if anyone has more familiarity with that, I'm listening.\n\n> Looking at this some more, I'm not sure anything needs to be done for man\n> pages.\n\nYeah, I don't think he was saying that we needed to do anything to\nproduce a glossary man page; rather that the \"make man\" command failed.\nI tried it here, and indeed it failed. But on further investigation,\nafter a \"make maintainer-clean\" it no longer failed. I'm not sure what\nto make of it, but it seems that this patch needn't concern itself with\nthat.\n\nI gave a read through the first few actual definitions. It's a much\nslower work than I thought! Attached you'll find the first few edits\nthat I propose.\n\nLooking at the definition of \"Aggregate\" it seemed weird to have it\nstand as a verb infinitive. I looked up other glossaries, found this\none\nhttps://www.gartner.com/en/information-technology/glossary?glossaryletter=T\nand realized that when they do verbs, they put the present participle\n(-ing) form. 
So I changed it to \"Aggregating\", and split out the\n\"Aggregate function\" into its own term.\n\nIn Atomic, there seemed to be excessive use of <glossterm> in the\ndefinitions. Style guides seem to suggest to do that only the first\ntime you use a term in a definition. I removed some markup.\n\nI'm not sure about some terms such as \"analytic\" and \"backend server\".\nI put them in XML comments for now.\n\nThe other changes should be self-explanatory.\n\nIt's hard to review work from a professional tech writer. I'm under the\nconstant impression that I'm ruining somebody's perfect end product,\nmaking a fool of myself.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 20 Mar 2020 14:51:44 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "Alvaro, I know that you are joking, but I want to impress on everyone:\nplease don't feel like anyone here is breaking anything when it comes to\nmodifying the content and structure of this glossary.\n\nI do have technical writing experience, but everyone else here is a subject\nmatter expert when it comes to the world of databases and how this one in\nparticular functions.\n\nOn Fri, Mar 20, 2020 at 1:51 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Mar-20, Corey Huinker wrote:\n>\n> > > Jürgen mentioned off-list that the man page doesn't build. I was going\n> to\n> > > look into that, but if anyone has more familiarity with that, I'm\n> listening.\n>\n> > Looking at this some more, I'm not sure anything needs to be done for man\n> > pages.\n>\n> Yeah, I don't think he was saying that we needed to do anything to\n> produce a glossary man page; rather that the \"make man\" command failed.\n> I tried it here, and indeed it failed. But on further investigation,\n> after a \"make maintainer-clean\" it no longer failed. 
I'm not sure what\n> to make of it, but it seems that this patch needn't concern itself with\n> that.\n>\n> I gave a read through the first few actual definitions. It's a much\n> slower work than I thought! Attached you'll find the first few edits\n> that I propose.\n>\n> Looking at the definition of \"Aggregate\" it seemed weird to have it\n> stand as a verb infinitive. I looked up other glossaries, found this\n> one\n> https://www.gartner.com/en/information-technology/glossary?glossaryletter=T\n> and realized that when they do verbs, they put the present participle\n> (-ing) form. So I changed it to \"Aggregating\", and split out the\n> \"Aggregate function\" into its own term.\n>\n> In Atomic, there seemed to be excessive use of <glossterm> in the\n> definitions. Style guides seem to suggest to do that only the first\n> time you use a term in a definition. I removed some markup.\n>\n> I'm not sure about some terms such as \"analytic\" and \"backend server\".\n> I put them in XML comments for now.\n>\n> The other changes should be self-explanatory.\n>\n> It's hard to review work from a professional tech writer. I'm under the\n> constant impression that I'm ruining somebody's perfect end product,\n> making a fool of myself.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n", "msg_date": "Fri, 20 Mar 2020 14:08:14 -0400", "msg_from": "Roger Harkavy <rogerharkavy@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": ">\n> It's hard to review work from a professional tech writer. I'm under the\n> constant impression that I'm ruining somebody's perfect end product,\n> making a fool of myself.\n\n\nIf it makes you feel better, it's a mix of definitions I wrote that Roger\nproofed and restructured, ones that Jürgen had written for a separate\neffort which then got a Roger-pass, and then some edits of my own and some\nby Jürgen which I merged without consulting Roger.\n\n", "msg_date": "Fri, 20 Mar 2020 14:16:06 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Thu, Mar 19, 2020 at 09:11:22PM -0300, Alvaro Herrera wrote:\n> + <glossterm>Aggregate</glossterm>\n> + <glossdef>\n> + <para>\n> + To combine a collection of data values into a single value, whose\n> + value may not be of the same type as the original values.\n> + <glossterm>Aggregate</glossterm> <glossterm>Functions</glossterm>\n> + combine multiple <glossterm>Rows</glossterm> that share a common set\n> + of values into one <glossterm>Row</glossterm>, which means that the\n> + 
only data visible in the values in common, and the aggregates of the\n\nIS the values in common ?\n(or, \"is the shared values\")\n\n> + <glossterm>Analytic</glossterm>\n> + <glossdef>\n> + <para>\n> + A <glossterm>Function</glossterm> whose computed value can reference\n> + values found in nearby <glossterm>Rows</glossterm> of the same\n> + <glossterm>Result Set</glossterm>.\n\n> + <glossterm>Archiver</glossterm>\n\nCan you change that to archiver process ?\n\n> + <glossterm>Atomic</glossterm>\n..\n> + <para>\n> + In reference to an operation: An event that cannot be completed in\n> + part: it must either entirely succeed or entirely fail. A series of\n\nCan you say: \"an action which is not allowed to partially succeed and then fail,\n...\"\n\n> + <glossterm>Autovacuum</glossterm>\n\nSay autovacuum process ?\n\n> + <glossdef>\n> + <para>\n> + Processes that remove outdated <acronym>MVCC</acronym>\n\nI would say \"A set of processes that remove...\"\n\n> + <glossterm>Records</glossterm> of the <glossterm>Heap</glossterm> and\n\nI'm not sure, can you say \"tuples\" ?\n\n> + <glossterm>Backend Process</glossterm>\n> + <glossdef>\n> + <para>\n> + Processes of an <glossterm>Instance</glossterm> which act on behalf of\n\nSay DATABASE instance\n\n> + <glossterm>Backend Server</glossterm>\n> + <glossdef>\n> + <para>\n> + See <glossterm>Instance</glossterm>.\nsame\n\n> + <glossterm>Background Worker</glossterm>\n> + <glossdef>\n> + <para>\n> + Individual processes within an <glossterm>Instance</glossterm>, which\nsame\n\n> + run system- or user-supplied code. 
Typical use cases are processes\n> + which handle parts of an <acronym>SQL</acronym> query to take\n> + advantage of parallel execution on servers with multiple\n> + <acronym>CPUs</acronym>.\n\nI would say \"A typical use case is\"\n\n> + <glossterm>Background Writer</glossterm>\n\nAdd \"process\" ?\n\n> + <glossdef>\n> + <para>\n> + Writes continuously dirty pages from <glossterm>Shared\n\nSay \"Continuously writes\"\n\n> + Memory</glossterm> to the file system. It starts periodically, but\n\nHm, maybe \"wakes up periodically\"\n\n> + <glossterm>Cast</glossterm>\n> + <glossdef>\n> + <para>\n> + A conversion of a <glossterm>Datum</glossterm> from its current data\n> + type to another data type.\n\nmaybe just say\nA conversion of a <glossterm>Datum</glossterm> to another data type.\n\n> + <glossterm>Catalog</glossterm>\n> + <glossdef>\n> + <para>\n> + The <acronym>SQL</acronym> standard uses this standalone term to\n> + indicate what is called a <glossterm>Database</glossterm> in\n> + <productname>PostgreSQL</productname>'s terminology.\n\nMaybe remove \"standalone\" ?\n\n> + <glossterm>Checkpointer</glossterm>\n\nProcess\n\n> + A process that writes dirty pages and <glossterm>WAL\n> + Records</glossterm> to the file system and creates a special\n\nDoes the checkpointer actually write WAL ?\n\n> + checkpoint record. 
This process is initiated when predefined\n> + conditions are met, such as a specified amount of time has passed, or\n> + a certain volume of records have been collected.\n\ncollected or written?\n\nI would say:\n> + A checkpoint is usually initiated by\n> + a specified amount of time having passed, or\n> + a certain volume of records having been written.\n\n> + <glossterm>Checkpoint</glossterm>\n> + <glossdef>\n> + <para>\n> + A <link linkend=\"sql-checkpoint\"> Checkpoint</link> is a point in time\n\nExtra space\n\n> + <glossentry id=\"glossary-connection\">\n> + <glossterm>Connection</glossterm>\n> + <glossdef>\n> + <para>\n> + A <acronym>TCP/IP</acronym> or socket line for inter-process\n\nI don't know if I've ever heard the phrase \"socket line\"\nI guess you mean a unix socket.\n\n> + <glossterm>Constraint</glossterm>\n> + <glossdef>\n> + <para>\n> + A concept of restricting the values of data allowed within a\n> + <glossterm>Table</glossterm>.\n\nJust say: \"A restriction on the values...\"?\n\n> + <glossterm>Data Area</glossterm>\n\nRemove this ? I've never heard this phrase before.\n\n> + <glossdef>\n> + <para>\n> + The base directory on the filesystem of a\n> + <glossterm>Server</glossterm> that contains all data files and\n> + subdirectories associated with a <glossterm>Cluster</glossterm> with\n> + the exception of tablespaces. 
The environment variable\n\nShould add an entry for \"tablespace\".\n\n> + <glossterm>Datum</glossterm>\n> + <glossdef>\n> + <para>\n> + The internal representation of a <acronym>SQL</acronym> data type.\n\nI'm not sure if we should use \"a SQL\" or \"an SQL\", but not both.\n\n> + <glossterm>Delete</glossterm>\n> + <glossdef>\n> + <para>\n> + A <acronym>SQL</acronym> command whose purpose is to remove\n\njust say \"which removes\"\n\n> + <glossentry id=\"glossary-file-segment\">\n> + <glossterm>File Segment</glossterm>\n> + <glossdef>\n> + <para>\n> + If a heap or index file grows in size over 1 GB, it will be split\n\n1GB is the default \"segment size\", which you should define.\n\n> + <glossentry id=\"glossary-foreign-data-wrapper\">\n> + <glossterm>Foreign Data Wrapper</glossterm>\n> + <glossdef>\n> + <para>\n> + A means of representing data that is not contained in the local\n> + <glossterm>Database</glossterm> as if were in local\n> + <glossterm>Table</glossterm>(s).\n\nI'd say:\n\n+ A means of representing data as a <glossterm>Table</glossterm>(s) even though\n+ it is not contained in the local <glossterm>Database</glossterm> \n\n\n> + <glossentry id=\"glossary-foreign-key\">\n> + <glossterm>Foreign Key</glossterm>\n> + <glossdef>\n> + <para>\n> + A type of <glossterm>Constraint</glossterm> defined on one or more\n> + <glossterm>Column</glossterm>s in a <glossterm>Table</glossterm> which\n> + requires the value in those <glossterm>Column</glossterm>s to uniquely\n> + identify a <glossterm>Row</glossterm> in the specified\n> + <glossterm>Table</glossterm>.\n\nAn FK doesn't require the values in its table to be unique, right ?\nI'd say something like: \"..which enforces that the values in those Columns are\nalso present in an(other) table.\"\nReference Referential Integrity?\n\n> + <glossterm>Function</glossterm>\n> + <glossdef>\n> + <para>\n> + Any pre-defined transformation of data. 
Many\n> + <glossterm>Functions</glossterm> are already defined within\n> + <productname>PostgreSQL</productname> itself, but can also be\n> + user-defined.\n\nI would remove \"pre-\", since you mentioned that it can be user-defined.\n\n> + <glossterm>Global SQL Object</glossterm>\n> + <glossdef>\n> + <para>\n> + <!-- FIXME -->\n> + Not all <glossterm>SQL Objects</glossterm> belong to a certain\n> + <glossterm>Schema</glossterm>. Some belong to the complete\n> + <glossterm>Database</glossterm>, or even to the complete\n> + <glossterm>Cluster</glossterm>. These are referred to as\n> + <glossterm>Global SQL Objects</glossterm>. Collations and Extensions\n> + such as <glossterm>Foreign Data Wrappers</glossterm> reside at the\n> + <glossterm>Database</glossterm> level; <glossterm>Database</glossterm>\n> + names, <glossterm>Roles</glossterm>,\n> + <glossterm>Tablespaces</glossterm>, <glossterm>Replication</glossterm>\n> + origins, and subscriptions for logical\n> + <glossterm>Replication</glossterm> at the\n> + <glossterm>Cluster</glossterm> level.\n\nI think \"complete\" is the wrong word.\nI would say:\n\"An object which is not specific to a given database, but instead shared across\nthe entire Cluster\".\n\n> + <glossentry id=\"glossary-grant\">\n> + <glossterm>Grant</glossterm>\n> + <glossdef>\n> + <para>\n> + A <acronym>SQL</acronym> command that is used to enable\n\nI'd say \"allow\"\n\n> + <glossentry id=\"glossary-heap\">\n> + <glossterm>Heap</glossterm>\n> + <glossdef>\n> + <para>\n> + Contains the original values of <glossterm>Row</glossterm> attributes\n\nI'm not sure what \"original\" means here ?\n\n> + 
The <glossterm>Heap</glossterm> is realized within\n> + <glossterm>Database</glossterm> files and mirrored in\n> + <glossterm>Shared Memory</glossterm>.\n\nI wouldn't say mirrored, and probably just remove at least the part after \"and\".\n\n> + <glossentry id=\"glossary-host\">\n> + <glossterm>Host</glossterm>\n> + <glossdef>\n> + <para>\n> + See <glossterm>Server</glossterm>.\n\nOr client. Or proxy at some layer or other intermediate thing. Maybe just\nremove this.\n\n> + <glossentry id=\"glossary-index\">\n> + <glossterm>Index</glossterm>\n> + <glossdef>\n> + <para>\n> + A <glossterm>Relation</glossterm> that contains data derived from a\n> + <glossterm>Table</glossterm> (or <glossterm>Relation</glossterm> such\n> + as a <glossterm>Materialized View</glossterm>). It's internal\n\nIts\n\n> + structure supports very fast retrieval of and access to the original\n> + data.\n\n> + <glossterm>Instance</glossterm>\n> + <glossdef>\n> + <para>\n...\n> + <para>\n> + Many <glossterm>Instances</glossterm> can run on the same server as\n> + long as they use different <acronym>IP</acronym> ports and manage\n\nI would say \"as long as their TCP/IP ports or sockets don't conflict, and manage...\"\n\n> + <glossterm>Join</glossterm>\n> + <glossdef>\n> + <para>\n> + A technique used with <command>SELECT</command> statements for\n> + correlating data in one or more <glossterm>Relations</glossterm>.\n\nI would refer to this as a SQL keyword allowing to combine data from multiple\nrelations.\n\n> + <glossterm>Lock</glossterm>\n> + <glossdef>\n> + <para>\n> + A mechanism for one process temporarily preventing data from being\n> + manipulated by any other process.\n\nI'd say:\n\n+ A mechanism by which a process protects simultaneous access to a resource\n+ by other processes.\n\n(I said \"protects\" since shared locks don't prevent all access, and it's easier\nthan explaining \"unsafe access\").\n\n\n> + <glossentry id=\"glossary-log-file\">\n> + <glossterm>Log File</glossterm>\n> + 
<glossdef>\n> + <para>\n> + <link linkend=\"logfile-maintenance\">LOG files</link> contain readable\n> + text lines about serious and non-serious events, e.g.: use of wrong\n> + password, long-running queries, ... .\n\nSerious and non-serious?\n\n> + <glossterm>Log Writer</glossterm>\n\nprocess\n\n> + <glossdef>\n> + <para>\n> + If activated and parameterized, the\n\nI don't know what parameterized means here\n\n> + <link linkend=\"runtime-config-logging\">Log Writer</link> process\n> + writes information about database events into the current\n> + <glossterm>Log file</glossterm>. When reaching certain time- or\n> + volume-dependent criterias, he <!-- FIXME \"he\"? --> creates a new\n\nI think criteria is the plural..\n\n> + <glossterm>Log Record</glossterm>\n\nCan we remove this ?\nCouple releases ago, \"pg_xlog\" was renamed to pg_wal.\nI'd prefer to avoid defining something called \"Log Record\" about WAL that's\nright next to text logs.\n\n> + <glossterm>Logged</glossterm>\n> + <glossdef>\n> + <para>\n> + A <glossterm>Table</glossterm> is considered\n> + <glossterm>Logged</glossterm> if changes to it are sent to the\n> + <glossterm>WAL Log</glossterm>. By default, all regular\n> + <glossterm>Tables</glossterm> are <glossterm>Logged</glossterm>. A\n> + <glossterm>Table</glossterm> can be speficied as unlogged either at\n> + creation time or via the <command>ALTER TABLE</command> command. 
The\n> + primary use of unlogged <glossterm>Tables</glossterm> is for storing\n> + transient work data that must be shared across processes, but with a\n> + final result stored in logged <glossterm>Tables</glossterm>.\n> + <glossterm>Temporary Tables</glossterm> are always unlogged.\n> + </para>\n> + </glossdef>\n> + </glossentry>\n\nMaybe it'd be better to define \"unlogged\", since 1) logged is the default; and\n2) it's right next to text logs.\n\n> + <glossterm>Master</glossterm>\n> + <glossdef>\n> + <para>\n> + When two or more <glossterm>Databases</glossterm> are linked via\n> + <glossterm>Replication</glossterm>, the <glossterm>Server</glossterm>\n> + that is considered the authoritative source of information is called\n> + the <glossterm>Master</glossterm>.\n\nI think it'd actually be the <<instance>> which is authoritative, in case they're\nrunning on the same <<Server>>\n\n> + <glossentry id=\"glossary-materialized\">\n> + <glossterm>Materialized</glossterm>\n> + <glossdef>\n> + <para>\n> + The act of storing information rather than just the means of accessing\n\nremove \"means of\" ?\n\n> + the information. This term is used in <glossterm>Materialized\n> + Views</glossterm> meaning that the data derived from the\n> + <glossterm>View</glossterm> is actually stored on disk separate from\n\nseparately\n\n> + 
When the term\n> + <glossterm>Materialized</glossterm> is used in speaking about\n> + mulit-step queries, it means that the data of a given step is stored\n\nmulti\n\n> + (in memory, but that storage may spill over onto disk).\n> + </para>\n> + </glossdef>\n> + </glossentry>\n> +\n> + <glossentry id=\"glossary-materialized-view\">\n> + <glossterm>Materialized View</glossterm>\n> + <glossdef>\n> + <para>\n> + A <glossterm>Relation</glossterm> that is defined in the same way that\n> + a <glossterm>View</glossterm> is, but it stores data in the same way\n\nchange \"it stores\" to stores\n\n> + <glossentry id=\"glossary-partition\">\n> + <glossterm>Partition</glossterm>\n> + <glossdef>\n> + <para>\n> + <!-- FIXME should this use the style used in \"atomic\"? -->\n> + a) A <glossterm>Table</glossterm> that can be queried independently by\n> + its own name, but can also be queried via another\n\njust say \"on its own\" or \"directly\"\n\n> + <glossterm>Table</glossterm>, a partitionend\n\npartitioned\nalso, put it in parens, like \"via another table (a partitioned table)...\"\n\n> + <glossterm>Table</glossterm>, which is a collection of\n\nSay \"set\" here since you later talk about \"subsets\" and sets.\n\n> + <glossentry id=\"glossary-primary-key\">\n> + <glossterm>Primary Key</glossterm>\n> + <glossdef>\n> + <para>\n> + A special case of <glossterm>Unique Index</glossterm> defined on a\n> + <glossterm>Table</glossterm> or other <glossterm>Relation</glossterm>\n> + that also guarantees that all of the <glossterm>Attributes</glossterm>\n> + within the <glossterm>Primary Key</glossterm> do not have\n> + <glossterm>Null</glossterm> values. 
As the name implies, there can be\n> + only one <glossterm>Primary Key</glossterm> per\n> + <glossterm>Table</glossterm>, though it is possible to have multiple\n> + <glossterm>Unique Indexes</glossterm> that also have no\n> + <glossterm>Null</glossterm>-capable <glossterm>Attributes</glossterm>.\n\nI would say \"multiple >>unique indexes<< on >>attributes<< defined as not\nnullable.\n\n> + <glossterm>Procedure</glossterm>\n> + <glossdef>\n> + <para>\n> + A defined set of instructions for manipulating data within a\n> + <glossterm>Database</glossterm>. <glossterm>Procedure</glossterm> can\n\n\"procedures\" or \"a procedure\"\n\n> + <glossterm>Record</glossterm>\n> + <glossdef>\n> + <para>\n> + See <link linkend=\"sql-revoke\">Tupple</link>.\n\nTupple is back. And again below.\n\n> + A single <glossterm>Row</glossterm> of a <glossterm>Table</glossterm>\n> + or other Relation.\n\nI think it's commonly used to mean \"an instance of a row\" (in an MVCC sense),\nbut maybe that's too much detail for here.\n\n> + <glossterm>Referential Integrity</glossterm>\n> + <glossdef>\n> + <para>\n> + The means of restricting data in one <glossterm>Relation</glossterm>\n\nA means\n\n> + <glossentry id=\"glossary-relation\">\n> + <glossterm>Relation</glossterm>\n> + <glossdef>\n> + <para>\n> + The generic term for all objects in a <glossterm>Database</glossterm>\n\n\"A generic term for any object in a >>database<< that has a name and...\"\n\n> + <glossentry id=\"glossary-result-set\">\n> + <glossterm>Result Set</glossterm>\n> + <glossdef>\n> + <para>\n> + A data structure transmitted from a <glossterm>Server</glossterm> to\n> + client program upon the completion of a <acronym>SQL</acronym>\n> + command, usually a <command>SELECT</command> but it can be an\n> + <command>INSERT</command>, <command>UPDATE</command>, or\n> + <command>DELETE</command> command if the <literal>RETURNING</literal>\n> + clause is specified.\n\nI'd remove everything in that sentence after \"usually\".\n\n> + 
<glossterm>Revoke</glossterm>\n> + <glossdef>\n> + <para>\n> + A command to reduce access to a named set of\n\ns/reduce/prevent/ ?\n\n> + <glossterm>Row</glossterm>\n> + <glossdef>\n> + <para>\n> + See <link linkend=\"sql-revoke\">Tupple</link>.\n\ntuple\n\n> + <glossentry id=\"glossary-savepoint\">\n> + <glossterm>Savepoint</glossterm>\n> + <glossdef>\n> + <para>\n> + A special mark (such as a timestamp) inside a\n> + <glossterm>Transaction</glossterm>. Data modifications after this\n> + point in time may be rolled back to the time of the savepoint.\n\nI don't think \"timestamp\" is a useful or accurate analogy for this.\n\n> + <glossterm>Schema</glossterm>\n> + <glossdef>\n> + <para>\n> + A <link linkend=\"ddl-schemas\">schema</link> is a namespace for\n> + <glossterm>SQL objects</glossterm>, which all reside in the same\n> + <glossterm>database</glossterm>. Each <glossterm>SQL\n> + object</glossterm> must reside in exactly one\n> + <glossterm>Schema</glossterm>.\n> + </para>\n\n> + <para>\n> + In general, the names of <glossterm>SQL objects</glossterm> in the\n> + schema are unique - even across different types of objects. The lone\n> + exception is the case of <glossterm>Unique</glossterm>\n> + <glossterm>Constraint</glossterm>s, in which case there\n> + <emphasis>must</emphasis> be a <glossterm>Unique Index</glossterm>\n> + with the same name and <glossterm>Schema</glossterm> as the\n> + <glossterm>Constraint</glossterm>. There is no restriction on having\n> + a name used in multiple <glossterm>Schema</glossterm>s.\n\nI think there's some confusion. 
Constraints are not objects, right ?\n\nBut, constraints do have an exception (not just unique constraints, though):\nthe constraint is only unique on its table, not in its database/schema.\n\n \"pg_constraint_conrelid_contypid_conname_index\" UNIQUE, btree (conrelid, contypid, conname) CLUSTER\n\n> + <glossterm>Select</glossterm>\n> + <glossdef>\n> + <para>\n> + The command used to query a <glossterm>Database</glossterm>. Normally,\n> + <command>SELECT</command>s are not expected to modify the\n> + <glossterm>Database</glossterm> in any way, but it is possible that\n> + <glossterm>Functions</glossterm> invoked within the query could have\n> + side-effects that do modify data. </para>\n\nI think there should be references to the sql-* pages for this and others.\n\n> + <glossentry id=\"glossary-serializable\">\n> + <glossterm>Serializable</glossterm>\n> + <glossdef>\n> + <para>\n> + Transactions defined as <literal>SERIALIZABLE</literal> are unable to\n> + see changes made within other transactions. In effect, for the\n> + initializing session the entire <glossterm>Database</glossterm>\n> + appears to be frozen duration such a\n> + <glossterm>Transaction</glossterm>.\n\nDo you mean \"for the duration of the >>Transaction<<\"\n\n> + <glossentry id=\"glossary-session\">\n> + <glossterm>Session</glossterm>\n> + <glossdef>\n> + <para>\n> + A <glossterm>Connection</glossterm> to the <glossterm>Database</glossterm>.\n> + </para>\n> + <para>\n> + A description of the commands that were issued in the life cycle of a\n> + particular <glossterm>Connection</glossterm> to the\n> + <glossterm>Database</glossterm>.\n\nI'm not sure what this <para> means.\n\n> + <glossterm>Sequence</glossterm>\n> + <glossdef>\n> + <para>\n> + <!-- sounds excessively complicated a definition -->\n> + An <glossterm>Database</glossterm> object which represents the\n\nA not An\n\n> + mathematical concept of a numerical integral sequence. 
It can be\n> + thought of as a <glossterm>Table</glossterm> with exactly one\n> + <glossterm>Row</glossterm> and one <glossterm>Column</glossterm>. The\n> + value stored is known as the current value. A\n> + <glossterm>Sequence</glossterm> has a defined direction (almost always\n> + increasing) and an interval step (usually 1). Whenever the\n> + <literal>NEXTVAL</literal> pseudo-column of a\n> + <glossterm>Sequence</glossterm> is accessed, the current value is moved\n> + in the defined direction by the defined interval step, and that value\n\nsay \"given interval step\"\n\n> + <glossterm>Shared Memory</glossterm>\n> + <glossdef>\n> + <para>\n> + <acronym>RAM</acronym> which is used by the processes common to an\n> + <glossterm>Instance</glossterm>. It mirrors parts of\n> + <glossterm>Database</glossterm> files, provides an area for\n> + <glossterm>WAL Records</glossterm>,\n\nDo we use shared_buffers for WAL ?\n\n> + <glossentry id=\"glossary-table\">\n> + <glossterm>Table</glossterm>\n> + <glossdef>\n> + <para>\n> + A collection of <glossterm>Tuples</glossterm> (also known as\n> + <glossterm>Rows</glossterm> or <glossterm>Records</glossterm>) having\n> + a common data structure (the same number of\n> + <glossterm>Attributes</glossterm>s, in the same order, having the same\n\nAttributes has two esses.\n\n> + name and type per position). A <glossterm>Table</glossterm> is the\n\nI don't think you need to say here that the columns of a table all have the\nsame type and order.\n\n> + <glossterm>Temporary Tables</glossterm>\n> + <glossdef>\n> + <para>\n> + <glossterm>Table</glossterm>s that exist either for the lifetime of a\n> + <glossterm>Session</glossterm> or a\n> + <glossterm>Transaction</glossterm>, as defined at creation time. 
The\n\nI would say \"as specified at the time of its creation\".\n\n> + <glossterm>Transaction</glossterm>\n> + <glossdef>\n> + <para>\n> + A combination of one or more commands that must act as a single\n\nRemove \"one or more\"\n\n> + <glossterm>Trigger</glossterm>\n> + <glossdef>\n> + <para>\n> + A <glossterm>Function</glossterm> which can be defined to execute\n> + whenever a certain operation (<command>INSERT</command>,\n> + <command>UPDATE</command>, or <command>DELTE</command>) is applied to\n> + that <glossterm>Relation</glossterm>. A <glossterm>Trigger</glossterm>\n\ns/that/a/\n\n> + <glossentry id=\"glossary-unique\">\n> + <glossterm>Unique</glossterm>\n> + <glossdef>\n> + <para>\n> + The condition of having no matching values in the same\n\ns/matching/duplicate/\n\n> + <glossterm>Relation</glossterm>. Most often used in the concept of\n\ns/concept/context/\n\n> + <glossentry id=\"glossary-update\">\n> + <glossterm>Update</glossterm>\n> + <glossdef>\n> + <para>\n> + A command used to modify <glossterm>Rows</glossterm> that already\n\nor 'may already'\n\n> + <glossterm>WAL File</glossterm>\n...\n> + <para>\n> + The sequence of <glossterm>WAL Records</glossterm> in combination with\n> + the sequence of <glossterm>WAL Files</glossterm> represents the\n\nRemove \"in combination with the sequence of >WAL Files<\"\n\n> + <glossentry id=\"glossary-wal-log\">\n> + <glossterm>WAL Log</glossterm>\n\nCan you just say WAL or \"write-ahead log\".\n\n> + <glossdef>\n> + <para>\n> + A <glossterm>WAL Record</glossterm> contains either new or changed\n> + <glossterm>Heap</glossterm> or <glossterm>Index</glossterm> data or\n> + information about a <command>COMMIT</command>,\n> + <command>ROLLBACK</command>, <command>SAVEPOINT</command>, or\n> + <glossterm>Checkpointer</glossterm> operation. 
WAL records use a\n> + non-printabe binary format.\n\nnon-printable\nOr just remove it.\nOr just remove the sentence.\n\n> + <glossterm>WAL Writer</glossterm>\n\nprocess\n\n> + <glossentry id=\"glossary-window-function\">\n> + <glossterm>Window Function</glossterm>\n> + <glossdef>\n> + <para>\n> + A type of <glossterm>Function</glossterm> similar to an\n> + <glossterm>Aggregate</glossterm> in that can derive its value from a\n\nin that IT\n\n> + set of <glossterm>Rows</glossterm> in a <glossterm>Result\n> + Set</glossterm>, but still retaining the original source data.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 20 Mar 2020 14:58:41 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "man pages: Sorry, if I confused someone with my poor English. I just \nwant to express in my 'offline' mail that we don't have to worry about \nman page generation. The patch doesn't affect files in the /ref \nsubdirectory from where man pages are created.\n\nreview process: Yes, it will be time-consumptive and it may be a hard \njob because of a) the patch has multiple authors with divergent writing \nstyles and b) the terms affect different fundamental issues: SQL basics \nand PG basics. Concerning PG basics in the past we used a wide range of \nsimilar terms with different meanings as well as different terms for the \nsame matter - within our documentation as well as in secondary \npublications. The terms \"backend server\" / \"instance\" are such an \nexample and there shall be a clear decision in favor of one of the two. 
\nPresumably we will see more discussions about which one is \nthe preferred term (remember the discussion concerning the terms \nmaster/slave, primary/secondary some weeks ago).\n\nongoing: Intermediate questions for clarification are welcome.\n\n\nKind regards, Jürgen\n\n\n\n\n", "msg_date": "Fri, 20 Mar 2020 23:32:17 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "\nOn 20.03.20 20:58, Justin Pryzby wrote:\n> On Thu, Mar 19, 2020 at 09:11:22PM -0300, Alvaro Herrera wrote:\n>> + <glossterm>Aggregate</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + To combine a collection of data values into a single value, whose\n>> + value may not be of the same type as the original values.\n>> + <glossterm>Aggregate</glossterm> <glossterm>Functions</glossterm>\n>> + combine multiple <glossterm>Rows</glossterm> that share a common set\n>> + of values into one <glossterm>Row</glossterm>, which means that the\n>> + only data visible in the values in common, and the aggregates of the\n> IS the values in common ?\n> (or, \"is the shared values\")\n>\n>> + <glossterm>Analytic</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <glossterm>Function</glossterm> whose computed value can reference\n>> + values found in nearby <glossterm>Rows</glossterm> of the same\n>> + <glossterm>Result Set</glossterm>.\n>> + <glossterm>Archiver</glossterm>\n> Can you change that to archiver process ?\n\n\nI prefer the short term without the addition of 'process' - concerning \n'Archiver' as well as the other cases. But I'm not a native English \nspeaker.\n\n\n>> + <glossterm>Atomic</glossterm>\n> ..\n>> + <para>\n>> + In reference to an operation: An event that cannot be completed in\n>> + part: it must either entirely succeed or entirely fail. 
A series of\n> Can you say: \"an action which is not allowed to partially succeed and then fail,\n> ...\"\n>\n>> + <glossterm>Autovacuum</glossterm>\n> Say autovacuum process ?\n>\n>> + <glossdef>\n>> + <para>\n>> + Processes that remove outdated <acronym>MVCC</acronym>\n> I would say \"A set of processes that remove...\"\n>\n>> + <glossterm>Records</glossterm> of the <glossterm>Heap</glossterm> and\n> I'm not sure, can you say \"tuples\" ?\n\n\nThis concerns the upcoming MVCC terms. We need a linguistic distinction \nbetween the different versions of 'records' or 'tuples'. In my \nunderstanding the term 'tuple' is nearer to a logical construct \n(relational algebra) and a 'record' some concrete implementation on \ndisk. Therefore I prefer 'record' in this context.\n\n\n>> + <glossterm>Backend Process</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + Processes of an <glossterm>Instance</glossterm> which act on behalf of\n> Say DATABASE instance\n\n\n-1: The term 'database' is used in an inflationary way. We shall restrict it to a \nfew cases.\n\n\n>> + <glossterm>Backend Server</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + See <glossterm>Instance</glossterm>.\n> same\n>\n>> + <glossterm>Background Worker</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + Individual processes within an <glossterm>Instance</glossterm>, which\n> same\n>\n>> + run system- or user-supplied code. Typical use cases are processes\n>> + which handle parts of an <acronym>SQL</acronym> query to take\n>> + advantage of parallel execution on servers with multiple\n>> + <acronym>CPUs</acronym>.\n> I would say \"A typical use case is\"\n+1\n>> + <glossterm>Background Writer</glossterm>\n> Add \"process\" ?\n>\n>> + <glossdef>\n>> + <para>\n>> + Writes continuously dirty pages from <glossterm>Shared\n> Say \"Continuously writes\"\n+1\n>> + Memory</glossterm> to the file system. 
It starts periodically, but\n> Hm, maybe \"wakes up periodically\"\n+1\n>> + <glossterm>Cast</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A conversion of a <glossterm>Datum</glossterm> from its current data\n>> + type to another data type.\n> maybe just say\n> A conversion of a <glossterm>Datum</glossterm> another data type.\n>\n>> + <glossterm>Catalog</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + The <acronym>SQL</acronym> standard uses this standalone term to\n>> + indicate what is called a <glossterm>Database</glossterm> in\n>> + <productname>PostgreSQL</productname>'s terminology.\n> Maybe remove \"standalone\" ?\n>\n>> + <glossterm>Checkpointer</glossterm>\n> Process\n>\n>> + A process that writes dirty pages and <glossterm>WAL\n>> + Records</glossterm> to the file system and creates a special\n> Does the chckpointer actually write WAL ?\n\n\nYES, not only WAL Writer.\n\n\n>> + checkpoint record. This process is initiated when predefined\n>> + conditions are met, such as a specified amount of time has passed, or\n>> + a certain volume of records have been collected.\n> collected or written?\n>\n> I would say:\n>> + A checkpoint is usually initiated by\n>> + a specified amount of time having passed, or\n>> + a certain volume of records having been written.\n\n\n+-0\n\n\n>> + <glossterm>Checkpoint</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <link linkend=\"sql-checkpoint\"> Checkpoint</link> is a point in time\n> Extra space\n>\n>> + <glossentry id=\"glossary-connection\">\n>> + <glossterm>Connection</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <acronym>TCP/IP</acronym> or socket line for inter-process\n> I don't know if I've ever heard the phase \"socket line\"\n> I guess you mean a unix socket.\n\n\n+1\n\n\n>> + <glossterm>Constraint</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A concept of restricting the values of data allowed within a\n>> + <glossterm>Table</glossterm>.\n> Just say: \"A restriction on the values...\"?\n>\n>> + 
<glossterm>Data Area</glossterm>\n> Remove this ? I've never heard this phrase before.\n\n\ngrep on *.sgml delivers 4 occurrences.\n\n\n>> + <glossdef>\n>> + <para>\n>> + The base directory on the filesystem of a\n>> + <glossterm>Server</glossterm> that contains all data files and\n>> + subdirectories associated with a <glossterm>Cluster</glossterm> with\n>> + the exception of tablespaces. The environment variable\n> Should add an entry for \"tablespace\".\n\n\n+1\n\n\n>> + <glossterm>Datum</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + The internal representation of a <acronym>SQL</acronym> data type.\n> I'm not sure if should use \"a SQL\" or \"an SQL\", but not both.\n\n\ngrep | wc delivers 106 occurrences for \"an SQL\" and 63 for \"a SQL\". It \ndepends on how people pronounce the term SQL: \"an esquel\" or \"a sequel\".\n\n\n>> + <glossterm>Delete</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <acronym>SQL</acronym> command whose purpose is to remove\n> just say \"which removes\"\n\n\n+1\n\n\n>> + <glossentry id=\"glossary-file-segment\">\n>> + <glossterm>File Segment</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + If a heap or index file grows in size over 1 GB, it will be split\n> 1GB is the default \"segment size\", which you should define.\n\n\n???\n\n\n>> + <glossentry id=\"glossary-foreign-data-wrapper\">\n>> + <glossterm>Foreign Data Wrapper</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A means of representing data that is not contained in the local\n>> + <glossterm>Database</glossterm> as if were in local\n>> + <glossterm>Table</glossterm>(s).\n> I'd say:\n>\n> + A means of representing data as a <glossterm>Table</glossterm>(s) even though\n> + it is not contained in the local <glossterm>Database</glossterm>\n>\n>\n>> + <glossentry id=\"glossary-foreign-key\">\n>> + <glossterm>Foreign Key</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A type of <glossterm>Constraint</glossterm> defined on one or more\n>> + <glossterm>Column</glossterm>s in a 
<glossterm>Table</glossterm> which\n>> + requires the value in those <glossterm>Column</glossterm>s to uniquely\n>> + identify a <glossterm>Row</glossterm> in the specified\n>> + <glossterm>Table</glossterm>.\n> An FK doesn't require the values in its table to be unique, right ?\n> I'd say something like: \"..which enforces that the values in those Columns are\n> also present in an(other) table.\"\n> Reference Referential Integrity?\n>\n>> + <glossterm>Function</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + Any pre-defined transformation of data. Many\n>> + <glossterm>Functions</glossterm> are already defined within\n>> + <productname>PostgreSQL</productname> itself, but can also be\n>> + user-defined.\n> I would remove \"pre-\", since you mentioned that it can be user-defined.\n>\n>> + <glossterm>Global SQL Object</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + <!-- FIXME -->\n>> + Not all <glossterm>SQL Objects</glossterm> belong to a certain\n>> + <glossterm>Schema</glossterm>. Some belong to the complete\n>> + <glossterm>Database</glossterm>, or even to the complete\n>> + <glossterm>Cluster</glossterm>. These are referred to as\n>> + <glossterm>Global SQL Objects</glossterm>. Collations and Extensions\n>> + such as <glossterm>Foreign Data Wrappers</glossterm> reside at the\n>> + <glossterm>Database</glossterm> level; <glossterm>Database</glossterm>\n>> + names, <glossterm>Roles</glossterm>,\n>> + <glossterm>Tablespaces</glossterm>, <glossterm>Replication</glossterm>\n>> + origins, and subscriptions for logical\n>> + <glossterm>Replication</glossterm> at the\n>> + <glossterm>Cluster</glossterm> level.\n> I think \"complete\" is the wrong world.\n> I would say:\n> \"An object which is not specific to a given database, but instead shared across\n> the entire Cluster\".\n\n\nThis phrase seems to be too simple. We must differentiate between the \ndifferent levels: schema, database, cluster. 
Perhaps someone can find a \nbetter phrase.\n\n\n>> + <glossentry id=\"glossary-grant\">\n>> + <glossterm>Grant</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <acronym>SQL</acronym> command that is used to enable\n> I'd say \"allow\"\n>\n>> + <glossentry id=\"glossary-heap\">\n>> + <glossterm>Heap</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + Contains the original values of <glossterm>Row</glossterm> attributes\n> I'm not sure what \"original\" means here ?\n\n\nYes, this may be misleading. I want to express that values are stored \nin the heap (the 'original') and possibly repeated as a key in an index.\n\n\n>> + (i.e. the data). The <glossterm>Heap</glossterm> is realized within\n>> + <glossterm>Database</glossterm> files and mirrored in\n>> + <glossterm>Shared Memory</glossterm>.\n> I wouldn't say mirrored, and probably just remove at least the part after \"and\".\n\n\n+-0\n\n\n>> + <glossentry id=\"glossary-host\">\n>> + <glossterm>Host</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + See <glossterm>Server</glossterm>.\n> Or client. Or proxy at some layer or other intermediate thing. Maybe just\n> remove this.\n\n\nSometimes the term \"host\" is used in a different meaning. Therefore we \nshall have this glossary entry for clarification that it shall be used \nonly in the sense of a \"server\".\n\n\n>> + <glossentry id=\"glossary-index\">\n>> + <glossterm>Index</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <glossterm>Relation</glossterm> that contains data derived from a\n>> + <glossterm>Table</glossterm> (or <glossterm>Relation</glossterm> such\n>> + as a <glossterm>Materialized View</glossterm>). 
It's internal\n> Its\n>\n>> + structure supports very fast retrieval of and access to the original\n>> + data.\n>> + <glossterm>Instance</glossterm>\n>> + <glossdef>\n>> + <para>\n> ...\n>> + <para>\n>> + Many <glossterm>Instances</glossterm> can run on the same server as\n>> + long as they use different <acronym>IP</acronym> ports and manage\n> I would say \"as long as their TCP/IP ports or sockets don't conflict, and manage...\"\n\n\n+1\n\n\n>> + <glossterm>Join</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A technique used with <command>SELECT</command> statements for\n>> + correlating data in one or more <glossterm>Relations</glossterm>.\n> I would refer to this as a SQL keyword allowing to combine data from multiple\n> relations.\n>\n>> + <glossterm>Lock</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A mechanism for one process temporarily preventing data from being\n>> + manipulated by any other process.\n> I'd say:\n>\n> + A mechanism by which a process protects simultaneous access to a resource\n> + by other processes.\n>\n> (I said \"protects\" since shared locks don't prevent all access, and it's easier\n> than explaining \"unsafe access\").\n>\n>\n>> + <glossentry id=\"glossary-log-file\">\n>> + <glossterm>Log File</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + <link linkend=\"logfile-maintenance\">LOG files</link> contain readable\n>> + text lines about serious and non-serious events, e.g.: use of wrong\n>> + password, long-running queries, ... .\n> Serious and non-serious?\n\n\nok, can be removed: 'events' only.\n\n\n>> + <glossterm>Log Writer</glossterm>\n> process\n>\n>> + <glossdef>\n>> + <para>\n>> + If activated and parameterized, the\n> I don't know what parameterized means here\n\n\nok, unnecessary term. 
(There are parameters for the Log Writer process \nin the config file.)\n\n\n>> + <link linkend=\"runtime-config-logging\">Log Writer</link> process\n>> + writes information about database events into the current\n>> + <glossterm>Log file</glossterm>. When reaching certain time- or\n>> + volume-dependent criterias, he <!-- FIXME \"he\"? --> creates a new\n> I think criteria is the plural..\n\n\n+1\n\n\n>> + <glossterm>Log Record</glossterm>\n> Can we remove this ?\n> Couple releases ago, \"pg_xlog\" was renamed to pg_wal.\n> I'd prefer to avoid defining something called \"Log Record\" about WAL that's\n> right next to text logs.\n\n\n\"... that's right next to text logs.\"  This is the problem, which shall \nbe clarified. The rename of the directory does not affect the records \nwhich are written into the WAL files or are used for replication. The \nterm \"log record\" is used in the documentation as well as in error messages.\n\n\n>> + <glossterm>Logged</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <glossterm>Table</glossterm> is considered\n>> + <glossterm>Logged</glossterm> if changes to it are sent to the\n>> + <glossterm>WAL Log</glossterm>. By default, all regular\n>> + <glossterm>Tables</glossterm> are <glossterm>Logged</glossterm>. A\n>> + <glossterm>Table</glossterm> can be speficied as unlogged either at\n>> + creation time or via the <command>ALTER TABLE</command> command. 
The\n>> + primary use of unlogged <glossterm>Tables</glossterm> is for storing\n>> + transient work data that must be shared across processes, but with a\n>> + final result stored in logged <glossterm>Tables</glossterm>.\n>> + <glossterm>Temporary Tables</glossterm> are always unlogged.\n>> + </para>\n>> + </glossdef>\n>> + </glossentry>\n> Maybe it's be better to define \"unlogged\", since 1) logged is the default; and\n> 2) it's right next to text logs.\n>\n>> + <glossterm>Master</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + When two or more <glossterm>Databases</glossterm> are linked via\n>> + <glossterm>Replication</glossterm>, the <glossterm>Server</glossterm>\n>> + that is considered the authoritative source of information is called\n>> + the <glossterm>Master</glossterm>.\n> I think it'd actually be the <<instance>> which is authoritative, in case they're\n> running on the same <<Server>>\n\n\nIn this phase of the glossary we shall avoid the discussion about \nmaster/slave vs. primary/secondary. Some weeks ago we have seen many \ncontributions without a clear result. In one of the next phases of the \nglossary we shall discuss all terms concerning replication separately.\n\n\n>> + <glossentry id=\"glossary-materialized\">\n>> + <glossterm>Materialized</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + The act of storing information rather than just the means of accessing\n> remove \"means of\" ?\n>\n>> + the information. This term is used in <glossterm>Materialized\n>> + Views</glossterm> meaning that the data derived from the\n>> + <glossterm>View</glossterm> is actually stored on disk separate from\n> separately\n>\n>> + the sources of that data. 
When the term\n>> + <glossterm>Materialized</glossterm> is used in speaking about\n>> + mulit-step queries, it means that the data of a given step is stored\n> multi\n>\n>> + (in memory, but that storage may spill over onto disk).\n>> + </para>\n>> + </glossdef>\n>> + </glossentry>\n>> +\n>> + <glossentry id=\"glossary-materialized-view\">\n>> + <glossterm>Materialized View</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <glossterm>Relation</glossterm> that is defined in the same way that\n>> + a <glossterm>View</glossterm> is, but it stores data in the same way\n> change \"it stores\" to stores\n>\n>> + <glossentry id=\"glossary-partition\">\n>> + <glossterm>Partition</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + <!-- FIXME should this use the style used in \"atomic\"? -->\n>> + a) A <glossterm>Table</glossterm> that can be queried independently by\n>> + its own name, but can also be queried via another\n> just say \"on its own\" or \"directly\"\n>\n>> + <glossterm>Table</glossterm>, a partitionend\n> partitioned\n> also, put it in parens, like \"via another table (a partitioned table)...\"\n>\n>> + <glossterm>Table</glossterm>, which is a collection of\n> Say \"set\" here since you later talk about \"subsets\" and sets.\n>\n>> + <glossentry id=\"glossary-primary-key\">\n>> + <glossterm>Primary Key</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A special case of <glossterm>Unique Index</glossterm> defined on a\n>> + <glossterm>Table</glossterm> or other <glossterm>Relation</glossterm>\n>> + that also guarantees that all of the <glossterm>Attributes</glossterm>\n>> + within the <glossterm>Primary Key</glossterm> do not have\n>> + <glossterm>Null</glossterm> values. 
As the name implies, there can be\n>> + only one <glossterm>Primary Key</glossterm> per\n>> + <glossterm>Table</glossterm>, though it is possible to have multiple\n>> + <glossterm>Unique Indexes</glossterm> that also have no\n>> + <glossterm>Null</glossterm>-capable <glossterm>Attributes</glossterm>.\n> I would say \"multiple >>unique indexes<< on >>attributes<< defined as not\n> nullable.\n>\n>> + <glossterm>Procedure</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A defined set of instructions for manipulating data within a\n>> + <glossterm>Database</glossterm>. <glossterm>Procedure</glossterm> can\n> \"procedures\" or \"a procedure\"\n>\n>> + <glossterm>Record</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + See <link linkend=\"sql-revoke\">Tupple</link>.\n> Tupple is back. And again below.\n>\n>> + A single <glossterm>Row</glossterm> of a <glossterm>Table</glossterm>\n>> + or other Relation.\n> I think it's commonly used to mean \"an instance of a row\" (in an MVCC sense),\n> but maybe that's too much detail for here.\n>\n>> + <glossterm>Referential Integrity</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + The means of restricting data in one <glossterm>Relation</glossterm>\n> A means\n>\n>> + <glossentry id=\"glossary-relation\">\n>> + <glossterm>Relation</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + The generic term for all objects in a <glossterm>Database</glossterm>\n> \"A generic term for any object in a >>database<< that has a name and...\"\n>\n>> + <glossentry id=\"glossary-result-set\">\n>> + <glossterm>Result Set</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A data structure transmitted from a <glossterm>Server</glossterm> to\n>> + client program upon the completion of a <acronym>SQL</acronym>\n>> + command, usually a <command>SELECT</command> but it can be an\n>> + <command>INSERT</command>, <command>UPDATE</command>, or\n>> + <command>DELETE</command> command if the <literal>RETURNING</literal>\n>> + clause is specified.\n> I'd remove everything 
in that sentence after \"usually\".\n>\n>> + <glossterm>Revoke</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A command to reduce access to a named set of\n> s/reduce/prevent/ ?\n>\n>> + <glossterm>Row</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + See <link linkend=\"sql-revoke\">Tupple</link>.\n> tuple\n>\n>> + <glossentry id=\"glossary-savepoint\">\n>> + <glossterm>Savepoint</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A special mark (such as a timestamp) inside a\n>> + <glossterm>Transaction</glossterm>. Data modifications after this\n>> + point in time may be rolled back to the time of the savepoint.\n> I don't think \"timestamp\" is a useful or accurate analogy for this.\n\n\n+1\n\n\n>> + <glossterm>Schema</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <link linkend=\"ddl-schemas\">schema</link> is a namespace for\n>> + <glossterm>SQL objects</glossterm>, which all reside in the same\n>> + <glossterm>database</glossterm>. Each <glossterm>SQL\n>> + object</glossterm> must reside in exactly one\n>> + <glossterm>Schema</glossterm>.\n>> + </para>\n>> + <para>\n>> + In general, the names of <glossterm>SQL objects</glossterm> in the\n>> + schema are unique - even across different types of objects. The lone\n>> + exception is the case of <glossterm>Unique</glossterm>\n>> + <glossterm>Constraint</glossterm>s, in which case there\n>> + <emphasis>must</emphasis> be a <glossterm>Unique Index</glossterm>\n>> + with the same name and <glossterm>Schema</glossterm> as the\n>> + <glossterm>Constraint</glossterm>. There is no restriction on having\n>> + a name used in multiple <glossterm>Schema</glossterm>s.\n> I think there's some confusion. Constraints are not objects, right ?\n>\n> But, constraints do have an exception (not just unique constraints, though):\n> the constraint is only unique on its table, not in its database/schema.\n>\n> \"pg_constraint_conrelid_contypid_conname_index\" UNIQUE, btree (conrelid, contypid, conname) CLUSTER\n\n\nYes, you are right. 
But give me some time for a better suggestion.\n\n\n>> + <glossterm>Select</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + The command used to query a <glossterm>Database</glossterm>. Normally,\n>> + <command>SELECT</command>s are not expected to modify the\n>> + <glossterm>Database</glossterm> in any way, but it is possible that\n>> + <glossterm>Functions</glossterm> invoked within the query could have\n>> + side-effects that do modify data. </para>\n> I think there should be references to the sql-* pages for this and others.\n>\n>> + <glossentry id=\"glossary-serializable\">\n>> + <glossterm>Serializable</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + Transactions defined as <literal>SERIALIZABLE</literal> are unable to\n>> + see changes made within other transactions. In effect, for the\n>> + initializing session the entire <glossterm>Database</glossterm>\n>> + appears to be frozen duration such a\n>> + <glossterm>Transaction</glossterm>.\n> Do you mean \"for the duration of the >>Transaction<<\"\n>\n>> + <glossentry id=\"glossary-session\">\n>> + <glossterm>Session</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <glossterm>Connection</glossterm> to the <glossterm>Database</glossterm>.\n>> + </para>\n>> + <para>\n>> + A description of the commands that were issued in the life cycle of a\n>> + particular <glossterm>Connection</glossterm> to the\n>> + <glossterm>Database</glossterm>.\n> I'm not sure what this <para> means.\n>\n>> + <glossterm>Sequence</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + <!-- sounds excessively complicated a definition -->\n>> + An <glossterm>Database</glossterm> object which represents the\n> A not An\n>\n>> + mathematical concept of a numerical integral sequence. It can be\n>> + thought of as a <glossterm>Table</glossterm> with exactly one\n>> + <glossterm>Row</glossterm> and one <glossterm>Column</glossterm>. The\n>> + value stored is known as the current value. 
A\n>> + <glossterm>Sequence</glossterm> has a defined direction (almost always\n>> + increasing) and an interval step (usually 1). Whenever the\n>> + <literal>NEXTVAL</literal> pseudo-column of a\n>> + <glossterm>Sequence</glossterm> is accessed, the current value is moved\n>> + in the defined direction by the defined interval step, and that value\n> say \"given interval step\"\n>\n>> + <glossterm>Shared Memory</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + <acronym>RAM</acronym> which is used by the processes common to an\n>> + <glossterm>Instance</glossterm>. It mirrors parts of\n>> + <glossterm>Database</glossterm> files, provides an area for\n>> + <glossterm>WAL Records</glossterm>,\n> Do we use shared_buffers for WAL ?\n\n\nYes, my information is that WAL records are part of the shared_buffers.\n\n\n>> + <glossentry id=\"glossary-table\">\n>> + <glossterm>Table</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A collection of <glossterm>Tuples</glossterm> (also known as\n>> + <glossterm>Rows</glossterm> or <glossterm>Records</glossterm>) having\n>> + a common data structure (the same number of\n>> + <glossterm>Attributes</glossterm>s, in the same order, having the same\n> Attributes has two esses.\n>\n>> + name and type per position). A <glossterm>Table</glossterm> is the\n> I don't think you need to say here that the columns of a table all have the\n> same type and order.\n\n\nIn my opinion this is essential information.\n\n\n>> + <glossterm>Temporary Tables</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + <glossterm>Table</glossterm>s that exist either for the lifetime of a\n>> + <glossterm>Session</glossterm> or a\n>> + <glossterm>Transaction</glossterm>, as defined at creation time. 
The\n> I would say \"as specified at the time of its creation\".\n>\n>> + <glossterm>Transaction</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A combination of one or more commands that must act as a single\n> Remove \"one or more\"\n>\n>> + <glossterm>Trigger</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A <glossterm>Function</glossterm> which can be defined to execute\n>> + whenever a certain operation (<command>INSERT</command>,\n>> + <command>UPDATE</command>, or <command>DELTE</command>) is applied to\n>> + that <glossterm>Relation</glossterm>. A <glossterm>Trigger</glossterm>\n> s/that/a/\n>\n>> + <glossentry id=\"glossary-unique\">\n>> + <glossterm>Unique</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + The condition of having no matching values in the same\n> s/matching/duplicate/\n>\n>> + <glossterm>Relation</glossterm>. Most often used in the concept of\n> s/concept/context/\n>\n>> + <glossentry id=\"glossary-update\">\n>> + <glossterm>Update</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A command used to modify <glossterm>Rows</glossterm> that already\n> or 'may already'\n>\n>> + <glossterm>WAL File</glossterm>\n> ...\n>> + <para>\n>> + The sequence of <glossterm>WAL Records</glossterm> in combination with\n>> + the sequence of <glossterm>WAL Files</glossterm> represents the\n> Remove \"in combination with the sequence of >WAL Files<\"\n>\n>> + <glossentry id=\"glossary-wal-log\">\n>> + <glossterm>WAL Log</glossterm>\n> Can you just say WAL or \"write-ahead log\".\n\n\nSometimes the term \"WAL log\" is used in the documentation. But the \npreferred term is \"WAL file\". 
This glossary entry does nothing but \npoint to the preferred term, which indicates that it shall be avoided \nin the future.\n\n\n>> + <glossdef>\n>> + <para>\n>> + A <glossterm>WAL Record</glossterm> contains either new or changed\n>> + <glossterm>Heap</glossterm> or <glossterm>Index</glossterm> data or\n>> + information about a <command>COMMIT</command>,\n>> + <command>ROLLBACK</command>, <command>SAVEPOINT</command>, or\n>> + <glossterm>Checkpointer</glossterm> operation. WAL records use a\n>> + non-printabe binary format.\n> non-printable\n+1\n> Or just remove it.\n> Or just remove the sentence.\n>\n>> + <glossterm>WAL Writer</glossterm>\n> process\n>\n>> + <glossentry id=\"glossary-window-function\">\n>> + <glossterm>Window Function</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + A type of <glossterm>Function</glossterm> similar to an\n>> + <glossterm>Aggregate</glossterm> in that can derive its value from a\n> in that IT\n>\n>> + set of <glossterm>Rows</glossterm> in a <glossterm>Result\n>> + Set</glossterm>, but still retaining the original source data.\n\n\nKind regards, Jürgen\n\n\n\n\n", "msg_date": "Fri, 20 Mar 2020 23:32:25 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Fri, Mar 20, 2020 at 11:32:25PM +0100, Jürgen Purtz wrote:\n> > > + <glossentry id=\"glossary-file-segment\">\n> > > + <glossterm>File Segment</glossterm>\n> > > + <glossdef>\n> > > + <para>\n> > > + If a heap or index file grows in size over 1 GB, it will be split\n> > 1GB is the default \"segment size\", which you should define.\n> \n> ???\n\n\"A <<Table>> or other >>Relation<< that is larger than a >Cluster's< segment size\nis stored in multiple physical files. 
This avoids file size limitations which\nvary across operating systems.\"\n\nhttps://www.postgresql.org/docs/devel/runtime-config-preset.html\n\nts=# SELECT name, setting, unit, category, short_desc FROM pg_settings WHERE name~'block_size|segment_size';\n name | setting | unit | category | short_desc \n------------------+----------+------+----------------+----------------------------------------------\n block_size | 8192 | | Preset Options | Shows the size of a disk block.\n segment_size | 131072 | 8kB | Preset Options | Shows the number of pages per disk file.\n wal_block_size | 8192 | | Preset Options | Shows the block size in the write ahead log.\n wal_segment_size | 16777216 | B | Preset Options | Shows the size of write ahead log segments.\n\n> > > + <glossentry id=\"glossary-heap\">\n> > > + <glossterm>Heap</glossterm>\n> > > + <glossdef>\n> > > + <para>\n> > > + Contains the original values of <glossterm>Row</glossterm> attributes\n> > I'm not sure what \"original\" means here ?\n> \n> Yes, this may be misleading. I want to express, that values are stored in\n> the heap (the 'original') and possibly repeated as a key in an index.\n\nMaybe \"this is the content of rows/attributes in >>Tables<< or other >>Relations<<\".\nor \"this is the data store for ...\"\n\n> > > + <glossentry id=\"glossary-host\">\n> > > + <glossterm>Host</glossterm>\n> > > + <glossdef>\n> > > + <para>\n> > > + See <glossterm>Server</glossterm>.\n> > Or client. Or proxy at some layer or other intermediate thing. Maybe just\n> > remove this.\n> \n> Sometimes the term \"host\" is used in a different meaning. 
Therefor we shall\n> have this glossary entry for clarification that it shall be used only in the\n> sense of a \"server\".\n\nI think that suggests just removing \"host\" and consistently saying \"server\".\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 20 Mar 2020 18:03:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Fri, Mar 20, 2020 at 6:32 PM Jürgen Purtz <juergen@purtz.de> wrote:\n\n> man pages: Sorry, if I confused someone with my poor English. I just\n> want to express in my 'offline' mail that we don't have to worry about\n> man page generation. The patch doesn't affect files in the /ref\n> subdirectory from where man pages are created.\n>\n\nIt wasn't your poor English - everyone else understood what you meant. I\nhad wondered if our docs went into man page format as well, so my research\nwas still time well spent.", "msg_date": "Fri, 20 Mar 2020 23:45:31 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 21.03.20 00:03, Justin Pryzby wrote:\n>>>> + <glossentry id=\"glossary-host\">\n>>>> + <glossterm>Host</glossterm>\n>>>> + <glossdef>\n>>>> + <para>\n>>>> + See <glossterm>Server</glossterm>.\n>>> Or client. Or proxy at some layer or other intermediate thing. Maybe just\n>>> remove this.\n>> Sometimes the term \"host\" is used in a different meaning. 
Therefor we shall\n>> have this glossary entry for clarification that it shall be used only in the\n>> sense of a \"server\".\n> I think that suggests just removing \"host\" and consistently saying \"server\".\n\n\"server\", \"host\", \"database server\": All three terms are used \nintensively in the documentation. When we define glossary terms, we \nshould also take into account the consequences for those parts. \n\"database server\" is the most diffuse. E.g.: In 'config.sgml' it is used \nin the sense of some hardware or VM \"... If you have a dedicated \ndatabase server with 1GB or more of RAM ...\" as well as in the sense of \nan instance \"... To start the database server on the command prompt \n...\". Additionally the term is completely misleading. In both cases we \ndo not mean something which is related to a database but something which \nis related to a cluster.\n\nIn the past, people accepted such blurs. My - minimal - intention is to \nraise awareness of such ambiguities, or - better - to clearly define the \nsituation in the glossary. But this is only a first step. The second \nstep shall be a rework of the documentation to use the preferred terms \ndefined in the glossary. Because there will be a time gap between the \ntwo steps, we may want to be a little chatty in the glossary and define \nambiguous terms as shown in the following example:\n\n---\n\nServer: The term \"Server\" denotes ....  .\n\nHost: An outdated term which will be replaced by \n<xref-to-the-glossary>Server</xref> over time.\n\nDatabase Server: An outdated term which sometimes denotes a \n<xref-to-the-glossary>Server</xref> and sometimes an \n<xref-to-the-glossary>Instance</xref>.\n\n---\n\nThis is a pattern for all terms which we currently describe with the \nphrase \"See ...\". 
Later, after reviewing the documentation by \neliminating the non-preferred terms, the glossary entries with \"An \noutdated term ...\" can be dropped.\n\nIn the last days we have seen many huge and small proposals. I think, it \nwill be helpful to summarize this work by waiting for a patch from \nAlvaro containing everything it deems useful from his point of view.\n\nKind regards, Jürgen\n\n\n\n\n", "msg_date": "Sat, 21 Mar 2020 15:08:30 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Sat, Mar 21, 2020 at 03:08:30PM +0100, Jürgen Purtz wrote:\n> On 21.03.20 00:03, Justin Pryzby wrote:\n> > > > > + <glossentry id=\"glossary-host\">\n> > > > > + <glossterm>Host</glossterm>\n> > > > > + <glossdef>\n> > > > > + <para>\n> > > > > + See <glossterm>Server</glossterm>.\n> > > > Or client. Or proxy at some layer or other intermediate thing. Maybe just\n> > > > remove this.\n> > > Sometimes the term \"host\" is used in a different meaning. Therefor we shall\n> > > have this glossary entry for clarification that it shall be used only in the\n> > > sense of a \"server\".\n> > I think that suggests just removing \"host\" and consistently saying \"server\".\n> \n> \"server\", \"host\", \"database server\": All three terms are used intensively in\n> the documentation. 
When we define glossary terms, we should also take into\n> account the consequences for those parts.\n\nThe documentation uses \"host\", but doesn't always mean \"server\".\n\n$ git grep -Fw host doc/src/\ndoc/src/sgml/backup.sgml: that you can perform this backup procedure from any remote host that has\n\npg_hba appears to be all about client \"hosts\".\nFATAL: no pg_hba.conf entry for host \"123.123.123.123\", user \"andym\", database \"testdb\"\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 21 Mar 2020 10:15:13 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-03-20 01:11, Alvaro Herrera wrote:\n> I gave this a look. I first reformatted it so I could read it; that's\n> 0001. Second I changed all the long <link> items into <xref>s, which\n> are shorter and don't have to repeat the title of the refered to page.\n> (Of course, this changes the link to be in the same style as every other\n> link in our documentation; some people don't like it. 
But it's our\n> style.)\n\nAFAICT, all the <link> elements in this patch should be changed to <xref>.\n\nIf there is something undesirable about the output style, we can change \nthat, but it's not this patch's job to make up its own rules.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 24 Mar 2020 19:26:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Fri, Mar 20, 2020 at 3:58 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > + A process that writes dirty pages and <glossterm>WAL\n> > + Records</glossterm> to the file system and creates a special\n>\n> Does the chckpointer actually write WAL ?\n\nYes.\n\n> An FK doesn't require the values in its table to be unique, right ?\n\nI believe it does require that the values are unique.\n\n> I think there's some confusion. Constraints are not objects, right ?\n\nI think constraints are definitely objects. They have names and you\ncan, for example, COMMENT on them.\n\n> Do we use shared_buffers for WAL ?\n\nNo.\n\n(I have not reviewed the patch; these are just a few comments on your comments.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 24 Mar 2020 14:46:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": ">\n>\n> > > + Records</glossterm> to the file system and creates a special\n> >\n> > Does the chckpointer actually write WAL ?\n>\n> Yes.\n>\n> > An FK doesn't require the values in its table to be unique, right ?\n>\n> I believe it does require that the values are unique.\n>\n> > I think there's some confusion. Constraints are not objects, right ?\n>\n> I think constraints are definitely objects. 
They have names and you\n> can, for example, COMMENT on them.\n>\n> > Do we use shared_buffers for WAL ?\n>\n> No.\n>\n> (I have not reviewed the patch; these are just a few comments on your\n> comments.)\n>\n>\nI'm going to be coalescing the feedback into an updated patch very soon\n(tonight/tomorrow), so please keep the feedback on the text/wording coming\nuntil then.\nIf anyone has a first attempt at all the ACID definitions, I'd love to see\nthose as well.", "msg_date": "Tue, 24 Mar 2020 15:27:21 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 24.03.20 19:46, Robert Haas wrote:\n>> Do we use shared_buffers for WAL ?\n> No.\n\nWhat's about the explanation in \nhttps://www.postgresql.org/docs/12/runtime-config-wal.html : \n\"wal_buffers (integer)    The amount of shared memory used for WAL data \nthat has not yet been written to disk. The default setting of -1 selects \na size equal to 1/32nd (about 3%) of shared_buffers, ... \" ? My \nunderstanding was, that the parameter wal_buffers grabs some of the \nexisting shared_buffers for its own purpose. 
Is this a \nmisinterpretation? Are shared_buffers and wal_buffers two different \nshared memory areas?\n\nKind regards, Jürgen", "msg_date": "Tue, 24 Mar 2020 20:40:20 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Tue, Mar 24, 2020 at 3:40 PM Jürgen Purtz <juergen@purtz.de> wrote:\n> On 24.03.20 19:46, Robert Haas wrote:\n> Do we use shared_buffers for WAL ?\n>\n> No.\n>\n> What's about the explanation in https://www.postgresql.org/docs/12/runtime-config-wal.html : \"wal_buffers (integer) The amount of shared memory used for WAL data that has not yet been written to disk. The default setting of -1 selects a size equal to 1/32nd (about 3%) of shared_buffers, ... \" ? My understanding was, that the parameter wal_buffers grabs some of the existing shared_buffers for its own purpose. Is this a misinterpretation? Are shared_buffers and wal_buffers two different shared memory areas?\n\nYes. The code adds up the shared memory requests from all of the\ndifferent subsystems and then allocates one giant chunk of shared\nmemory which is divided up between them. The overwhelming majority of\nthat memory goes into shared_buffers, but not all of it. 
You can use\nthe new pg_get_shmem_allocations() function to see how it's used. For\nexample, with shared_buffers=4GB:\n\nrhaas=# select name, pg_size_pretty(size) from\npg_get_shmem_allocations() order by size desc limit 10;\n         name         | pg_size_pretty\n----------------------+----------------\n Buffer Blocks        | 4096 MB\n Buffer Descriptors   | 32 MB\n <anonymous>          | 32 MB\n XLOG Ctl             | 16 MB\n Buffer IO Locks      | 16 MB\n Checkpointer Data    | 12 MB\n Checkpoint BufferIds | 10 MB\n clog                 | 2067 kB\n                      | 1876 kB\n subtrans             | 261 kB\n(10 rows)\n\nrhaas=# select count(*), pg_size_pretty(sum(size)) from\npg_get_shmem_allocations();\n count | pg_size_pretty\n-------+----------------\n    54 | 4219 MB\n(1 row)\n\nSo, in this configuration, the whole shared memory segment is\n4219MB, of which 4096MB is allocated to shared_buffers and the rest to\ndozens of smaller allocations, with 1876 kB left over that might get\nsnapped up later by an extension that wants some shared memory.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 24 Mar 2020 15:58:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Fri, Mar 20, 2020 at 11:32:25PM +0100, Jürgen Purtz wrote:\n> > > + <glossterm>Archiver</glossterm>\n> > Can you change that to archiver process ?\n> \n> I prefer the short term without the addition of 'process' - concerning\n> 'Archiver' as well as the other cases. 
But I'm not an native English\n> speaker.\n\nI didn't like it due to lack of context.\n\nWhat about \"wal archiver\" ?\n\nIt occurred to me when I read this.\nhttps://www.postgresql.org/message-id/20200327.163007.128069746774242774.horikyota.ntt%40gmail.com\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 27 Mar 2020 15:12:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 27.03.20 21:12, Justin Pryzby wrote:\n> On Fri, Mar 20, 2020 at 11:32:25PM +0100, Jürgen Purtz wrote:\n>>>> + <glossterm>Archiver</glossterm>\n>>> Can you change that to archiver process ?\n>> I prefer the short term without the addition of 'process' - concerning\n>> 'Archiver' as well as the other cases. But I'm not an native English\n>> speaker.\n> I didn't like it due to lack of context.\n>\n> What about \"wal archiver\" ?\n>\n> It occured to me when I read this.\n> https://www.postgresql.org/message-id/20200327.163007.128069746774242774.horikyota.ntt%40gmail.com\n>\n\"WAL archiver\" is ok for me. In the current documentation we have 2 \nplaces with \"WAL archiver\" and 4 with \"archiver\"-only \n(high-availability.sgml, monitoring.sgml).\n\n\"backend process\" is an exception to the other terms because the \nstandalone term \"backend\" is sensibly used in diverse situations.\n\nKind regards, Jürgen\n\n\n\n\n", "msg_date": "Sun, 29 Mar 2020 11:29:50 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Sun, Mar 29, 2020 at 5:29 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\n> On 27.03.20 21:12, Justin Pryzby wrote:\n> > On Fri, Mar 20, 2020 at 11:32:25PM +0100, Jürgen Purtz wrote:\n> >>>> + <glossterm>Archiver</glossterm>\n> >>> Can you change that to archiver process ?\n> >> I prefer the short term without the addition of 'process' - concerning\n> >> 'Archiver' as well as the other cases. 
But I'm not an native English\n> >> speaker.\n> > I didn't like it due to lack of context.\n> >\n> > What about \"wal archiver\" ?\n> >\n> > It occured to me when I read this.\n> >\n> https://www.postgresql.org/message-id/20200327.163007.128069746774242774.horikyota.ntt%40gmail.com\n> >\n> \"WAL archiver\" is ok for me. In the current documentation we have 2\n> places with \"WAL archiver\" and 4 with \"archiver\"-only\n> (high-availability.sgml, monitoring.sgml).\n>\n> \"backend process\" is an exception to the other terms because the\n> standalone term \"backend\" is sensibly used in diverse situations.\n>\n> Kind regards, Jürgen\n>\n\nI've taken Alvaro's fixes and done my best to incorporate the feedback\ninto a new patch, which Roger's (tech writer) reviewed yesterday.\n\nThe changes are too numerous to list, but the highlights are:\n\nNew definitions:\n* All four ACID terms\n* Vacuum (split off from Autovacuum)\n* Tablespace\n* WAL Archiver (replaces Archiver)\n\nChanges to existing terms:\n* Implemented most wording changes recommended by Justin\n* all remaining links were either made into xrefs or edited out of existence\n\n* de-tagged most second uses of a term within a definition\n\n\nDid not do\n* Addressed the \" Process\" suffix suggested by Justin. There isn't\nconsensus on these changes, and I'm neutral on the matter\n* change the Cast definition. I think it's important to express that a cast\nhas a FROM datatype as well as a TO\n* anything host/server related as I couldn't see a consensus reached\n\nOther thoughts:\n* Trivial definitions that are just see-other-definition are ok with me, as\nthe goal of this glossary is to aid in discovery of term meanings, so\nknowing that two terms are interchangeable is itself helpful\n\n\nIt is my hope that this revision represents the final _structural_ change\nto the glossary. 
New definitions and edits to existing definitions will, of\ncourse, go on forever.", "msg_date": "Mon, 30 Mar 2020 13:10:19 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 30.03.20 19:10, Corey Huinker wrote:\n>\n>\n> On Sun, Mar 29, 2020 at 5:29 AM Jürgen Purtz <juergen@purtz.de \n> <mailto:juergen@purtz.de>> wrote:\n>\n> On 27.03.20 21:12, Justin Pryzby wrote:\n> > On Fri, Mar 20, 2020 at 11:32:25PM +0100, Jürgen Purtz wrote:\n> >>>> + <glossterm>Archiver</glossterm>\n> >>> Can you change that to archiver process ?\n> >> I prefer the short term without the addition of 'process' -\n> concerning\n> >> 'Archiver' as well as the other cases. But I'm not an native\n> English\n> >> speaker.\n> > I didn't like it due to lack of context.\n> >\n> > What about \"wal archiver\" ?\n> >\n> > It occured to me when I read this.\n> >\n> https://www.postgresql.org/message-id/20200327.163007.128069746774242774.horikyota.ntt%40gmail.com\n> >\n> \"WAL archiver\" is ok for me. 
In the current documentation we have 2\n> places with \"WAL archiver\" and 4 with \"archiver\"-only\n> (high-availability.sgml, monitoring.sgml).\n>\n> \"backend process\" is an exception to the other terms because the\n> standalone term \"backend\" is sensibly used in diverse situations.\n>\n> Kind regards, Jürgen\n>\n>\n> I've taken Alvarao's fixes and done my best to incorporate the \n> feedback into a new patch, which Roger's (tech writer) reviewed yesterday.\n>\n> The changes are too numerous to list, but the highlights are:\n>\n> New definitions:\n> * All four ACID terms\n> * Vacuum (split off from Autovacuum)\n> * Tablespace\n> * WAL Archiver (replaces Archiver)\n>\n> Changes to existing terms:\n> * Implemented most wording changes recommended by Justin\n> * all remaining links were either made into xrefs or edited out of\n> existence\n>\n> * de-tagged most second uses of of a term within a definition\n>\n>\n> Did not do\n> * Addressed the \" Process\" suffix suggested by Justin. There isn't\n> consensus on these changes, and I'm neutral on the matter\n> * change the Cast definition. I think it's important to express\n> that a cast has a FROM datatype as well as a TO\n> * anything host/server related as I couldn't see a consensus reached\n>\n> Other thoughts:\n> * Trivial definitions that are just see-other-definition are ok\n> with me, as the goal of this glossary is to aid in discovery of\n> term meanings, so knowing that two terms are interchangable is\n> itself helpful\n>\n>\n> It is my hope that this revision represents the final _structural_ \n> change to the glossary. New definitions and edits to existing \n> definitions will, of course, go on forever.\n\nPlease find some minor suggestions in the attachment. 
They are based on \nCorey's last patch 0001-glossary-v4.patch.\n\nKind regards, Jürgen", "msg_date": "Tue, 31 Mar 2020 16:13:00 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Tue, Mar 31, 2020 at 04:13:00PM +0200, Jürgen Purtz wrote:\n> Please find some minor suggestions in the attachment. They are based on\n> Corey's last patch 0001-glossary-v4.patch.\n\n> @@ -220,7 +220,7 @@\n> Record</glossterm>s to the file system and creates a special\n> checkpoint record. This process is initiated when predefined\n> conditions are met, such as a specified amount of time has passed, or\n> - a certain volume of records have been collected.\n> + a certain volume of records has been collected.\n\nI think you're correct in that \"volume\" is singular. But I think \"collected\"\nis the wrong word. I suggested \"written\".\n\n> <para>\n> - One of the <acronym>ACID</acronym> properties. This means that concurrently running \n> + One of the <acronym>ACID</acronym> properties. This means that concurrently running\n\nThese could maybe say \"required\" or \"essential\" >ACID< properties\n\n> <para>\n> + In reference to a <glossterm>Table</glossterm>:\n> A <glossterm>Table</glossterm> that can be queried directly,\n\nMaybe: \"In reference to a >Relation<: A table which can be queried directly,\"\n\n> table in the collection.\n> </para>\n> <para>\n> - When referring to an <glossterm>Analytic</glossterm>\n> - <glossterm>Function</glossterm>: a partition is a definition\n> - that identifies which neighboring\n> + In reference to a <glossterm>Analytic Function</glossterm>:\ns/a/an/\n\n> @@ -1333,7 +1334,8 @@\n> <glossdef>\n> <para>\n> The condition of having no duplicate values in the same\n> - <glossterm>Relation</glossterm>. 
Often used in the concept of\n> + <glossterm>Column</glossterm> of a <glossterm>Relation</glossterm>.\n> + Often used in the concept of\n\ns/concept/context/, but I said that before, so maybe it was rejected.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 31 Mar 2020 12:58:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Mon, Mar 30, 2020 at 01:10:19PM -0400, Corey Huinker wrote:\n> + <glossentry id=\"glossary-aggregating\">\n> + <glossterm>Aggregating</glossterm>\n> + <glossdef>\n> + <para>\n> + The act of combining a collection of data (input) values into\n> + a single output value, which may not be of the same type as the\n> + input values.\n\nI think we maybe already tried to address this ; but could we define a noun\nform ? But not \"aggregate\" since it's the same word as the verb form. I think\nit would maybe be best to merge with \"aggregate function\", below.\n\n> + <glossentry id=\"glossary-consistency\">\n> + <glossterm>Consistency</glossterm>\n> + <glossdef>\n> + <para>\n> + One of the <acronym>ACID</acronym> properties. This means that the database\n> + is always in compliance with its own rules such as <glossterm>Table</glossterm>\n> + structure, <glossterm>Constraint</glossterm>s,\n\nI don't think the definition of \"compliance\" is good. 
The state of being\nconsistent means an absence of corruption more than an absence of data\nintegrity issues (which could be caused by corruption).\n\n> + <glossentry id=\"glossary-datum\">\n> + <glossterm>Datum</glossterm>\n> + <glossdef>\n> + <para>\n> + The internal representation of a <acronym>SQL</acronym> data type.\n\nCould you say \"..used by PostgreSQL\" ?\n\n> + <glossterm>File Segment</glossterm>\n> + <glossdef>\n> + <para>\n> + A physical file which stores data for a given\n> + <glossterm>Heap</glossterm> or <glossterm>Index</glossterm> object.\n> + <glossterm>File Segment</glossterm>s are limited in size by a\n> + configuration value and if that size is exceeded, it will be split\n> + into multiple physical files.\n\nSay \"if an object exceeds that size, then it will be stored across multiple\nphysical files\".\n\n> + which handles parts of an <acronym>SQL</acronym> query to take\n...\n> + A <acronym>SQL</acronym> command used to add new data into a\n\nI mentioned before, please be consistent: \"A SQL or An SQL\".\n\n> + </para>\n> + <para>\n> + Many <glossterm>Instance</glossterm>s can run on the same server as\n\nSay \"multiple\" not many.\n\n> + <glossentry id=\"glossary-join\">\n> + <glossterm>Join</glossterm>\n> + <glossdef>\n> + <para>\n> + A <acronym>SQL</acronym> keyword used in <command>SELECT</command> statements for\n> + combining data from multiple <glossterm>Relation</glossterm>s.\n\nCould you add a link to the docs ?\n\n> + <glossentry id=\"glossary-log-writer\">\n> + <glossterm>Log Writer</glossterm>\n> + <glossdef>\n> + <para>\n> + If activated and parameterized, the\n\nI still don't know what parameterized means here.\n\n> + <glossentry id=\"glossary-system-catalog\">\n> + <glossterm>System Catalog</glossterm>\n> + <glossdef>\n> + <para>\n> + A collection of <glossterm>Table</glossterm>s and\n> + <glossterm>View</glossterm>s which describe the structure of all\n> + <acronym>SQL</acronym> objects of the 
<glossterm>Database</glossterm>\n\nI would say \"... a PostgreSQL >Database<\"\n\n> + and the <glossterm>Global SQL Object</glossterm>s of the\n> + <glossterm>Cluster</glossterm>. The <glossterm>System\n> + Catalog</glossterm> resides in the schema\n> + <literal>pg_catalog</literal>. Main parts are mirrored as\n> + <glossterm>View</glossterm>s in the <glossterm>Schema</glossterm>\n> + <literal>information_schema</literal>.\n\nI wouldn't say \"mirror\": Some information is also exposed as >Views< in the\n>information_schema< >Schema<.\n\n> + <glossentry id=\"glossary-tablespace\">\n> + <glossterm>Tablespace</glossterm>\n> + <glossdef>\n> + <para>\n> + A named location on the server filesystem. All <glossterm>SQL Object</glossterm>s\n> + which require storage beyond their definition in the\n> + <glossterm>System Catalog</glossterm>\n> + must belong to a single tablespace.\n\nRemove \"single\" as it sounds like we only support one.\n\n> + <glossterm>Transaction</glossterm>\n> + <glossdef>\n> + <para>\n> + A combination of commands that must act as a single\n> + <glossterm>Atomic</glossterm> command: they all succeed or all fail\n> + as a single unit, and their effects are not visible to other\n> + <glossterm>Session</glossterm>s until\n> + the <glossterm>Transaction</glossterm> is complete.\n\ns/complete/committed/ ?\n\n\n> + <glossentry id=\"glossary-unique\">\n> + <glossterm>Unique</glossterm>\n> + <glossdef>\n> + <para>\n> + The condition of having no duplicate values in the same\n> + <glossterm>Relation</glossterm>. 
Often used in the concept of\n\ns/concept/context/\n\n> + <glossterm>Vacuum</glossterm>\n> + <glossdef>\n> + <para>\n> + The process of removing outdated <acronym>MVCC</acronym>\n\nMaybe say \"tuples which were deleted or obsoleted by an UPDATE\".\nBut maybe you're trying to use generic language.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 31 Mar 2020 13:07:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Sun, Oct 13, 2019 at 04:52:05PM -0400, Corey Huinker wrote:\n> 1. It's obviously incomplete. There are more terms, a lot more, to add.\n\nHow did you come up with the initial list of terms ?\n\nHere's some ideas; I'm *not* suggesting to include all of everything, but\nhopefully start with a coherent, self-contained list.\n\ngrep -roh '<firstterm>[^<]*' doc/src/ |sed 's/.*/\\L&/' |sort |uniq -c |sort -nr |less\n\nMaybe also:\nobject identifier\noperator classes\noperator family\nvisibility map\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 31 Mar 2020 13:09:29 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Tue, Mar 31, 2020 at 2:09 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sun, Oct 13, 2019 at 04:52:05PM -0400, Corey Huinker wrote:\n> > 1. It's obviously incomplete. There are more terms, a lot more, to add.\n>\n> How did you come up with the initial list of terms ?\n>\n\n1. I asked some newer database people to come up with a list of terms that\nthey used.\n2. I then added some more terms that seemed obvious given that first list.\n3. That combined list was long on general database concepts and theory, and\nshort on administration concepts\n4. Then Jürgen suggested that we integrate his working list of terms, very\nmuch focused on internals, so I did that.\n5. 
Everything after that was applying suggested edits and new terms.\n\n\n> Here's some ideas; I'm *not* suggesting to include all of everything, but\n> hopefully start with a coherent, self-contained list.\n>\n\nI don't think this list will ever be complete. It will always be a work in\nprogress. I'd prefer to get the general structure of a glossary committed\nin the short term, and we're free to follow up with edits that focus on the\nwording.\n\n\n>\n> grep -roh '<firstterm>[^<]*' doc/src/ |sed 's/.*/\\L&/' |sort |uniq -c\n> |sort -nr |less\n>\n> Maybe also:\n> object identifier\n> operator classes\n> operator family\n> visibility map\n>\n\nJust so I can prioritize my work, which of these things, along with your\nsuggestions in previous emails, would you say is a barrier to considering\nthis ready for a committer?", "msg_date": "Tue, 31 Mar 2020 15:26:02 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "\nOn 31.03.20 19:58, Justin Pryzby wrote:\n> On Tue, Mar 31, 2020 at 04:13:00PM +0200, Jürgen Purtz wrote:\n>> Please find some minor suggestions in the attachment. They are based on\n>> Corey's last patch 0001-glossary-v4.patch.\n>> @@ -220,7 +220,7 @@\n>> Record</glossterm>s to the file system and creates a special\n>> checkpoint record. This process is initiated when predefined\n>> conditions are met, such as a specified amount of time has passed, or\n>> - a certain volume of records have been collected.\n>> + a certain volume of records has been collected.\n> I think you're correct in that \"volume\" is singular. But I think \"collected\"\n> is the wrong world. I suggested \"written\".\n>\n
So:\n\n  \"a certain volume of <glossterm>WAL records<glossterm> has been \ncollected.\"\n\n\nEvery thing else is ok for me.\n\nKind regards, Jürgen\n\n\n", "msg_date": "Wed, 1 Apr 2020 09:34:41 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "\nOn 31.03.20 20:07, Justin Pryzby wrote:\n> On Mon, Mar 30, 2020 at 01:10:19PM -0400, Corey Huinker wrote:\n>> + <glossentry id=\"glossary-aggregating\">\n>> + <glossterm>Aggregating</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + The act of combining a collection of data (input) values into\n>> + a single output value, which may not be of the same type as the\n>> + input values.\n> I think we maybe already tried to address this ; but could we define a noun\n> form ? But not \"aggregate\" since it's the same word as the verb form. I think\n> it would maybe be best to merge with \"aggregate function\", below.\n\nYes, combine the two. Or remove \"aggregating\" at all.\n\n\n> + <glossentry id=\"glossary-log-writer\">\n>> + <glossterm>Log Writer</glossterm>\n>> + <glossdef>\n>> + <para>\n>> + If activated and parameterized, the\n> I still don't know what parameterized means here.\n\nRemove \"and parameterized\". The Log Writer always has (default) parameters.\n\n\nEvery thing else is ok for me.\n\nKind regards, Jürgen\n\n\n", "msg_date": "Wed, 1 Apr 2020 09:34:56 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Apr-01, J�rgen Purtz wrote:\n\n> \n> On 31.03.20 19:58, Justin Pryzby wrote:\n> > On Tue, Mar 31, 2020 at 04:13:00PM +0200, J�rgen Purtz wrote:\n> > > Please find some minor suggestions in the attachment. They are based on\n> > > Corey's last patch 0001-glossary-v4.patch.\n> > > @@ -220,7 +220,7 @@\n> > > Record</glossterm>s to the file system and creates a special\n> > > checkpoint record. 
This process is initiated when predefined\n> > > conditions are met, such as a specified amount of time has passed, or\n> > > - a certain volume of records have been collected.\n> > > + a certain volume of records has been collected.\n> > I think you're correct in that \"volume\" is singular. But I think \"collected\"\n> > is the wrong world. I suggested \"written\".\n> > \n> \"collected\" is not optimal. I suggest \"created\". Please avoid \"written\", the\n> WAL records will be written when the Checkpointer is running, not before.\n\nActually, you're mistaken; the checkpointer hardly writes any WAL\nrecords. In fact, it only writes *one* wal record, which is the\ncheckpoint record itself. All the other wal records are written either\nby the backends that produce it, or by the wal writer process. By the\ntime the checkpoint runs, the wal records are long expected to be written.\n\nAnyway I changed a lot of terms again, as well as changing the way the\nterms are marked up -- for two reasons:\n\n1. I didn't like the way the WAL-related entries were structured. I\ncreated a new entry called \"Write-Ahead Log\", which explains what WAL\nis; this replaces the term \"WAL Log\", which is redundant (since the L in\nWAL stands for \"log\" already). I kept the id as glossary-wal, though,\nbecause it's shorter and *shrug*. The definition uses the terms \"wal\nrecord\" and \"wal file\", which I also rewrote.\n\n2. I found out that \"see xyz\" and \"see also\" have bespoke markup in\nDocbook -- <glosssee> and <glossseealso>. I changed some glossentries\nto use those, removing some glossdefs and changing a couple of paras to\nglossseealsos. 
I also removed all \"id\" properties from glossentries\nthat are just <glosssee>, because I think it's a mistake to have\nreferences to entries that will make the reader look up a different\nterm; for me as a reader that's annoying, and I don't like to annoy\npeople.\n\n\nWhile at it, I again came across \"analytic\", which is a term we don't\nuse much, so I made it a glosssee for \"window function\"; and while at it\nI realized we didn't clearly explain what a window was. So I added\n\"window frame\" for that. I considered adding the term \"partition\" which\nis used in this context, but decided it wasn't necessary.\n\nI also added \"(process)\" to terms that define processes. So\nnow we have \"checkpointer (process)\" and so on.\n\nI rewrote the definition for \"atomic\" once again. Made it two\nglossdefs, because I can. If you don't like this, I can undo.\n\nI added \"recycling\".\n\nI still have to go through some other defs.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 1 Apr 2020 21:09:25 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Tue, Mar 31, 2020 at 03:26:02PM -0400, Corey Huinker wrote:\n> Just so I can prioritize my work, which of these things, along with your\n> suggestions in previous emails, would you say is a barrier to considering\n> this ready for a committer?\n\nTo answer your off-list inquiry, I'm not likely to mark it \"ready\" myself.\nI don't know if any of these would be a \"blocker\" for someone else.\n\n> > Here's some ideas; I'm *not* suggesting to include all of everything, but\n> > hopefully start with a coherent, self-contained list.\n> \n> > grep -roh '<firstterm>[^<]*' doc/src/ |sed 's/.*/\\L&/' |sort |uniq -c\n> > |sort -nr |less\n\nI looked through that list and found these that might be good to include now or\nin the future. 
Probably all of these need language polishing; I'm not\nrequesting you to just copy them in just to say they're there.\n\njoin: concept of combining columns from two tables or other relations. The\nresult of joining a table with N rows to another table with M rows might have\nup to N*M rows (if every row from the first table \"joins to\" every row on the\nsecond table).\n\nnormalized: A database schema is said to be \"normalized\" if its redundancy has\nbeen removed. Typically a \"normalized\" schema has a larger number of tables,\nwhich include ID columns, and queries typically involve joining together\nmultiple tables.\n\nquery: a request sent by a client to a server, usually to return results or to\nmodify data on the server;\n\nquery plan: the particular procedure by which the database server executes a\nquery. A simple query involving a single table might be planned using a\nsequential scan or an index scan. For a complex query involving multiple\ntables joined together, the optimizer attempts to determine the\ncheapest/fastest/best way to execute the query, by joining tables in the\noptimal order, and with the optimal join strategy.\n\nplanner/optimizer: ...\n\ntransaction isolation:\npsql: ...\n\nsynchronous: An action is said to be \"synchronous\" if it does not return to its\nrequestor until its completion;\n\nbind parameters: arguments to a SQL query that are sent separately from the\nquery text. For example, the query text \"SELECT * FROM tbl WHERE col=$1\" might\nbe executed for some certain value of the $1 parameter.
If parameters are sent\n\"in-line\" as a part of the query text, they need to be properly\nquoted/escaped/sanitized, to avoid accidental or malicious misbehavior if the\ninput contains special characters like semicolons or quotes.\n\n> > Maybe also:\n> > object identifier\n> > operator classes\n> > operator family\n> > visibility map\n\n-- \nJustin", "msg_date": "Wed, 1 Apr 2020 20:09:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Apr-01, Justin Pryzby wrote:\n\n> planner/optimizer: ...\n\nI propose we define \"planner\" and make \"optimizer\" a <glosssee> entry.\n\nI further propose not to define the term \"normalized\", at least not for\nnow. That seems a very deep rabbit hole.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 1 Apr 2020 22:41:11 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": ">\n> 2. I found out that \"see xyz\" and \"see also\" have bespoke markup in\n> Docbook -- <glosssee> and <glossseealso>. I changed some glossentries\n> to use those, removing some glossdefs and changing a couple of paras to\n> glossseealsos. I also removed all \"id\" properties from glossentries\n> that are just <glosssee>, because I think it's a mistake to have\n> references to entries that will make the reader look up a different\n> term; for me as a reader that's annoying, and I don't like to annoy\n> people.\n>\n\n+1 These structural enhancements are great. I'm fine with removing the id\nfrom just-glossee, and glad that we're keeping the entry to aid discovery.\n\n\n> I rewrote the definition for \"atomic\" once again. Made it two\n> glossdefs, because I can.
If you don't like this, I can undo.\n>\n\n+1 Splitting this into two definitions, one for each context, is the most\nsensible thing and I don't know why I didn't do that in the first place.", "msg_date": "Wed, 1 Apr 2020 23:34:31 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": ">\n> I propose we define \"planner\" and make \"optimizer\" a <glosssee> entry.\n>\n\nI have no objection to more entries, or edits to entries, but am concerned\nthat the process leads to someone having to manually merge several\nstart-from-scratch patches, with no clear sense of when we'll be done. It\nmay make sense to appoint an edit-collector.\n\n\n> I further propose not to define the term \"normalized\", at least not for\n> now. That seems a very deep rabbit hole.\n>\n\n+1 I think we appointed a guy named Xeno to work on that definition.
He\nsays he's getting close...", "msg_date": "Wed, 1 Apr 2020 23:44:56 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Apr-01, Corey Huinker wrote:\n\n> > I propose we define \"planner\" and make \"optimizer\" a <glosssee> entry.\n> \n> I have no objection to more entries, or edits to entries, but am concerned\n> that the process leads to someone having to manually merge several\n> start-from-scratch patches, with no clear sense of when we'll be done. I\n> may make sense to appoint an edit-collector.\n\nI added \"query planner\" (please suggest edits) and \"query\" (using\nJustin's def) and edited the defs of the ACID terms a little bit (in\nparticular moved the definition of atomic transaction to \"atomicity\"\nfrom \"atomic\", and made the latter reference the former instead of the\nother way around). Also removed \"Aggregating\" as suggested upthread. I\nmoved \"master\" over to \"primary (server)\", keeping the ref; we don't use\nthe former much.\n\nThere's only one \"serious\" mistake in the defs AFAICS which is that of\n\"global objects\".
Only roles, tablespaces and databases are global objects.\nObjects that are not in a schema (extensions, etc) are not \"global\" in\nthat sense.\n\nI think all <glossterm> used in definitions should have linkend.\n\nI hope to get this committed today, but I'm going to sleep now so if you\nwant to suggest further edits, now's the time. I think the terms\nproposed by Justin are good to have -- please discuss the defs he\nproposed -- only \"normalized\" I'd rather stay away from.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 2 Apr 2020 05:43:28 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "+1 and many thanks to Alvaro's edits.\n\n\nKind regards\n\nJürgen Purtz\n\n\n\n\n", "msg_date": "Thu, 2 Apr 2020 14:44:26 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Thu, Apr 2, 2020 at 8:44 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\n> +1 and many thanks to Alvaros edits.\n>\n>\nI did some of the grunt work Alvaro alluded to in v6, and the results are\nattached and they build, which means there are no invalid links.\n\nNotes:\n* no definition wordings were changed\n* added a linkend to all remaining glossterms that do not immediately\nfollow a glossentry\n* renamed id glossary-temporary-tables to glossary-temporary-table\n* temporarily re-added an id for glossary-row as we have many references to\nthat.
unsure if we should use the term Tuple in all those places or say Row\nwhile linking to glossary-tuple, or something else\n* temporarily re-added an id for glossary-segment, glossary-wal-segment,\nglossary-analytic-function, as those were also referenced and will need\nsimilar decisions made\n* added a stub entry for glossary-unique-index, unsure if it should have a\ndefinition on its own, or we split it into unique and index.\n* I noticed several cases where a glossterm is used twice in a definition,\nbut didn't de-term them\n* I'm curious about how we should tag a term when using it in its own\ndefinition. same as anywhere else?", "msg_date": "Thu, 2 Apr 2020 15:40:44 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Apr-02, Corey Huinker wrote:\n\n> On Thu, Apr 2, 2020 at 8:44 AM Jürgen Purtz <juergen@purtz.de> wrote:\n> \n> > +1 and many thanks to Alvaros edits.\n> >\n> >\n> I did some of the grunt work Alvaro alluded to in v6, and the results are\n> attached and they build, which means there are no invalid links.\n\nThank you! I had been working on some other changes myself, and merged\nmost of your changes. I give you v8.\n\n> * renamed id glossary-temporary-tables to glossary-temporary-table\n\nGood.\n\n> * temporarily re-added an id for glossary-row as we have many references to\n> that.
unsure if we should use the term Tuple in all those places or say Row\n> while linking to glossary-tuple, or something else\n\nI changed these to link to glossary-tuple; that entry already explains\nthese two other terms, so this seems acceptable.\n\n> * temporarily re-added an id for glossary-segment, glossary-wal-segment,\n> glossary-analytic-function, as those were also referenced and will need\n> similar decisions made\n\nDitto.\n\n> * added a stub entry for glossary-unique-index, unsure if it should have a\n> definition on it's own, or we split it into unique and index.\n\nI changed Unique Index into Unique Constraint, which is supposed to be\nthe overarching concept. Used that in the definition of primary key.\n\n> * I noticed several cases where a glossterm is used twice in a definition,\n> but didn't de-term them\n\nDid that for most I found, but I expect that some remain.\n\n> * I'm curious about how we should tag a term when using it in its own\n> definition. same as anywhere else?\n\nI think we should not tag those.\n\nI fixed the definition of global object as mentioned previously. Also\nadded \"client\", made \"connection\" have less importance compared to\n\"session\", and removed \"window frame\" (made \"window function\" refer to\n\"partition\" instead). If you (or anybody) have suggestions for the\ndefinition of \"client\" and \"session\", I'm all ears.\n\nI'm quite liking the result of this now. Thanks for all your efforts.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 2 Apr 2020 19:09:32 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Thu, Apr 02, 2020 at 07:09:32PM -0300, Alvaro Herrera wrote:\n> \"partition\" instead).
If you (or anybody) have suggestions for the\n> definition of \"client\" and \"session\", I'm all ears.\n\nWe already have Session:\n A Connection to the Database. \n\nI propose: Client:\n\tA host (or a process on a host) which connects to a server to make\nqueries or other requests.\n\nBut note, \"host\" is still defined as \"server\", which I didn't like.\n\nMaybe it should be:\n\tA computer which may act as a >client< or a >server<.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 2 Apr 2020 17:26:39 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "Pushed now. Many thanks to Corey who put the main thrust, and to Jürgen\nand Roger for the great help, and to Justin for the extensive review and\nFabien for the initial discussion.\n\nThis is just a starting point. Let's keep improving it. And now that\nwe have it, we can start thinking of patching the main part of the docs\nto make reference to it by using <glossterm> in key spots. Right now\nthe glossary links to itself, but it makes lots of sense to have other\nplaces point to it.\n\nOn 2020-Apr-02, Justin Pryzby wrote:\n\n> We already have Session:\n> A Connection to the Database. \n\nYes, but I didn't like that much, so I rewrote it -- I was asking for\nsuggestions on how to improve it further.
While I think we use those\nterms (connection and session) interchangeably sometimes, they're not\nexactly the same and the glossary should be more precise or at least\nless vague about the distinction.\n\n> I propose: Client:\n> \tA host (or a process on a host) which connects to a server to make\n> queries or other requests.\n> \n> But note, \"host\" is still defined as \"server\", which I didn't like.\n> \n> Maybe it should be:\n> \tA computer which may act as a >client< or a >server<.\n\nI changed all these terms, and a few others, added a couple more and\ncommented out some that I was not happy with, and pushed.\n\nI think this still needs more work:\n\n* We had \"serializable\", but none of the other isolation levels were\n defined. If we think we should define them, let's define them all.\n But also the definition we had for serializable was not correct;\n it seemed more suited to define \"repeatable read\".\n\n* I commented out the definition of \"sequence\", which seemed to go into\n excessive detail. Let's have a more concise definition?\n\n* We're missing exclusion constraints, and NOT NULL which is also a\n weird type of constraint.\n\nPatches for these omissions, and other contributions, welcome.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Apr 2020 13:45:13 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": ">\n> we have it, we can start thinking of patching the main part of the docs\n> to make reference to it by using <glossterm> in key spots. Right now\n> the glossary links to itself, but it makes lots of sense to have other\n> places point to it.\n>\n\nI have some ideas about how to patch the main docs, but will leave those to\na separate thread.\n\n\n> * I commented out the definition of \"sequence\", which seemed to go into\n> excessive detail.
Let's have a more concise definition?\n>\n\nThat one's my fault.\n\n\n>\n> Patches for these omissions, and other contributions, welcome.\n>\n\nThanks for all your work on this!", "msg_date": "Fri, 3 Apr 2020 13:34:17 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Fri, Apr 3, 2020 at 1:34 PM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n> Thanks for all your work on this!\n>\n\nAnd to add on to Corey's message of thanks, I also want to thank everyone\nfor their input and assistance on that. I am very grateful for the\nopportunity to contribute to this project!", "msg_date": "Fri, 3 Apr 2020 13:37:47 -0400", "msg_from": "Roger Harkavy <rogerharkavy@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-04-03 18:45, Alvaro Herrera wrote:\n> Pushed now.
Many thanks to Corey who put the main thrust, and to \n> Jürgen\n> and Roger for the great help, and to Justin for the extensive review \n> and\n> Fabien for the initial discussion.\n\nA few improvements:\n\n'its value that cannot' should be\n'its value cannot'\n\n'A newly created Cluster' should be\n'A newly created cluster'\n\n'term Cluster' should be\n'term cluster'\n\n'allowed within a Table.' should be\n'allowed within a table.'\n\n'of a SQL data type.' should be\n'of an SQL data type.'\n\n'A SQL command' should be\n'An SQL command'\n\n'i.e. the data' should be\n'i.e., the data'\n\n'that a a view is' should be\n'that a view is'\n\n'One of the tables that each contain part' should be\n'One of multiple tables that each contain part'\n\n'a partition is a user-defined criteria' should be\n'a partition is a user-defined criterion'\n\n'Roless are' should be\n'Roles are'\n\n'undo all of the operations' should be\n'undo all operations'\n\n'A special mark inside the sequence of steps' should be\n'A special mark in the sequence of steps'\n\n'are enforced unique' should be (?)\n'are enforced to be unique'\n\n'the term Schema is used' should be\n'the term schema is used'\n\n'belong to exactly one Schema.' should be\n'belong to exactly one schema.'\n\n'about the Cluster's activities' should be\n'about the cluster's activities'\n\n'the most common form of Relation' should be\n'the most common form of relation'\n\n'A Trigger executes' should be\n'A trigger executes'\n\n'and other closely related garbage-collection-like processing' should \nbe\n'and other processing'\n\n'each of the changes are replayed' should be\n'each of the changes is replayed'\n\nShould also be a lemma in the glossary:\n\nACID\n\n\n'archaic' should maybe be 'obsolete'.
That seems to me to be an easier \nword for non-native speakers.\n\n\nThanks,\n\nErik Rijkers\n\n\n", "msg_date": "Fri, 03 Apr 2020 19:41:40 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Apr-03, Erik Rijkers wrote:\n\n> On 2020-04-03 18:45, Alvaro Herrera wrote:\n> > Pushed now. Many thanks to Corey who put the main thrust, and to Jürgen\n> > and Roger for the great help, and to Justin for the extensive review and\n> > Fabien for the initial discussion.\n> \n> A few improvements:\n\nThanks! That gives me the attached patch.\n\n> Should also be a lemmata in the glossary:\n> \n> ACID\n\nAgreed. Wording suggestions welcome.\n\n> 'archaic' should maybe be 'obsolete'. That seems to me to be an easier word\n> for non-native speakers.\n\nBummer ;-)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 3 Apr 2020 17:51:43 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Fri, Apr 03, 2020 at 05:51:43PM -0300, Alvaro Herrera wrote:\n> - The internal representation of one value of a <acronym>SQL</acronym>\n> + The internal representation of one value of an <acronym>SQL</acronym>\n\nI'm not sure about this one.
The new glossary says \"a SQL\" seven times, and\ndoesn't say \"an sql\" at all.\n\n\"An SQL\" does appear to be more common in the rest of the docs, but if you\nchange one, I think you'd change them all.\n\nBTW it's now visible at:\nhttps://www.postgresql.org/docs/devel/glossary.html\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 3 Apr 2020 16:01:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-04-03 22:51, Alvaro Herrera wrote:\n> On 2020-Apr-03, Erik Rijkers wrote:\n> \n>> On 2020-04-03 18:45, Alvaro Herrera wrote:\n>> > Pushed now. Many thanks to Corey who put the main thrust, and to Jürgen\n>> > and Roger for the great help, and to Justin for the extensive review and\n>> > Fabien for the initial discussion.\n>> \n>> A few improvements:\n> \n> Thanks! That gives me the attached patch.\n> \n>> Should also be a lemmata in the glossary:\n>> \n>> ACID\n> \n> Agreed. Wording suggestions welcome.\n\nHow about:\n\n\"\nACID\n\nAtomicity, consistency, isolation, and durability. ACID is a set of \nproperties of database transactions intended to guarantee validity even \nin the event of power failures, etc.\nACID is concerned with how the database recovers from such failures that \nmight occur while processing a transaction.\n\"\n\n>> 'archaic' should maybe be 'obsolete'. 
That seems to me to be an easier \n>> word\n>> for non-native speakers.\n> \n> Bummer ;-)\n\nOK - we'll figure it out :)\n\n\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 03 Apr 2020 23:05:06 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Fri, 2020-04-03 at 16:01 -0500, Justin Pryzby wrote:\n> BTW it's now visible at:\n> https://www.postgresql.org/docs/devel/glossary.html\n\nGreat!\n\nSome comments:\n\n- SQL object: There are more kinds of objects, like roles or full text dictionaries.\n Perhaps better:\n\n Anything that is created with a CREATE statement, for example ...\n Most objects belong to a database schema, except ...\n\n Or do we consider a replication slot to be an object?\n\n- The glossary has \"Primary (server)\", but not \"Standby (server)\".\n That should be a synonym for \"Replica\".\n\n- Server: is that really our definition?\n I thought that \"server\" is what the glossary defines as \"instance\", and\n the thing called \"server\" in the glossary should really be called \"host\".\n\n Maybe I am too Unix-centered.\n\n Many people I know use \"instance\" synonymous to \"cluster\".\n\n- Role: I understand the motivation behind the definition (except that the word \"instance\"\n is ill chosen), but a role is more than a collection of privileges.\n How can a collection of privileges have a password or own an object?\n Perhaps, instead of the first sentence:\n\n A database object used for authentication, authorization and ownership.\n Both database users and user groups are \"roles\" in PostgreSQL.\n\n In the second sentence, \"roles\" is mis-spelled as \"roless\".\n\n- Null\n\n I think it should say \"It represents the absence of *a definite* value.\"\n Usually it is better to think of NULL as \"unknown\".\n\n- Function\n\n I don't know if \"transformation of 
data\" describes it well.\n Quite a lot of functions in PostgreSQL have side effects.\n How about:\n\n Procedural code stored in the database that can be used in SQL statements.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Sat, 04 Apr 2020 06:04:19 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "\n> BTW it's now visible at:\n> https://www.postgresql.org/docs/devel/glossary.html\n\nAwesome! Linking beetween defs and to relevant sections is great.\n\nBTW, I'm in favor of \"an SQL\" because I pronounce it \"ess-kew-el\", but I \nguess that people who say \"sequel\" would prefer \"a SQL\". Failing that, I'm \nfine with some heterogeneity, life is diverse!\n\nISTM that occurrences of these words elsewhere in the documentation should \nlink to the glossary definitions?\n\nAs the definitions are short and to the point, maybe the HTML display \ncould (also) \"hover\" the definitions when the mouse passes over the word, \nusing the \"title\" attribute?\n\n\"ACID\" does not appear as an entry, nor in the acronyms sections. Also no \nDCL, although DML & DDL are in acronyms.\n\nEntries could link to relevant wikipedia pages, like the acronyms section \ndoes?\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 4 Apr 2020 08:55:05 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "\n> - Server: is that really our definition?\n> I thought that \"server\" is what the glossary defines as \"instance\", and\n> the thing called \"server\" in the glossary should really be called \"host\".\n>\n> Maybe I am too Unix-centered.\n>\n> Many people I know use \"instance\" synonymous to \"cluster\".\n\nCurrently our documentation uses 'server', 'database server', 'host', \n'instance', ...  in an indifferent way. Similar problem with \ndatabase/cluster. 
Now we have the chance to come to a conclusion about \npreferred terms and their exact meaning. Definitions in the glossary \nshall be the guideline, the documentation itself can adopt these terms \nover time.\n\nHere is my point of view. We have distinguishable things:\n\n(1) (virtual) hardware\n\n(2) an abstract structure of several object types, which models a \nmanagement system for data\n\n(3) a group of closely related processes. They implement the internal \n'business logic' or 'work flow' of (2).\n\n(4) abstract data, which fits into (2)\n\n(5) a physical representation of (4). Mainly and long lasting on disc, \nbut - partly - mirrored in RAM.\n\n(6) client processes, which connect to (3)\n\n\nIMO for (1) the two terms 'server' and 'host' both have their \njustification, depending on the context. There are historical terms \n('server-side', 'foreign server', 'client/server architecture', 'host' \nor 'host name' for IP-specification, 'host variable') which cannot be \nchanged. Therefore we shall accept both with identical definition and use \nthem as synonyms. Independent from this, there are many paragraphs in \nthe documentation, where they are used in a misleading sense ('server \ncrash', '... started the server', 'database server'). They should be \nchanged over time.\n\nFor me, (3) is an 'instance' and (5) is a 'cluster'. There is a 1:1 \nrelation between the two, because one 'instance' controls exactly one \n'cluster'. But the 'instance' consists of processes and memory whereas \nthe 'cluster' of databases which resides (mainly) on disc.\n\nConcerning (6) we are not interested in any hardware-question. We are \nonly interested in the processes, which connect to backend processes.
We \nshould only define the term \"Client process\".\n\nKind regards, Jürgen\n\n\n\n\n", "msg_date": "Sat, 4 Apr 2020 14:30:14 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Sat, Apr 4, 2020 at 2:55 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> > BTW it's now visible at:\n> > https://www.postgresql.org/docs/devel/glossary.html\n\n\nNice. I went looking for it yesterday and the docs hadn't rebuilt yet.\n\n\n> ISTM that occurrences of these words elsewhere in the documentation should\n> link to the glossary definitions?\n>\n\nYes, that's a big project. I was considering writing a script to compile\nall the terms as search terms, paired with their glossary ids, and then\ninvoke git grep to identify all pages that have term FOO but don't have\nglossary-foo. We would then go about gloss-linking those pages as\nappropriate, but only a few pages at a time to keep scope sane. Also, I'm\nunclear about the circumstances under which we should _not_ tag a term. I\nremember hearing that we should only tag it on the first usage, but is that\nper section or per page?\n\n\n> As the definitions are short and to the point, maybe the HTML display\n> could (also) \"hover\" the definitions when the mouse passes over the word,\n> using the \"title\" attribute?\n>\n\nI like that idea, if it doesn't conflict with accessibility standards\n(maybe that's just titles on images, not sure).\nI suspect we would want to just carry over the first sentence or so with a\n... to avoid cluttering the screen with my overblown definition of a\nsequence.\nI suggest we pursue this idea in another thread, as we'd probably want to\ndo it for acronyms as well.\n\n\n>\n> \"ACID\" does not appear as an entry, nor in the acronyms sections. 
Also no\n> DCL, although DML & DDL are in acronyms.\n>\n\nIt needs to be in the acronyms page, and in light of all the docbook\nwizardry that I've learned from Alvaro, those should probably get their own\nacronym-foo ids as well. The cutoff date for 13 fast approaches, so it\nmight be for 14+ unless doc-only patches are treated differently.\n\n\n> Entries could link to relevant wikipedia pages, like the acronyms section\n> does?\n>\n\nThey could. I opted not to do that because each external link invites\ndebate about how authoritative that link is, which is easier to do with\nacronyms. Now that the glossary is a reality, it's easier to have those\ndiscussions.", "msg_date": "Sat, 4 Apr 2020 12:52:29 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "\nHi Corey,\n\n>> ISTM that occurrences of these words elsewhere in the documentation should\n>> link to the glossary definitions?\n>\n> Yes, that's a big project. I was considering writing a script to compile\n> all the terms as search terms, paired with their glossary ids, and then\n> invoke git grep to identify all pages that have term FOO but don't have\n> glossary-foo.
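A minimal sketch of the kind of check being described here (everything below is invented for illustration -- the term, file names, and scratch directory are hypothetical; a real script would walk doc/src/sgml/ with the full term list):

```shell
# Hypothetical demo: among files that mention a glossary term, list the
# ones that never link to its glossary-<term> id.  Runs against made-up
# files in a scratch directory, not the real PostgreSQL tree.
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# One page that links the term properly, one that does not.
printf 'A checkpoint flushes dirty buffers.\n' > page1.sgml
printf '<glossterm linkend="glossary-checkpoint">checkpoint</glossterm>\n' > page2.sgml

term=checkpoint
# grep -l: files mentioning the term; grep -L: of those, files lacking the link.
grep -lri "$term" --include='*.sgml' . | xargs grep -L "glossary-$term"
# prints ./page1.sgml
```

The same -l/-L pairing works with git grep inside the source tree, which avoids scanning build artifacts.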
We would then go about gloss-linking those pages as\n> appropriate, but only a few pages at a time to keep scope sane.\n\nI'd go for scripting the thing.\n\nShould the glossary be backpatched, to possibly ease doc backpatches?\n\n> Also, I'm unclear about the circumstances under which we should _not_ \n> tag a term.\n\nAt least when they are explained locally.\n\n> I remember hearing that we should only tag it on the first usage, but is \n> that per section or per page?\n\nPage?\n\n>> As the definitions are short and to the point, maybe the HTML display\n>> could (also) \"hover\" the definitions when the mouse passes over the word,\n>> using the \"title\" attribute?\n>\n> I like that idea, if it doesn't conflict with accessibility standards\n> (maybe that's just titles on images, not sure).\n\nThe following worked fine:\n\n <html><head><title>Title Tag Test</title></head>\n <body>The <a href=\"acid.html\" title=\"ACID stands for Atomic, Consistent, Isolated & Durable\">ACID</a>\n property is great.\n </body></html>\n\nSo basically the def can be put on the glossary link, however retrieving \nthe definition should be automatic.\n\n> I suspect we would want to just carry over the first sentence or so with a\n> ... to avoid cluttering the screen with my overblown definition of a\n> sequence.\n\nDunno. The definitions are quite short, maybe they can fit whole.\n\n> I suggest we pursue this idea in another thread, as we'd probably want to\n> do it for acronyms as well.\n\nOr not. I'd test committer temperature before investing time because it \nwould mean that backpatching the doc would be a little harder.\n\n>> Entries could link to relevant wikipedia pages, like the acronyms section\n>> does?\n>\n> They could. I opted not to do that because each external link invites\n> debate about how authoritative that link is, which is easier to do with\n> acronyms. 
Now that the glossary is a reality, it's easier to have those\n> discussions.\n\nOk.\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 5 Apr 2020 09:38:17 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "a) Some rearrangements of the sequence of terms to meet alphabetical order.\n\nb) <glossterm id=\"linkend-xxx\">� --> <glossterm linkend=\"glossary-xxx\">� \nin two cases. Or should it be a <firstterm>?\n\n\nKind regards, J�rgen", "msg_date": "Sun, 5 Apr 2020 10:41:44 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Apr-05, J�rgen Purtz wrote:\n\n> a) Some rearrangements of the sequence of terms to meet alphabetical order.\n\nThanks, will get this pushed.\n\n> b) <glossterm id=\"linkend-xxx\">� --> <glossterm linkend=\"glossary-xxx\">� in\n> two cases. Or should it be a <firstterm>?\n\nAh, yeah, those should be linkend.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 5 Apr 2020 15:00:46 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Apr-05, Fabien COELHO wrote:\n\n> > > As the definitions are short and to the point, maybe the HTML display\n> > > could (also) \"hover\" the definitions when the mouse passes over the word,\n> > > using the \"title\" attribute?\n> > \n> > I like that idea, if it doesn't conflict with accessibility standards\n> > (maybe that's just titles on images, not sure).\n> \n> The following worked fine:\n> \n> <html><head><title>Title Tag Test</title></head>\n> <body>The <a href=\"acid.html\" title=\"ACID stands for Atomic, Consistent, Isolated & Durable\">ACID</a>\n> property is great.\n> </body></html>\n\nI don't see myself patching the 
stylesheet as would be needed to do\nthis.\n\n> > I suggest we pursue this idea in another thread, as we'd probably want to\n> > do it for acronyms as well.\n> \n> Or not. I'd test committer temperature before investing time because it\n> would mean that backpatching the doc would be a little harder.\n\nTBH I can't get very excited about this idea. Maybe other documentation\nchampions would be happier about doing that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 5 Apr 2020 18:07:15 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "> On 2020-Apr-05, Jürgen Purtz wrote:\n>\n>> a) Some rearrangements of the sequence of terms to meet alphabetical order.\n> Thanks, will get this pushed.\n>\n>> b) <glossterm id=\"linkend-xxx\">  --> <glossterm linkend=\"glossary-xxx\">  in\n>> two cases. Or should it be a <firstterm>?\n> Ah, yeah, those should be linkend.\n>\nTerm 'relation': A sequence is internally a table with one row - right? \nShall we extend the list of concrete relations by 'sequence'? Or is this \nnot necessary because 'table' is already there?\n\nKind regards, Jürgen\n\n\n\n\n", "msg_date": "Sat, 11 Apr 2020 14:10:21 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": ">\n>\n> Term 'relation': A sequence is internally a table with one row - right?\n> Shall we extend the list of concrete relations by 'sequence'? Or is this\n> not necessary because 'table' is already there?\n>\n\nI wrote one for sequence, it was a bit math-y for Alvaro's taste, so we're\ngoing to try again.\n\n", "msg_date": "Sat, 11 Apr 2020 15:47:47 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 11.04.20 21:47, Corey Huinker wrote:\n>\n>\n> Term 'relation': A sequence is internally a table with one\n> row - right?\n> Shall we extend the list of concrete relations by 'sequence'? Or\n> is this\n> not necessary because 'table' is already there?\n>\n>\n> I wrote one for sequence, it was a bit math-y for Alvaro's taste, so \n> we're going to try again.\n>\nThis seems to be a misunderstanding. My question was whether we \nshall extend the definition of Relation to: \"... Tables, views, \nforeign tables, materialized views, indexes, and *sequences* are \nall relations.\"\n\nKind regards, Jürgen\n\n\n\n\n", "msg_date": "Sun, 12 Apr 2020 09:36:50 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "Why are all the glossary terms capitalized? 
Seems kind of strange.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Apr 2020 21:15:13 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Wed, Apr 29, 2020 at 3:15 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> Why are all the glossary terms capitalized? Seems kind of strange.\n>\n>\nThey weren't intended to be, and they don't appear to be in the page I'm\nlooking at. Are you referring to the anchor like in\nhttps://www.postgresql.org/docs/devel/glossary.html#GLOSSARY-RELATION ? If\nso, that all-capping is part of the rendering, as the ids were all named in\nall-lower-case.\n\n", "msg_date": "Wed, 29 Apr 2020 15:55:45 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "Thanks everybody. I have compiled together all the suggestions and the\nresult is in the attached patch. 
Some of it is of my own devising.\n\n* I changed \"instance\", and made \"cluster\" be mostly a synonym of that.\n\n* I removed \"global SQL object\" and made \"SQL object\" explain it.\n\n* Added definitions for ACID, sequence, bloat, fork, FSM, VM, data page,\n transaction ID, epoch.\n\n* Changed \"a SQL\" to \"an sql\" everywhere.\n\n* Sorted alphabetically.\n\n* Removed caps in term names.\n\nI think I should get this pushed, and if there are further suggestions,\nthey're welcome.\n\nDim Fontaine and others suggested a number of terms that could be\nincluded; see https://twitter.com/alvherre/status/1246192786287865856\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 14 May 2020 20:00:17 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Thu, May 14, 2020 at 08:00:17PM -0400, Alvaro Herrera wrote:\n> + <glossterm>ACID</glossterm>\n> + <glossdef>\n> + <para>\n> + <glossterm linkend=\"glossary-atomicity\">Atomicity</glossterm>,\n> + <glossterm linkend=\"glossary-consistency\">consistency</glossterm>,\n> + <glossterm linkend=\"glossary-isolation\">isolation</glossterm>, and\n> + <glossterm linkend=\"glossary-durability\">durability</glossterm>.\n> + A set of properties of database transactions intended to guarantee validity\n> + in concurrent operation and even in event of errors, power failures, etc.\n\nI would capitalize Consistency, Isolation, Durability, and say \"These four\nproperties\" or \"This set of four properties\" (although that makes this sound\nmore like a fun game of DBA jeopardy).\n\n> + <glossterm>Background writer (process)</glossterm>\n> <glossdef>\n> <para>\n> - A process that continuously writes dirty pages from\n> + A process that continuously writes dirty\n\nI don't like \"continuously\"\n\n> + <glossterm linkend=\"glossary-data-page\">data 
pages</glossterm> from\n> \n> + <glossentry id=\"glossary-bloat\">\n> + <glossterm>Bloat</glossterm>\n> + <glossdef>\n> + <para>\n> + Space in data pages which does not contain relevant data,\n> + such as unused (free) space or outdated row versions.\n\n\"current row versions\" instead of relevant ?\n\n> + <glossentry id=\"glossary-data-page\">\n> + <glossterm>Data page</glossterm>\n> + <glossdef>\n> + <para>\n> + The basic structure used to store relation data.\n> + All pages are of the same size.\n> + Data pages are typically stored on disk, each in a specific file,\n> + and can be read to <glossterm linkend=\"glossary-shared-memory\">shared buffers</glossterm>\n> + where they can be modified, becoming\n> + <firstterm>dirty</firstterm>. They get clean by being written down\n\nsay \"They become clean when written to disk\"\n\n> + to disk. New pages, which initially exist in memory only, are also\n> + dirty until written.\n\n> + <glossentry id=\"glossary-fork\">\n> + <glossterm>Fork</glossterm>\n> + <glossdef>\n> + <para>\n> + Each of the separate segmented file sets that a relation stores its\n> + data in. 
There exist a <firstterm>main fork</firstterm> and two secondary\n\n\"in which a relation's data is stored\"\n\n> + forks: the <glossterm linkend=\"glossary-fsm\">free space map</glossterm>\n> + <glossterm linkend=\"glossary-vm\">visibility map</glossterm>.\n\nmissing \"and\" ?\n\n> + <glossentry id=\"glossary-fsm\">\n> + <glossterm>Free space map (fork)</glossterm>\n> + <glossdef>\n> + <para>\n> + A storage structure that keeps metadata about each data page in a table's\n> + main storage space.\n\ns/in/of/\n\njust say \"main fork\"?\n\n> The free space map entry for each space stores the\n\nfor each page ?\n\n> + amount of free space that's available for future tuples, and is structured\n> + so it is efficient to search for available space for a new tuple of a given\n> + size.\n\n..to be efficiently searched to find free space..\n\n> The heap is realized within\n> - <glossterm linkend=\"glossary-file-segment\">segment files</glossterm>.\n> + <glossterm linkend=\"glossary-file-segment\">segmented files</glossterm>\n> + in the relation's <glossterm linkend=\"glossary-fork\">main fork</glossterm>.\n\nHm, the files aren't segmented. 
Say \"one or more file segments per relation\"\n\n> + There also exist local objects that do not belong to schemas; some examples are\n> + <glossterm linkend=\"glossary-extension\">extensions</glossterm>,\n> + <glossterm linkend=\"glossary-cast\">data type casts</glossterm>, and\n> + <glossterm linkend=\"glossary-foreign-data-wrapper\">foreign data wrappers</glossterm>.\n\nDon't extensions have schemas ?\n\n> + <glossentry id=\"glossary-xid\">\n> + <glossterm>Transaction ID</glossterm>\n> + <glossdef>\n> + <para>\n> + The numerical, unique, sequentially-assigned identifier that each\n> + transaction receives when it first causes a database modification.\n> + Frequently abbreviated <firstterm>xid</firstterm>.\n\nabbreviated *as* xid\n\n> + approximately four billion write transactions IDs can be generated;\n> + to permit the system to run for longer than that would allow,\n\nremove \"would allow\"\n\n> <para>\n> The process of removing outdated <glossterm linkend=\"glossary-tuple\">tuple\n> versions</glossterm> from tables, and other closely related\n\nactually tables or materialized views..\n\n> + <glossentry id=\"glossary-vm\">\n> + <glossterm>Visibility map (fork)</glossterm>\n> + <glossdef>\n> + <para>\n> + A storage structure that keeps metadata about each data page\n> + in a table's main storage space. 
The visibility map entry for\n\ns/in/of/\n\nmain fork?\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 14 May 2020 20:03:04 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "Applied all these suggestions, and made a few additional very small\nedits, and pushed -- better to ship what we have now in beta1, but\nfurther edits are still possible.\n\nOther possible terms to define, including those from the tweet I linked\nto and a couple more:\n\narchive\navailability\nbackup\ncomposite type\ncommon table expression\ndata type\ndomain\ndump\nexport\nfault tolerance\nGUC\nhigh availability\nhot standby\nLSN\nrestore\nsecondary server (?)\nsnapshot\ntransactions per second\n\nAnybody want to try their hand at a tentative definition?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 15 May 2020 13:26:19 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-05-15 19:26, Alvaro Herrera wrote:\n> Applied all these suggestions, and made a few additional very small\n> edits, and pushed -- better to ship what we have now in beta1, but\n> further edits are still possible.\n\nI've gone through the glossary as committed and found some more small \nthings; patch attached.\n\nThanks,\n\n\nErik Rijkers\n\n\n> Other possible terms to define, including those from the tweet I linked\n> to and a couple more:\n> \n> archive\n> availability\n> backup\n> composite type\n> common table expression\n> data type\n> domain\n> dump\n> export\n> fault tolerance\n> GUC\n> high availability\n> hot standby\n> LSN\n> restore\n> secondary server (?)\n> snapshot\n> transactions per second\n> \n> Anybody want to try their hand at a tentative definition?\n> \n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 
24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 16 May 2020 23:45:39 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-May-16, Erik Rijkers wrote:\n\n> On 2020-05-15 19:26, Alvaro Herrera wrote:\n> > Applied all these suggestions, and made a few additional very small\n> > edits, and pushed -- better to ship what we have now in beta1, but\n> > further edits are still possible.\n> \n> I've gone through the glossary as committed and found some more small\n> things; patch attached.\n\nAll pushed! Many thanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 16 May 2020 22:22:16 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 15.05.20 02:00, Alvaro Herrera wrote:\n> Thanks everybody. I have compiled together all the suggestions and the\n> result is in the attached patch. Some of it is of my own devising.\n>\n> * I changed \"instance\", and made \"cluster\" be mostly a synonym of that.\nIn my understanding, \"instance\" and \"cluster\" should be different \nthings, not only synonyms. \"instance\" can be the term for permanently \nfluctuating objects (processes and RAM) and \"cluster\" can denote the \nmore static objects (directories and files). What do you think? If you \nagree, I would create a patch.\n> * I removed \"global SQL object\" and made \"SQL object\" explain it.\n+1., but see the (huge) different spellings in patch.\n\nbloat: changed 'current row' to 'relevant row' because not only the \nyoungest one is relevant (non-bloat).\n\ndata type casts: Are you sure that they are global? 
In pg_cast \n'relisshared' is 'false'.\n\n--\n\nJürgen Purtz", "msg_date": "Sun, 17 May 2020 08:15:48 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-May-17, Jürgen Purtz wrote:\n\n> On 15.05.20 02:00, Alvaro Herrera wrote:\n> > Thanks everybody. I have compiled together all the suggestions and the\n> > result is in the attached patch. Some of it is of my own devising.\n> > \n> > * I changed \"instance\", and made \"cluster\" be mostly a synonym of that.\n> In my understanding, \"instance\" and \"cluster\" should be different things,\n> not only synonyms. \"instance\" can be the term for permanently fluctuating\n> objects (processes and RAM) and \"cluster\" can denote the more static objects\n> (directories and files). What do you think? If you agree, I would create a\n> patch.\n\nI don't think that's the general understanding of those terms. For all\nI know, they *are* synonyms, and there's no specific term for \"the\nfluctuating objects\" as you call them. The instance is either running\n(in which case there are processes and RAM) or it isn't.\n\n\n> > * I removed \"global SQL object\" and made \"SQL object\" explain it.\n> +1., but see the (huge) different spellings in patch.\n\nThis seems a misunderstanding of what \"local\" means. Any object that\nexists in a database is local, regardless of whether it exists in a\nschema or not. \"Extensions\" is one type of object that does not belong\nin a schema. \"Foreign data wrapper\" is another type of object that does\nnot belong in a schema. Same with data type casts. They are *not*\nglobal objects.\n\n> bloat: changed 'current row' to 'relevant row' because not only the youngest\n> one is relevant (non-bloat).\n\nHm. TBH I'm not sure of this term at all. 
I think we sometimes use the\nterm \"bloat\" to talk about the dead rows only, ignoring the free space.\n\n> data type casts: Are you sure that they are global? In pg_cast 'relisshared'\n> is 'false'.\n\nI'm not saying they're global. I'm saying they're outside schemas.\nMaybe this definition needs more rewording, if this bit is unclear.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 17 May 2020 02:51:26 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 17.05.20 08:51, Alvaro Herrera wrote:\n> Any object that\n> exists in a database is local, regardless of whether it exists in a\n> schema or not.\nThis implies that the term \"local\" is unnecessary, just call them \"SQL \nobject\".\n> \"Extensions\" is one type of object that does not belong\n> in a schema. \"Foreign data wrapper\" is another type of object that does\n> not belong in a schema. ... They are*not*\n> global objects.\npostgres_fdw is a module among many others. It's only an example for \n\"extensions\" and has no different nature. Yes, they are not global SQL \nobjects because they don't belong to the cluster.\n\nIn summary we have 3 types of objects: belonging to a schema, to a \ndatabase, or to the cluster (global). Maybe, we can avoid the use of the \ndifferent names 'local SQL object' and 'global SQL object' at all and \njust call them 'SQL object'. 'global SQL object' is used only once. We \ncould rephrase \"A set of databases and accompanying global SQL objects \n... \" to \"A set of databases and accompanying SQL objects, which exists \nat the cluster level, ... \"\n\n> TBH I'm not sure of this term at all. I think we sometimes use the\n> term \"bloat\" to talk about the dead rows only, ignoring the free space.\n\nThat's a good example for the necessity of the glossary. 
Currently we \ndon't have a common understanding about all of our used terms. The \nglossary shall fix that and give a mandatory definition - after a \nclearing discussion.\n\n--\n\nJürgen Purtz\n\n\n\n\n", "msg_date": "Sun, 17 May 2020 10:09:48 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 17.05.20 08:51, Alvaro Herrera wrote:\n>> On 15.05.20 02:00, Alvaro Herrera wrote:\n>>> Thanks everybody. I have compiled together all the suggestions and the\n>>> result is in the attached patch. Some of it is of my own devising.\n>>>\n>>> * I changed \"instance\", and made \"cluster\" be mostly a synonym of that.\n>> In my understanding, \"instance\" and \"cluster\" should be different things,\n>> not only synonyms. \"instance\" can be the term for permanently fluctuating\n>> objects (processes and RAM) and \"cluster\" can denote the more static objects\n>> (directories and files). What do you think? If you agree, I would create a\n>> patch.\n> I don't think that's the general understanding of those terms. For all\n> I know, they*are* synonyms, and there's no specific term for \"the\n> fluctuating objects\" as you call them. The instance is either running\n> (in which case there are processes and RAM) or it isn't.\n>\nWe have the basic tools \"initdb — create a new PostgreSQL database \ncluster\" which affects nothing but files, and we have \"pg_ctl — \ninitialize, start, stop, or control a PostgreSQL server\" which - \ndirectly - affects nothing but processes and RAM. (Here the term \n\"server\" collides with new definitions in the glossary. But that's \nanother story.)\n\n--\n\nJürgen Purtz\n\n\n\n\n\n\n\n\nOn 17.05.20 08:51, Alvaro Herrera\n wrote:\n\n\n\nOn 15.05.20 02:00, Alvaro Herrera wrote:\n\n\nThanks everybody. I have compiled together all the suggestions and the\nresult is in the attached patch. 
Some of it is of my own devising.\n>>>\n>>> * I changed \"instance\", and made \"cluster\" be mostly a synonym of that.\n>> In my understanding, \"instance\" and \"cluster\" should be different things,\n>> not only synonyms. \"instance\" can be the term for permanently fluctuating\n>> objects (processes and RAM) and \"cluster\" can denote the more static objects\n>> (directories and files). What do you think? If you agree, I would create a\n>> patch.\n> I don't think that's the general understanding of those terms. For all\n> I know, they*are* synonyms, and there's no specific term for \"the\n> fluctuating objects\" as you call them. The instance is either running\n> (in which case there are processes and RAM) or it isn't.\n>\nWe have the basic tools \"initdb — create a new PostgreSQL database \ncluster\" which affects nothing but files, and we have \"pg_ctl — \ninitialize, start, stop, or control a PostgreSQL server\" which - \ndirectly - affects nothing but processes and RAM. (Here the term \n\"server\" collides with new definitions in the glossary. But that's \nanother story.)\n\n--\n\nJürgen Purtz\n\n\n\n\n", "msg_date": "Sun, 17 May 2020 10:44:43 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-05-17 08:51, Alvaro Herrera wrote:\n> On 2020-May-17, Jürgen Purtz wrote:\n> \n>> On 15.05.20 02:00, Alvaro Herrera wrote:\n>> > Thanks everybody. I have compiled together all the suggestions and the\n>> >\n>> > * I changed \"instance\", and made \"cluster\" be mostly a synonym of that.\n>> In my understanding, \"instance\" and \"cluster\" should be different \n>> things,\n> \n> I don't think that's the general understanding of those terms. For all\n> I know, they *are* synonyms, and there's no specific term for \"the\n> fluctuating objects\" as you call them. 
So let's discuss it and maybe gather opinions from others.\n\nI think the terms under discussion are just\n\n* cluster\n* instance\n* server\n\nWe don't have \"host\" (I just made it a synonym for server), but perhaps\nwe can add that too, if it's useful. It would be good to be consistent\nwith historical Postgres usage, such as the initdb usage of \"cluster\"\netc.\n\nPerhaps we should not only define what our use of each term is, but also\nexplain how each term is used outside PostgreSQL and highlight the\ndifferences. (This would be particularly useful for \"cluster\" ISTM.)\n\nIt seems difficult to get this sorted out before beta1, but there's\nstill time before the glossary is released.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 17 May 2020 11:28:51 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 17.05.20 17:28, Alvaro Herrera wrote:\n> On 2020-May-17, Erik Rijkers wrote:\n>\n>> On 2020-05-17 08:51, Alvaro Herrera wrote:\n>>> I don't think that's the general understanding of those terms. For all\n>>> I know, they*are* synonyms, and there's no specific term for \"the\n>>> fluctuating objects\" as you call them. The instance is either running\n>>> (in which case there are processes and RAM) or it isn't.\n>> For what it's worth, I've also always understood 'instance' as 'a running\n>> database'. I admit it might be a left-over from my oracle years:\n>>\n>> https://docs.oracle.com/cd/E11882_01/server.112/e40540/startup.htm#CNCPT601\n>>\n>> There, 'instance' clearly refers to a running database. When that database\n>> is stopped, it ceases to be an instance.\n> I've never understood it that way, but I'm open to having my opinion on\n> it changed. 
So let's discuss it and maybe gather opinions from others.\n\nI think the terms under discussion are just\n\n* cluster\n* instance\n* server\n\nWe don't have \"host\" (I just made it a synonym for server), but perhaps\nwe can add that too, if it's useful. It would be good to be consistent\nwith historical Postgres usage, such as the initdb usage of \"cluster\"\netc.\n\nPerhaps we should not only define what our use of each term is, but also\nexplain how each term is used outside PostgreSQL and highlight the\ndifferences. (This would be particularly useful for \"cluster\" ISTM.)\n\nIt seems difficult to get this sorted out before beta1, but there's\nstill time before the glossary is released.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 17 May 2020 11:28:51 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 17.05.20 17:28, Alvaro Herrera wrote:\n> On 2020-May-17, Erik Rijkers wrote:\n>\n>> On 2020-05-17 08:51, Alvaro Herrera wrote:\n>>> I don't think that's the general understanding of those terms. For all\n>>> I know, they*are* synonyms, and there's no specific term for \"the\n>>> fluctuating objects\" as you call them. The instance is either running\n>>> (in which case there are processes and RAM) or it isn't.\n>> For what it's worth, I've also always understood 'instance' as 'a running\n>> database'. I admit it might be a left-over from my oracle years:\n>>\n>> https://docs.oracle.com/cd/E11882_01/server.112/e40540/startup.htm#CNCPT601\n>>\n>> There, 'instance' clearly refers to a running database. When that database\n>> is stopped, it ceases to be an instance.\n> I've never understood it that way, but I'm open to having my opinion on\n> it changed. 
For \nme, the term *instance* makes sense, the extensions *standby instance* \nand *remote instance* in their context too.\n\nThe next essential component is the data itself. It is organized as a \ngroup of databases plus some common management information (global, \npg_wal, pg_xact, pg_tblspc, ...). The complete data must be treated as a \nwhole because the management information concerns all databases. Its \nnature is different from the processes and shared buffers. Of course, \nits content changes, but it has a steady nature. It even survives a \n'power down'. There is one command to instantiate a new incarnation of \nthe directory structure and all files. In summary, it's something of its \nown and should have its own name. 'database' is not possible because it \nconsists of databases and other things. My favorite is *cluster*; \n*database cluster* is also possible.\n\nserver/host: We need a term to describe the underlying hardware \nrespectively the virtual machine or container, where PG is running. I \nsuggest to use both *server* and *host*. In computer science, both have \ntheir eligibility and are widely used. Everybody understands \n*client/server architecture* or *host* in TCP/IP configuration. We \ncannot change such matter of course. I suggest to use both depending on \nthe context, but with the same meaning: \"real hardware, a container, or \na virtual machine\".\n\n-- \n\nJürgen Purtz\n\n(PS: I added the docs mailing list)", "msg_date": "Mon, 18 May 2020 18:08:01 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Mon, 2020-05-18 at 18:08 +0200, Jürgen Purtz wrote:\n> cluster/instance: PG (mainly) consists of a group of processes that commonly\n> act on shared buffers. The processes are very closely related to each other\n> and with the buffers. They exist altogether or not at all. They use a common\n> initialization file and are incarnated by one command. Everything exists\n> solely in RAM and therefor has a fluctuating nature. In summary: they build\n> a unit and this unit needs to have a name of itself. In some pages we used\n> to use the term *instance* - sometimes in extended forms: *database instance*,\n> *PG instance*, *standby instance*, *standby server instance*, *server instance*,\n> or *remote instance*.  For me, the term *instance* makes sense, the extensions\n> *standby instance* and *remote instance* in their context too.\n\nFWIW, I feel somewhat like Alvaro on that point; I use those terms synonymously,\nperhaps distinguishing between a \"started cluster\" and a \"stopped cluster\".\nAfter all, \"cluster\" refers to \"a cluster of databases\", which are there, regardless\nif you start the server or not.\n\nThe term \"cluster\" is unfortunate, because to most people it suggests a group of\nmachines, so the term \"instance\" is better, but that ship has sailed long ago.\n\nThe static part of a cluster to me is the \"data directory\".\n\n> server/host: We need a term to describe the underlying hardware respectively\n> the virtual machine or container, where PG is running. I suggest to use both\n> *server* and *host*. 
In computer science, both have their eligibility and are\n> widely used. Everybody understands *client/server architecture* or *host* in\n> TCP/IP configuration. We cannot change such matter of course. I suggest to\n> use both depending on the context, but with the same meaning: \"real hardware,\n> a container, or a virtual machine\".\n\nOn this I have a strong opinion because of my Unix mindset.\n\"machine\" and \"host\" are synonyms, and it doesn't matter to the database if they\nare virtualized or not. You can always disambiguate by adding \"virtual\" or \"physical\".\n\nA \"server\" is a piece of software that responds to client requests, never a machine.\nIn my book, this is purely Windows jargon. The term \"client-server architecture\"\nthat you quote emphasized that.\n\nPerhaps \"machine\" would be the preferable term, because \"host\" is more prone to\nmisunderstandings (except in a networking context).\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 19 May 2020 08:17:26 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "I think there needs to be a careful analysis of the language and a formal\neffort to stabilise it for the future.\n\nIn the context of, say, an Oracle T series, which is partitioned into\nmultiple domains (virtual machines) in it, each\nof these has multiple CPUs, and can run an instance of the OS which hosts\nmultiple virtual instances\nof the same or different OSes. 
Some domains might do this while others do\nnot!\n\nA host could be a domain, one of many virtual machines, or it could be one\nof many hosts on that VM\nbut even these hosts could be virtual machines that each runs several\nvirtual servers!\n\nOf course, PostgreSQL can run on any tier of this regime, but the\ndocumentation at least needs to be consistent\nabout language.\n\nA \"machine\" should probably refer to hardware, although I would accept that\na domain might count as \"virtual\nhardware\" while a host should probably refer to a single instance of OS.\n\nOf course it is possible for a single instance of OS to run multiple\ninstances of PostgreSQL, and people do this. (I have\nin the past).\n\nSlightly more confusingly, it would appear possible for a single instance\nof an OS to have multiple IP addresses\nand if there are multiple instances of PostgreSQL, they may serve different\nIP addresses uniquely, or\nshare them. I think this case suggests that a host probably best describes\nan OS instance. I might be wrong.\n\nThe word \"server\" might be an instance of any of the above, or a waiter\nwith a bowl of soup. It is best\nreserved for situations where clarity is not required.\n\nIf you are new to all this, I am sure it is very confusing, and\ninconsistent language is not going to help.\n\nAndrew\n\n\n\nAFAICT\n\n\n\n\n\nOn Tue, 19 May 2020 at 07:17, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Mon, 2020-05-18 at 18:08 +0200, Jürgen Purtz wrote:\n> > cluster/instance: PG (mainly) consists of a group of processes that\n> commonly\n> > act on shared buffers. The processes are very closely related to each\n> other\n> > and with the buffers. They exist altogether or not at all. They use a\n> common\n> > initialization file and are incarnated by one command. Everything exists\n> > solely in RAM and therefor has a fluctuating nature. In summary: they\n> build\n> > a unit and this unit needs to have a name of itself. 
In some pages we\n> used\n> > to use the term *instance* - sometimes in extended forms: *database\n> instance*,\n> > *PG instance*, *standby instance*, *standby server instance*, *server\n> instance*,\n> > or *remote instance*. For me, the term *instance* makes sense, the\n> extensions\n> > *standby instance* and *remote instance* in their context too.\n>\n> FWIW, I feel somewhat like Alvaro on that point; I use those terms\n> synonymously,\n> perhaps distinguishing between a \"started cluster\" and a \"stopped cluster\".\n> After all, \"cluster\" refers to \"a cluster of databases\", which are there,\n> regardless\n> if you start the server or not.\n>\n> The term \"cluster\" is unfortunate, because to most people it suggests a\n> group of\n> machines, so the term \"instance\" is better, but that ship has sailed long\n> ago.\n>\n> The static part of a cluster to me is the \"data directory\".\n>\n> > server/host: We need a term to describe the underlying hardware\n> respectively\n> > the virtual machine or container, where PG is running. I suggest to use\n> both\n> > *server* and *host*. In computer science, both have their eligibility\n> and are\n> > widely used. Everybody understands *client/server architecture* or\n> *host* in\n> > TCP/IP configuration. We cannot change such matter of course. I suggest\n> to\n> > use both depending on the context, but with the same meaning: \"real\n> hardware,\n> > a container, or a virtual machine\".\n>\n> On this I have a strong opinion because of my Unix mindset.\n> \"machine\" and \"host\" are synonyms, and it doesn't matter to the database\n> if they\n> are virtualized or not. You can always disambiguate by adding \"virtual\"\n> or \"physical\".\n>\n> A \"server\" is a piece of software that responds to client requests, never\n> a machine.\n> In my book, this is purely Windows jargon. 
The term \"client-server\n> architecture\"\n> that you quote emphasized that.\n>\n> Perhaps \"machine\" would be the preferable term, because \"host\" is more\n> prone to\n> misunderstandings (except in a networking context).\n>\n> Yours,\n> Laurenz Albe\n>\n>\n>\n", "msg_date": "Tue, 19 May 2020 07:44:57 +0100", "msg_from": "Andrew Grillet <andrew@grillet.co.uk>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-05-19 08:17, Laurenz Albe wrote:\n> The term \"cluster\" is unfortunate, because to most people it suggests a group of\n> machines, so the term \"instance\" is better, but that ship has sailed long ago.\n\nI don't see what would stop us from renaming some things, with some care.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 19 May 2020 13:25:07 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 19.05.20 08:17, Laurenz Albe wrote:\n> On Mon, 2020-05-18 at 18:08 +0200, Jürgen Purtz wrote:\n>> cluster/instance: PG (mainly) consists of a group of processes that commonly\n>> act on shared buffers. The processes are very closely related to each other\n>> and with the buffers. They exist altogether or not at all. They use a common\n>> initialization file and are incarnated by one command. 
Everything exists\n>> solely in RAM and therefor has a fluctuating nature. In summary: they build\n>> a unit and this unit needs to have a name of itself. In some pages we used\n>> to use the term *instance* - sometimes in extended forms: *database instance*,\n>> *PG instance*, *standby instance*, *standby server instance*, *server instance*,\n>> or *remote instance*. For me, the term *instance* makes sense, the extensions\n>> *standby instance* and *remote instance* in their context too.\n> FWIW, I feel somewhat like Alvaro on that point; I use those terms synonymously,\n> perhaps distinguishing between a \"started cluster\" and a \"stopped cluster\".\n> After all, \"cluster\" refers to \"a cluster of databases\", which are there, regardless\n> if you start the server or not.\n>\n> The term \"cluster\" is unfortunate, because to most people it suggests a group of\n> machines, so the term \"instance\" is better, but that ship has sailed long ago.\n>\n> The static part of a cluster to me is the \"data directory\".\n\ncluster/instance: The different nature (static/dynamic) of what I call \n\"cluster\" and \"instance\" as well as the existence of the two commands \n\"initdb — create a new PostgreSQL database cluster\" and \"pg_ctl — \ninitialize, start, stop, or control a PostgreSQL server\" confirms my \nopinion that we need two different terms for them. Those two terms \nshall not be synonyms of each other, they label distinct things. If \npeople prefer \"data directory\" instead of \"cluster\", this is ok for me.\n\nThere are situations where we need a single term for both of them. \n\"Instance and its data directory\" or \"Instance and its cluster\" are too \nwordy. In many cases we use \"database server\" or \"server\" in this sense. \nImo \"Server\" is too short and ambiguous. \"database server\", the plural \nform \"databases server\", or the new term \"cluster server\", which is more \naccurate, would be ok for me. 
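The static/dynamic split drawn above from the two commands can be sketched on the command line; the data directory paths, log file names, and the port 5433 below are arbitrary examples, not recommendations:

```shell
# initdb creates a *cluster*: a database directory structure on disk.
# It is static data and exists whether or not anything is running.
initdb -D /srv/pg/demo1

# pg_ctl starts an *instance*: the postmaster plus its auxiliary
# processes operating on that cluster. It exists only while running.
pg_ctl -D /srv/pg/demo1 -l demo1.log start

# A second cluster served by a second instance works on the same host,
# as long as the two instances listen on different TCP ports.
initdb -D /srv/pg/demo2
pg_ctl -D /srv/pg/demo2 -o "-p 5433" -l demo2.log start

# Stopping an instance leaves its cluster untouched on disk.
pg_ctl -D /srv/pg/demo1 stop
```

One command manipulates files, the other manages processes - which is exactly the distinction the two terms are meant to capture.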
(Similar to \"server\", the term \"cluster\" \nis also used in many different contexts - but only outside of the PG \nworld; within our context \"cluster\" is not ambiguous.)\n\n>> server/host: We need a term to describe the underlying hardware respectively\n>> the virtual machine or container, where PG is running. I suggest to use both\n>> *server* and *host*. In computer science, both have their eligibility and are\n>> widely used. Everybody understands *client/server architecture* or *host* in\n>> TCP/IP configuration. We cannot change such matter of course. I suggest to\n>> use both depending on the context, but with the same meaning: \"real hardware,\n>> a container, or a virtual machine\".\n> On this I have a strong opinion because of my Unix mindset.\n> \"machine\" and \"host\" are synonyms, and it doesn't matter to the database if they\n> are virtualized or not. You can always disambiguate by adding \"virtual\" or \"physical\".\n>\n> A \"server\" is a piece of software that responds to client requests, never a machine.\n> In my book, this is purely Windows jargon. The term \"client-server architecture\"\n> that you quote emphasized that.\n>\n> Perhaps \"machine\" would be the preferable term, because \"host\" is more prone to\n> misunderstandings (except in a networking context).\n>\nserver/host: I agree that we are not interested in the question whether \nthere is real hardware or any virtualization container. We are even not \ninterested in the operating system. Our primary concern is the existence \nof a port of the Internet Protocol. But is the term \"server\" appropriate \nto name an IP-port? Additionally, \"server\" is used for other meanings: \na) the previously mentioned \"database server\" b) a (virtual) machine: \n\"server-side\", \"... the file ... loaded by the server ...\" c) binaries \n\"... the server must be built with SSL support ...\" d) whenever it seems \nto be appropriate: \"standby server\", \"... 
the server parses query ...\", \n\"server configuration\", \"server process\".\n\nBecause of its ambiguous usage, the definition of \"server\" must clarify \nthe allowed meanings. What's about:\n\n--\n\nserver: Depending on the context, the term *server* denotes:\n\n * An IP-port which is offered by any OS.   ?????\n * A - possibly virtualized - machine\n * An abbreviation for the slightly longer term \"database(s)/cluster\n   server\"  ??? this will support the readability, but not the clarity ???\n * More ?\n\n--\n\nThe term \"host\" is used mainly for IP configuration \"host name\", \"host \naddress\" and in the context of compiling \"host language\", \"host \nvariable\". These are clear situations and can be defined easily.", "msg_date": "Wed, 20 May 2020 13:17:29 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Wed, 2020-05-20 at 13:17 +0200, Jürgen Purtz wrote:\n> > FWIW, I feel somewhat like Alvaro on that point; I use those terms synonymously,\n> > perhaps distinguishing between a \"started cluster\" and a \"stopped cluster\".\n> > After all, \"cluster\" refers to \"a cluster of databases\", which are there, regardless\n> > if you start the server or not.\n> > \n> > The term \"cluster\" is unfortunate, because to most people it suggests a group of\n> > machines, so the term \"instance\" is better, but that ship has sailed long ago.\n> > \n> > The static part of a cluster to me is the \"data directory\".\n> \n> cluster/instance: The different nature (static/dynamic) of what I\n> call \"cluster\" and \"instance\" as well as the existence of the two\n> commands \"initdb — create a new PostgreSQL database cluster\" and \n> \"pg_ctl — initialize, start, stop, or control a PostgreSQL server\"\n> confirms me in my opinion that we need two different terms for\n> them.\n\nI think that the \"pg_ctl\" example does not apply:\nIt does not talk about starting the cluster, but about starting the server process,\nthat is \"server\" in the way I understand it.\n\n> There are situations where we need a single term for both of\n> them. \"Instance and its data directory\" or \"Instance and its\n> cluster\" are too wordy. 
In many cases we use \"database server\" or\n> \"server\" in this sense. Imo \"Server\" is too short and ambiguous.\n> \"database server\", the plural form \"databases server\", or the new\n> term \"cluster server\", which is more accurate, would be ok for me.\n> (Similar to \"server\", the term \"cluster\" is also used in many\n> different contexts - but only outside of the PG world; within our\n> context \"cluster\" is not ambiguous.) \n\nThat does not feel right to me.\n\n\"cluster server\", ouch. \"databases server\", ouch as well.\n\nI never felt the term \"cluster\" was unclear in these contexts.\nSometimes it means \"data directory\", sometimes it is used for \"server process\",\nbut I think few people would think one cound connect to a data directory\nor create a process in a directory (initdb).\n\nI think clarity is a Good Thing, but it can be overdone.\n\n> > > server/host: We need a term to describe the underlying hardware respectively\n> > > the virtual machine or container, where PG is running. I suggest to use both\n> > > *server* and *host*. In computer science, both have their eligibility and are\n> > > widely used. Everybody understands *client/server architecture* or *host* in\n> > > TCP/IP configuration. We cannot change such matter of course. I suggest to\n> > > use both depending on the context, but with the same meaning: \"real hardware,\n> > > a container, or a virtual machine\".\n> > \n> > On this I have a strong opinion because of my Unix mindset.\n> > \"machine\" and \"host\" are synonyms, and it doesn't matter to the database if they\n> > are virtualized or not. You can always disambiguate by adding \"virtual\" or \"physical\".\n> > \n> > A \"server\" is a piece of software that responds to client requests, never a machine.\n> > In my book, this is purely Windows jargon. 
The term \"client-server architecture\"\n> > that you quote emphasized that.\n> > \n> > Perhaps \"machine\" would be the preferable term, because \"host\" is more prone to\n> > misunderstandings (except in a networking context).\n> \n> server/host: I agree that we are not interested in the question\n> whether there is real hardware or any virtualization container. We\n> are even not interested in the operating system. Our primary\n> concern is the existence of a port of the Internet Protocol. But\n> is the term \"server\" appropriate to name an IP-port? Additionally,\n> \"server\" is used for other meanings: a) the previously mentioned\n> \"database server\" b) a (virtual) machine: \"server-side\", \"... the\n> file ... loaded by the server ...\" c) binaries \"... the server\n> must be built with SSL support ...\" d) whenever it seems to be\n> appropriate: \"standby server\", \"... the server parses query ...\",\n> \"server configuration\", \"server process\".\n\nYou are most thorough :^)\n \n> Because of its ambiguous usage, the definition of \"server\" must\n> clarify the allowed meanings. What's about:\n> \n> server: Depending on the context, the term *server* denotes:\n> \n> An IP-port which is offered by any OS. ?????\n\nA port is a server? No way.\n \n> A - possibly virtualized - machine\n\nIt might be good to disambiguate that, but I don't think that the PostgreSQL\ndocumentation should use the word \"server\" to mean \"machine\".\n\n> An abbreviation for the slightly longer term\n> \"database(s)/cluster server\" ??? 
this will support the\n> readability, but not the clarity ???\n\n\"Server\" is short for \"database server\" and is a set of processes that listen\nfor and handle incoming database client requests.\n\nI think that covers all the meanings you quoted from the documentation,\nexcept c), where it is used as shorthand for \"server executable\".\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 20 May 2020 13:38:28 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-04-29 21:55, Corey Huinker wrote:\n> On Wed, Apr 29, 2020 at 3:15 PM Peter Eisentraut \n> <peter.eisentraut@2ndquadrant.com \n> <mailto:peter.eisentraut@2ndquadrant.com>> wrote:\n> \n> Why are all the glossary terms capitalized?  Seems kind of strange.\n> \n> \n> They weren't intended to be, and they don't appear to be in the page I'm \n> looking at. Are you referring to the anchor like in \n> https://www.postgresql.org/docs/devel/glossary.html#GLOSSARY-RELATION ? \n> If so, that all-capping is part of the rendering, as the ids were all \n> named in all-lower-case.\n\nSorry, I meant why is the first letter of each term capitalized. That \nseems unusual.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 26 May 2020 14:01:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 17.05.20 17:28, Alvaro Herrera wrote:\n> I think the terms under discussion are just\n>\n> * cluster\n> * instance\n> * server\n\n\nDespite the short period of its existence the glossary achieved some \nimportance, see: \nhttps://www.postgresql.org/message-id/b8e12875ebec9e6d3107df5fa1129e1e%40postgrespro.ru \n. We have to be careful with publications. It's not acceptable that we \nchange definitions from release to release. 
Therefore IMO we should mark \nor even ignore such terms for which we cannot reach consensus.\n\nCan you agree to the following definitions? If no, we can alternatively \nformulate for each of them: \"Under discussion - currently not defined\". \nMy proposals are inspired by chapter 2.2 Concepts: \"Tables are grouped \ninto databases, and a collection of databases managed by a single \nPostgreSQL server instance constitutes a database cluster.\"\n\n\n- \"Database\" (No change to existing definition): \"A named collection of \nSQL objects.\"\n\n\n- \"Database Cluster\", \"Cluster\" (New definition and rearrangements of \nsome sentences): \"A collection of related databases, and their common \nstatic and dynamic meta-data.\n\nThis term is sometimes used to refer to an instance.\n\n(Don't confuse the term CLUSTER with the SQL command CLUSTER.)\"\n\n\n- \"Data Directory\" (Replaced 'instance' by 'cluster'): \"The base \ndirectory on the filesystem of a server that contains all data files and \nsubdirectories associated with a cluster (with the exception of \ntablespaces). The environment variable PGDATA is commonly used to refer \nto the data directory.\n\nA cluster's storage space comprises the data directory plus any \nadditional tablespaces.\n\nFor more information, see Section 68.1.\"\n\n\n- \"Database Server\", \"Instance\" (Major changes): \"A group of backend and \nauxiliary processes that communicate using a common shared memory area. \nOne postmaster process manages the instance; one instance manages \nexactly one cluster with all its databases. 
Many instances can run on \nthe same server as long as their TCP ports do not conflict.\n\nThe instance handles all key features of a DBMS: read and write access \nto files and shared memory, assurance of the ACID properties, \nconnections to client processes, privilege verification, crash recovery, \nreplication, etc.\"\n\n\n- \"Server\" (No change to existing definition): \"A computer on which \nPostgreSQL instances run. The term server denotes real hardware, a \ncontainer, or a virtual machine.\n\nThis term is sometimes used to refer to an instance or to a host.\"\n\n\n- \"Host\" (No change to existing definition): \"A computer that \ncommunicates with other computers over a network. This is sometimes used \nas a synonym for server. It is also used to refer to a computer where \nclient processes run.\"\n\n\n--\n\nJürgen Purtz\n\n\n\n\n", "msg_date": "Tue, 9 Jun 2020 13:25:04 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Jun-09, Jürgen Purtz wrote:\n\n> Can you agree to the following definitions? If no, we can alternatively\n> formulate for each of them: \"Under discussion - currently not defined\". My\n> proposals are inspired by chapter 2.2 Concepts: \"Tables are grouped into\n> databases, and a collection of databases managed by a single PostgreSQL\n> server instance constitutes a database cluster.\"\n\nAfter sleeping on it a few more times, I don't oppose the idea of making\n\"instance\" be the running state and \"database cluster\" the on-disk stuff\nthat supports the instance.  Here's a patch that does things pretty much\nalong the lines you suggested.\n\nI made small adjustments to \"SQL objects\":\n\n* SQL objects in schemas were said to have their names unique in the\nschema, but we failed to say anything about names of objects not in\nschemas and global objects. 
Added that.\n\n* Had example object types for global objects and objects not in\nschemas, but no examples for objects in schemas.  Added that.\n\n\nSome programs whose output we could tweak per this:\npg_ctl\n> pg_ctl is a utility to initialize, start, stop, or control a PostgreSQL server.\n>   -D, --pgdata=DATADIR   location of the database storage area\nto:\n> pg_ctl is a utility to initialize or control a PostgreSQL database cluster.\n>   -D, --pgdata=DATADIR   location of the database directory\n\npg_basebackup:\n> pg_basebackup takes a base backup of a running PostgreSQL server.\nto:\n> pg_basebackup takes a base backup of a PostgreSQL instance.\n\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 16 Jun 2020 20:09:26 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On Tue, Jun 16, 2020 at 08:09:26PM -0400, Alvaro Herrera wrote:\n> diff --git a/doc/src/sgml/glossary.sgml b/doc/src/sgml/glossary.sgml\n> index 25b03f3b37..e29b55e5ac 100644\n> --- a/doc/src/sgml/glossary.sgml\n> +++ b/doc/src/sgml/glossary.sgml\n> @@ -395,15 +395,15 @@\n>    <para>\n>     The base directory on the filesystem of a\n>     <glossterm linkend=\"glossary-server\">server</glossterm> that contains all\n> -   data files and subdirectories associated with an\n> -   <glossterm linkend=\"glossary-instance\">instance</glossterm> (with the\n> -   exception of <glossterm linkend=\"glossary-tablespace\">tablespaces</glossterm>).\n> +   data files and subdirectories associated with a\n> +   <glossterm linkend=\"glossary-db-cluster\">database cluster</glossterm>\n> +   (with the exception of\n> +   <glossterm linkend=\"glossary-tablespace\">tablespaces</glossterm>).\n\nand (optionally) WAL\n\n> +  <glossentry id=\"glossary-db-cluster\">\n> +   <glossterm>Database cluster</glossterm>\n> +   <glossdef>\n> +    <para>\n> +     A collection of databases and 
global SQL objects,\n> +     and their common static and dynamic meta-data.\n\nmetadata\n\n> @@ -1245,12 +1255,17 @@\n>     <glossterm linkend=\"glossary-sql-object\">SQL objects</glossterm>,\n>     which all reside in the same\n>     <glossterm linkend=\"glossary-database\">database</glossterm>.\n> -   Each SQL object must reside in exactly one schema.\n> +   Each SQL object must reside in exactly one schema\n> +   (though certain types of SQL objects exist outside schemas).\n\n(except for global objects which ..)\n\n>    <para>\n>     The names of SQL objects of the same type in the same schema are enforced\n>     to be unique.\n>     There is no restriction on reusing a name in multiple schemas.\n> +   For local objects that exist outside schemas, their names are enforced\n> +   unique across the whole database.  For global objects, their names\n\nI would say \"unique within the database\"\n\n> +   are enforced unique across the whole\n> +   <glossterm linkend=\"glossary-db-cluster\">database cluster</glossterm>.\n\nand \"unique within the whole db cluster\"\n\n>     Most local objects belong to a specific\n> -   <glossterm linkend=\"glossary-schema\">schema</glossterm> in their containing database.\n> +   <glossterm linkend=\"glossary-schema\">schema</glossterm> in their\n> +   containing database, such as\n> +   <glossterm linkend=\"glossary-relation\">all types of relations</glossterm>,\n> +   <glossterm linkend=\"glossary-function\">all types of functions</glossterm>,\n\nMaybe say: >Relations< (all types), and >Functions< (all types)\n\n>     used as the default one for all SQL objects, called <literal>pg_default</literal>. \n\"the default\" (remove \"one\")\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 16 Jun 2020 19:33:49 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 17.06.20 02:09, Alvaro Herrera wrote:\n> On 2020-Jun-09, Jürgen Purtz wrote:\n>\n>> Can you agree to the following definitions? 
If no, we can alternatively\n>> formulate for each of them: \"Under discussion - currently not defined\". My\n>> proposals are inspired by chapter 2.2 Concepts: \"Tables are grouped into\n>> databases, and a collection of databases managed by a single PostgreSQL\n>> server instance constitutes a database cluster.\"\n> After sleeping on it a few more times, I don't oppose the idea of making\n> \"instance\" be the running state and \"database cluster\" the on-disk stuff\n> that supports the instance.  Here's a patch that does things pretty much\n> along the lines you suggested.\n>\n> I made small adjustments to \"SQL objects\":\n>\n> * SQL objects in schemas were said to have their names unique in the\n> schema, but we failed to say anything about names of objects not in\n> schemas and global objects.  Added that.\n>\n> * Had example object types for global objects and objects not in\n> schemas, but no examples for objects in schemas.  Added that.\n>\n>\n> Some programs whose output we could tweak per this:\n> pg_ctl\n>> pg_ctl is a utility to initialize, start, stop, or control a PostgreSQL server.\n>>    -D, --pgdata=DATADIR   location of the database storage area\n> to:\n>> pg_ctl is a utility to initialize or control a PostgreSQL database cluster.\n>>    -D, --pgdata=DATADIR   location of the database directory\n> pg_basebackup:\n>> pg_basebackup takes a base backup of a running PostgreSQL server.\n> to:\n>> pg_basebackup takes a base backup of a PostgreSQL instance.\n\n+1, with two formal changes:\n\n- Rearrangement of term \"Data page\" to meet alphabetical order.\n\n- Add </glossdef> in one case to meet xml-well-formedness.\n\n\nOne last question: The definition of \"Data directory\" reads \"... A \ncluster's storage space comprises the data directory plus ...\" and \n'cluster' links to \n'\"glossary-instance\". 
Shouldn't it link to \n\"glossary-db-cluster\"?\n\n--\n\nJürgen Purtz", "msg_date": "Wed, 17 Jun 2020 14:52:19 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Jun-16, Justin Pryzby wrote:\n> On Tue, Jun 16, 2020 at 08:09:26PM -0400, Alvaro Herrera wrote:\n\nThanks for the review.  I merged all your suggestions.  This one:\n\n> > Most local objects belong to a specific\n> > +   <glossterm linkend=\"glossary-schema\">schema</glossterm> in their\n> > +   containing database, such as\n> > +   <glossterm linkend=\"glossary-relation\">all types of relations</glossterm>,\n> > +   <glossterm linkend=\"glossary-function\">all types of functions</glossterm>,\n> \n> Maybe say: >Relations< (all types), and >Functions< (all types)\n\nled me down not one but two rabbit holes; first I realized that\n\"functions\" is an insufficient term since procedures should also be\nincluded but weren't, so I had to add the more generic term \"routine\"\nand then modify the definitions of all routine types to mix in well.  I\nthink overall the quality of these definitions is improved as a result.\n\nI also felt the need to revise the definition of \"relations\", so I did\nthat too; this made me change the definition of resultset too.\n\nOn 2020-Jun-17, Jürgen Purtz wrote:\n\n> +1, with two formal changes:\n> \n> - Rearrangement of term \"Data page\" to meet alphabetical order.\n\nTo forestall these ordering issues (look, another rabbit hole), I\ngrepped the file for all glossterms and sorted that under en_US rules,\nthen reordered the terms to match that. 
Turns out there were several\nother ordering mistakes.\n\ngit grep '<glossterm>' | sed -e 's/<[^>]*>\\([^<]*\\)<[^>]*>/\\1/' > orig\nLC_COLLATE=en_US.UTF-8 sort orig > sorted\n\n(Eliminating the tags is important, otherwise the sort uses the tags\nthemselves to disambiguate)\n\n> One last question: The definition of \"Data directory\" reads \"... A cluster's\n> storage space comprises the data directory plus ...\" and 'cluster' links to\n> '\"glossary-instance\". Shouldn't it link to \"glossary-db-cluster\"?\n\nYes, an oversight, thanks.\n\nI also added TPS, because I had already written it.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 18 Jun 2020 19:51:13 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-06-19 01:51, Alvaro Herrera wrote:\n> On 2020-Jun-16, Justin Pryzby wrote:\n>> On Tue, Jun 16, 2020 at 08:09:26PM -0400, Alvaro Herrera wrote:\n\nI noticed one typo:\n\n'aggregates functions'  should be\n'aggregate functions'\n\n\nAnd one thing that I am not sure of (but strikes me as a bit odd):\nthere are several cases of\n'are enforced unique'.  Should that not be\n'are enforced to be unique' ?\n\n\nAnther small mistake (2x):\n\n'The name of such objects of the same type are'  should be\n'The names of such objects of the same type are'\n\n(this phrase occurs 2x wrong, 1x correct)\n\n\nthanks,\n\nErik Rijkers\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 19 Jun 2020 17:45:35 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "Thanks for these fixes!  I included all of these.\n\nOn 2020-Jun-19, Erik Rijkers wrote:\n\n> And one thing that I am not sure of (but strikes me as a bit odd):\n> there are several cases of\n> 'are enforced unique'. 
Should that not be\n> 'are enforced to be unique' ?\n\nI included this change too; I am not too sure of it myself.  If some\nEnglish language neatnik wants to argue one way or the other, be my\nguest.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 19 Jun 2020 13:10:37 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 19.06.20 19:10, Alvaro Herrera wrote:\n> Thanks for these fixes!  I included all of these.\n>\n> On 2020-Jun-19, Erik Rijkers wrote:\n>\n>> And one thing that I am not sure of (but strikes me as a bit odd):\n>> there are several cases of\n>> 'are enforced unique'.  Should that not be\n>> 'are enforced to be unique' ?\n> I included this change too; I am not too sure of it myself.  If some\n> English language neatnik wants to argue one way or the other, be my\n> guest.\n>\n- Added '(process)' to the two terms 'Autovacuum' and 'Stats Collector'\n\n- Removed link to himself in 'Logger (process)'\n\n- new term: Base Backup\n\n\n--\n\nJürgen Purtz", "msg_date": "Tue, 21 Jul 2020 13:47:10 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" }, { "msg_contents": "On 2020-Jul-21, Jürgen Purtz wrote:\n\n> - Added '(process)' to the two terms 'Autovacuum' and 'Stats Collector'\n> \n> - Removed link to himself in 'Logger (process)'\n> \n> - new term: Base Backup\n\nPushed.  I was not courageous enough to include \"base backup\" in 13, so\nthat one's in master only, but the other ones are in both branches.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Jul 2020 13:13:31 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add A Glossary" } ]
[ { "msg_contents": "I propose this patch to remove the information schema tables\nSQL_LANGUAGES, which was eliminated in SQL:2008, and SQL_PACKAGES, which\nwas eliminated in SQL:2011. Since they were dropped by the SQL\nstandard, the information in them was no longer updated and therefore no\nlonger useful.\n\nThis also removes the feature-package association information in\nsql_feature_packages.txt, but for the time begin we are keeping the\ninformation which features are in the Core package (that is, mandatory\nSQL features). Maybe at some point someone wants to invent a way to\nstore that that does not involve using the \"package\" mechanism\nanymore.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 14 Oct 2019 10:27:14 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Remove obsolete information schema tables" }, { "msg_contents": "On Mon, Oct 14, 2019 at 10:27:14AM +0200, Peter Eisentraut wrote:\n> I propose this patch to remove the information schema tables\n> SQL_LANGUAGES, which was eliminated in SQL:2008, and SQL_PACKAGES, which\n> was eliminated in SQL:2011. Since they were dropped by the SQL\n> standard, the information in them was no longer updated and therefore no\n> longer useful.\n\nThe cleanup looks right. I cannot grep missing references FWIW.\n\n> This also removes the feature-package association information in\n> sql_feature_packages.txt, but for the time begin we are keeping the\n> information which features are in the Core package (that is, mandatory\n> SQL features). Maybe at some point someone wants to invent a way to\n> store that that does not involve using the \"package\" mechanism\n> anymore.\n\nI have a question here. Per the notes in information_schema.sql,\nSQL_SIZING_PROFILES has been removed in SQL:2011,\nattributes.isnullable and DOMAIN_UDT_USAGE in SQL:2003~. 
Would it\nmake sense to cleanup those ones?\n--\nMichael", "msg_date": "Thu, 17 Oct 2019 16:44:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove obsolete information schema tables" }, { "msg_contents": "On 2019-10-17 09:44, Michael Paquier wrote:\n> I have a question here. Per the notes in information_schema.sql,\n> SQL_SIZING_PROFILES has been removed in SQL:2011,\n\nOK, we can remove that one as well. New patch attached.\n\n> attributes.isnullable and DOMAIN_UDT_USAGE in SQL:2003~. Would it\n> make sense to cleanup those ones?\n\nOK, I'll look into those, but it seems like a separate undertaking. We\ndon't always remove things just because they were dropped by the SQL\nstandard.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 20 Oct 2019 10:01:09 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove obsolete information schema tables" }, { "msg_contents": "On Sun, Oct 20, 2019 at 10:01:09AM +0200, Peter Eisentraut wrote:\n> On 2019-10-17 09:44, Michael Paquier wrote:\n> > I have a question here. Per the notes in information_schema.sql,\n> > SQL_SIZING_PROFILES has been removed in SQL:2011,\n> \n> OK, we can remove that one as well. New patch attached.\n\nLooks fine.\n\n>> attributes.isnullable and DOMAIN_UDT_USAGE in SQL:2003~. Would it\n>> make sense to cleanup those ones?\n> \n> OK, I'll look into those, but it seems like a separate undertaking. We\n> don't always remove things just because they were dropped by the SQL\n> standard.\n\nBut that's the same kind of cleanup you do here. 
What's the\ndifference with DOMAIN_UDT_USAGE, which is mentioned as removed from\nSQL:2003?\n--\nMichael", "msg_date": "Mon, 21 Oct 2019 14:34:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove obsolete information schema tables" }, { "msg_contents": "On 2019-10-21 07:34, Michael Paquier wrote:\n> On Sun, Oct 20, 2019 at 10:01:09AM +0200, Peter Eisentraut wrote:\n>> On 2019-10-17 09:44, Michael Paquier wrote:\n>>> I have a question here. Per the notes in information_schema.sql,\n>>> SQL_SIZING_PROFILES has been removed in SQL:2011,\n>>\n>> OK, we can remove that one as well. New patch attached.\n> \n> Looks fine.\n\ncommitted\n\n>>> attributes.isnullable and DOMAIN_UDT_USAGE in SQL:2003~. Would it\n>>> make sense to cleanup those ones?\n>>\n>> OK, I'll look into those, but it seems like a separate undertaking. We\n>> don't always remove things just because they were dropped by the SQL\n>> standard.\n> \n> But that's the same kind of cleanup you do here. What's the\n> difference with DOMAIN_UDT_USAGE, which is mentioned as removed from\n> SQL:2003?\n\nSQL_LANGUAGES for example, contains information about which version of\nthe SQL standard is being conformed to. But since it's no longer in the\nstandard, it most recently said that SQL:2003 is supported, which isn't\nvery useful. We could extrapolate new values for more recent standards,\nbut that's also questionable. So it makes sense to remove it.\n\nBy contrast, I don't know why DOMAIN_UDT_USAGE was removed. It might\nstill be useful. Just because something is dropped by an SQL standard,\nit doesn't mean we should remove it. 
For example, bit and bit varying\nare no longer in the standard.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 25 Oct 2019 21:57:13 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove obsolete information schema tables" } ]
[ { "msg_contents": "Hi hackers,\n\nErrors in selectivity estimations is one of the main reason of bad plans \ngeneration by Postgres optimizer.\nPostgres estimates selectivity based on the collected statistic \n(histograms).\nWhile it is able to more or less precisely estimated selectivity of \nsimple predicate for particular table,\nit is much more difficult to estimate selectivity for result of join of \nseveral tables and for complex predicate consisting of several \nconjuncts/disjuncts\naccessing different columns.\n\nPostgres is not able to take in account correlation between columns \nunless correspondent multicolumn statistic is explicitly created.\nBut even if such statistic is created, it can not be used in join \nselectivity estimation.\n\nThe problem with adjusting selectivity using machine learning based on \nthe results of EXPLAIN ANALYZE was address in AQO project:\n\nhttps://github.com/postgrespro/aqo\n\nThere are still many issues with proposed AQO approach (for example, it \ndoesn't take in account concrete constant values).\nWe are going  to continue its improvement.\n\nBut here I wan to propose much simpler patch which allows two things:\n1. Use extended statistic in estimation of join selectivity\n2. 
Create on demand multicolumn statistic in auto_explain extension if \nthere is larger gap between real and estimated number of tuples for the \nconcrete plan node.\n\n\ncreate table inner_tab(x integer, y integer);\ncreate table outer_tab(pk integer primary key, x integer, y integer);\ncreate index on inner_tab(x,y);\ninsert into outer_tab values (generate_series(1,100000), \ngenerate_series(1,100000), generate_series(1,100000)*10);\ninsert into inner_tab values (generate_series(1,1000000)/10, \ngenerate_series(1,1000000)/10*10);\nanalyze inner_tab;\nanalyze outer_tab;\n\n\nWithout this patch:\nexplain select * from outer_tab join inner_tab using(x,y) where pk=1;\n                                           QUERY PLAN\n----------------------------------------------------------------------------------------------\n  Nested Loop  (cost=0.72..16.77 rows=1 width=12)\n    ->  Index Scan using outer_tab_pkey on outer_tab (cost=0.29..8.31 \nrows=1 width=12)\n          Index Cond: (pk = 1)\n    ->  Index Only Scan using inner_tab_x_y_idx on inner_tab \n(cost=0.42..8.45 rows=1 width=8)\n          Index Cond: ((x = outer_tab.x) AND (y = outer_tab.y))\n(5 rows)\n\n\nWith this patch:\n\nload 'auto_explain';\nset auto_explain.log_min_duration=0;\nset auto_explain.add_statistics_threshold=10.0;\nset auto_explain.log_analyze=on;\nselect * from outer_tab join inner_tab using(x,y) where pk=1;\nanalyze inner_tab;\nanalyze outer_tab;\n\nexplain select * from outer_tab join inner_tab using(x,y) where pk=1;\n                                            QUERY PLAN\n------------------------------------------------------------------------------------------------\n  Nested Loop  (cost=0.72..32.79 rows=10 width=12)\n    ->  Index Scan using outer_tab_pkey on outer_tab (cost=0.29..8.31 \nrows=1 width=12)\n          Index Cond: (pk = 1)\n    ->  Index Only Scan using inner_tab_x_y_idx on inner_tab \n(cost=0.42..24.38 rows=10 width=8)\n          Index Cond: ((x = outer_tab.x) AND (y = 
outer_tab.y))\n(5 rows)\n\n\nAs you can see now estimation of join result is correct (10).\n\nI attached two patches: one for using extended statistic for join \nselectivity estimation and another for auto_explain to implicitly add \nthis extended statistic on demand.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 14 Oct 2019 19:43:10 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Columns correlation and adaptive query optimization" }, { "msg_contents": "Hello Konstantin,\n\nWhat you have proposed regarding join_selectivity and multicolumn statistics\nis a very good new !\n\nRegarding your auto_explain modification, maybe an \"advisor\" mode would also\nbe helpfull (with auto_explain_add_statistics_threshold=-1 for exemple).\nThis would allow to track which missing statistic should be tested (manually\nor in an other environment).\n\nIn my point of view this advice should be an option of the EXPLAIN command,\nthat should also permit\nauto_explain module to propose \"learning\" phase.\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Mon, 14 Oct 2019 15:20:17 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On 15.10.2019 1:20, legrand legrand wrote:\n> Hello Konstantin,\n>\n> What you have proposed regarding join_selectivity and multicolumn statistics\n> is a very good new !\n>\n> Regarding your auto_explain modification, maybe an \"advisor\" mode would also\n> be helpfull (with auto_explain_add_statistics_threshold=-1 for exemple).\n> This would allow to track which missing statistic should be tested (manually\n> or in an other environment).\n>\n> In my point of view this advice should be an option of the 
EXPLAIN command,\n> that should also permit\n> auto_explain module to propose \"learning\" phase.\nThank you for good suggestion. Advisor mode is really good idea.\nI have added \"auto_explain.suggest_only\" GUC.\nWhen it is switched on, suggested CREATE STATISTICS statement is just \nprinted in  log but not actually created.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 15 Oct 2019 10:46:38 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "Smarter version of join selectivity patch handling cases like this:\n\n\nexplain select * from outer_tab join inner_tab using(x,y) where x=1;\n                                            QUERY PLAN\n------------------------------------------------------------------------------------------------\n  Nested Loop  (cost=0.42..1815.47 rows=10 width=12)\n    Join Filter: (outer_tab.y = inner_tab.y)\n    ->  Seq Scan on outer_tab  (cost=0.00..1791.00 rows=1 width=12)\n          Filter: (x = 1)\n    ->  Index Only Scan using inner_tab_x_y_idx on inner_tab \n(cost=0.42..24.35 rows=10 width=8)\n          Index Cond: (x = 1)\n(6 rows)\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 18 Oct 2019 19:53:23 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "New version of patch implicitly adding multicolumn statistic in \nauto_explain extension and using it in optimizer for more precise \nestimation of join selectivity.\nThis patch fixes race condition while adding statistics and restricts \ngenerated statistic name to fit in 64 bytes (NameData).\n\n-- \nKonstantin Knizhnik\nPostgres Professional: 
http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 24 Dec 2019 11:15:40 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On 12/24/19 3:15 AM, Konstantin Knizhnik wrote:\n> New version of patch implicitly adding multicolumn statistic in \n> auto_explain extension and using it in optimizer for more precise \n> estimation of join selectivity.\n> This patch fixes race condition while adding statistics and restricts \n> generated statistic name to fit in 64 bytes (NameData).\n\nThis patch no longer applies: https://commitfest.postgresql.org/27/2386/\n\nThe CF entry has been updated to Waiting on Author.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 24 Mar 2020 13:12:20 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On 24.03.2020 20:12, David Steele wrote:\n> On 12/24/19 3:15 AM, Konstantin Knizhnik wrote:\n>> New version of patch implicitly adding multicolumn statistic in \n>> auto_explain extension and using it in optimizer for more precise \n>> estimation of join selectivity.\n>> This patch fixes race condition while adding statistics and restricts \n>> generated statistic name to fit in 64 bytes (NameData).\n>\n> This patch no longer applies: https://commitfest.postgresql.org/27/2386/\n>\n> The CF entry has been updated to Waiting on Author.\n>\n> Regards,\n\nRebased patch is attached.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 25 Mar 2020 13:57:53 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On 3/25/20 6:57 AM, Konstantin Knizhnik 
wrote:\n> \n> \n> On 24.03.2020 20:12, David Steele wrote:\n>> On 12/24/19 3:15 AM, Konstantin Knizhnik wrote:\n>>> New version of patch implicitly adding multicolumn statistic in \n>>> auto_explain extension and using it in optimizer for more precise \n>>> estimation of join selectivity.\n>>> This patch fixes race condition while adding statistics and restricts \n>>> generated statistic name to fit in 64 bytes (NameData).\n>>\n>> This patch no longer applies: https://commitfest.postgresql.org/27/2386/\n>>\n>> The CF entry has been updated to Waiting on Autho\n> \n> Rebased patch is attached.\n\nThe patch applies now but there are error on Windows and Linux:\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.85481\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/666729979\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 25 Mar 2020 09:00:39 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On 25.03.2020 16:00, David Steele wrote:\n> On 3/25/20 6:57 AM, Konstantin Knizhnik wrote:\n>>\n>>\n>> On 24.03.2020 20:12, David Steele wrote:\n>>> On 12/24/19 3:15 AM, Konstantin Knizhnik wrote:\n>>>> New version of patch implicitly adding multicolumn statistic in \n>>>> auto_explain extension and using it in optimizer for more precise \n>>>> estimation of join selectivity.\n>>>> This patch fixes race condition while adding statistics and \n>>>> restricts generated statistic name to fit in 64 bytes (NameData).\n>>>\n>>> This patch no longer applies: \n>>> https://commitfest.postgresql.org/27/2386/\n>>>\n>>> The CF entry has been updated to Waiting on Autho\n>>\n>> Rebased patch is attached.\n>\n> The patch applies now but there are error on Windows and Linux:\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.85481 \n>\n> 
https://travis-ci.org/postgresql-cfbot/postgresql/builds/666729979\n>\n> Regards,\n\nSorry, yet another patch is attached.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 25 Mar 2020 16:28:34 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "Hello,\n\nThis sounded like an interesting addition to postgresql. I gave some\ntime to it today to review, here are few comments,\n\nOn Wed, 25 Mar 2020 at 14:28, Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n>\n>\n>\n> On 25.03.2020 16:00, David Steele wrote:\n> > On 3/25/20 6:57 AM, Konstantin Knizhnik wrote:\n> >>\n> >>\n> >> On 24.03.2020 20:12, David Steele wrote:\n> >>> On 12/24/19 3:15 AM, Konstantin Knizhnik wrote:\n> >>>> New version of patch implicitly adding multicolumn statistic in\n> >>>> auto_explain extension and using it in optimizer for more precise\n> >>>> estimation of join selectivity.\n> >>>> This patch fixes race condition while adding statistics and\n> >>>> restricts generated statistic name to fit in 64 bytes (NameData).\n> >>>\n> >>> This patch no longer applies:\n> >>> https://commitfest.postgresql.org/27/2386/\n> >>>\n> >>> The CF entry has been updated to Waiting on Autho\n> >>\n> >> Rebased patch is attached.\n> >\n> > The patch applies now but there are error on Windows and Linux:\n> > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.85481\n> >\n> > https://travis-ci.org/postgresql-cfbot/postgresql/builds/666729979\n> >\n> > Regards,\n>\n> Sorry, yet another patch is attached.\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n\n+static void\n+AddMultiColumnStatisticsForNode(PlanState *planstate, ExplainState *es);\n+\n\nThis doesn't look like the right place for it, you might want 
to\ndeclare it with other functions at the start of the file.\n\nAlso, there is no description of any of the functions here;\nit wouldn’t hurt to have some more comments there.\n\nA few more questions that cross my mind at this point:\n\n- have you tried measuring the extra cost we have to pay for these\nadditional statistics, and also comparing it with the benefit it gives in\nterms of accuracy?\n- I would also be interested in understanding if there are cases when\nadding this extra step doesn’t help and have you excluded them already\nor if some of them are easily identifiable at this stage...?\n- is there any limit on the number of columns for which this will\nwork, or should there be any such limit...?\n\n-- \nRegards,\nRafia Sabih\n\n\n", "msg_date": "Wed, 25 Mar 2020 18:04:12 +0100", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "Thank you very much for the review.\n\nOn 25.03.2020 20:04, Rafia Sabih wrote:\n>\n> +static void\n> +AddMultiColumnStatisticsForNode(PlanState *planstate, ExplainState *es);\n> +\n>\n> This doesn't look like the right place for it, you might want to\n> declare it with other functions in the starting of the file.\n>\n> Also, there is no description about any of the functions here,\n> wouldn’t hurt having some more comments there.\n\nSorry, I will fix it.\nActually this patch consists of two independent parts:\nfirst allows to use auto_explain extension to generate multicolumn \nstatistic for variables used in clauses\nfor which selectivity estimation gives wrong result. 
It affects only \nthe auto_explain extension.\n\nSecond part allows to use multicolumn statistic for join selectivity \nestimation.\nAs far as I know extended statistic is now actively improved:\nhttps://www.postgresql.org/message-id/flat/20200309000157.ig5tcrynvaqu4ixd%40development#bfbdf9c41c31ef92819dfc5ecde4a67c\n\nI think that using extended statistic for join selectivity is very \nimportant and should also be addressed.\nIf my approach is not so good, I will be pleased to hear other suggestions.\n\n>\n> A few of more questions that cross my mind at this point,\n>\n> - have you tried measuring the extra cost we have to pay for this\n> mores statistics , and also compare it with the benefit it gives in\n> terms of accuracy.\nAdding statistics does not always lead to performance improvement, but I never \nobserved any performance degradation caused by the presence of extended \nstatistics.\nDefinitely we can manually create too many extended statistic entries \nfor different subsets of columns.\nAnd it certainly increases planning time because optimizer has to \nconsider more alternatives.\nBut in practice I never noticed such slowdown.\n\n> - I would also be interested in understanding if there are cases when\n> adding this extra step doesn’t help and have you excluded them already\n> or if some of them are easily identifiable at this stage...?\n\nUnfortunately there are many cases when extended statistic can not help.\nEither because the optimizer is not able to use it (for example, my patch \nconsiders only cases with strict equality comparison,\nbut if you use a predicate like \"a.x=b.x and  a.y in (1,2,3)\"  then \nextended statistic for <x,y> can not be used).\nOr because the collected statistic itself is not precise enough, \nespecially in case of data skews.\n\n\n> - is there any limit on the number of columns for which this will\n> work, or should there be any such limit...?\n>\nRight now there is a limit on the maximal number of columns used in extended \nstatistic: 8 columns.\nBut in 
practice I rarely see join predicates involving more than 3 columns.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 26 Mar 2020 17:43:49 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On 25.03.2020 20:04, Rafia Sabih wrote:\n>\n> Also, there is no description about any of the functions here,\n> wouldn’t hurt having some more comments there.\n>\n\nAttached please find new version of the patch with more comments and \ndescriptions added.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 26 Mar 2020 18:49:51 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "Hello,\n\nOn Thu, 26 Mar 2020 18:49:51 +0300\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n> Attached please find new version of the patch with more comments and \n> descriptions added.\n\nAdaptive query optimization is very interesting feature for me, so I looked\ninto this patch. Here are some comments and questions.\n\n(1)\nThis patch needs rebase because clauselist_selectivity was modified to improve\nestimation of OR clauses.\n\n(2)\nIf I understand correctly, your proposal consists of the following two features.\n\n1. Add a feature to auto_explain that creates an extended statistic automatically\nif an error on estimated rows number is large.\n\n2. 
Improve rows number estimation of join results by considering functional\ndependencies between vars in the join qual if the qual has more than one clause,\nand also functional dependencies between a var in the join qual and vars in quals\nof the inner/outer relation.\n\nAs you said, these two parts are independent of each other, so one feature will work\neven if we don't assume the other. I wonder if it would be better to split the patch\nagain, and register them to commitfest separately.\n\n(3)\n+\tDefineCustomBoolVariable(\"auto_explain.suggest_only\",\n+\t\t\t\t\t\t\t \"Do not create statistic but just record in WAL suggested create statistics statement.\",\n+\t\t\t\t\t\t\t NULL,\n+\t\t\t\t\t\t\t &auto_explain_suggest_on\n\nTo imply that this parameter is related to add_statistics_threshold, it seems\nbetter for me to use a more related name like add_statistics_suggest_only.\n\nAlso, additional documentation for the new parameters is required.\n\n(4)\n+\t\t\t/*\n+\t\t\t * Prevent concurrent access to extended statistic table\n+\t\t\t */\n+\t\t\tstat_rel = table_open(StatisticExtRelationId, AccessExclusiveLock);\n+\t\t\tslot = table_slot_create(stat_rel, NULL);\n+\t\t\tscan = table_beginscan_catalog(stat_rel, 2, entry);\n(snip)\n+\t\t\ttable_close(stat_rel, AccessExclusiveLock);\n+\t\t}\n\nWhen I tested the auto_explain part, I got the following WARNINGs.\n\n WARNING: buffer refcount leak: [097] (rel=base/12879/3381, blockNum=0, flags=0x83000000, refcount=1 2)\n WARNING: buffer refcount leak: [097] (rel=base/12879/3381, blockNum=0, flags=0x83000000, refcount=1 1)\n WARNING: relcache reference leak: relation \"pg_statistic_ext\" not closed\n WARNING: TupleDesc reference leak: TupleDesc 0x7fa439266338 (12029,-1) still referenced\n WARNING: Snapshot reference leak: Snapshot 0x55c332c10418 still referenced\n\nTo suppress this, I think we need table_endscan(scan) and\nExecDropSingleTupleTableSlot(slot) before finishing this function.\n\n(6)\n+\t\t\t\t\telog(NOTICE, 
\"Auto_explain suggestion: CREATE STATISTICS %s %s FROM %s\", stat_name, create_stat_stmt, rel_name);\n\nWe should use ereport instead of elog for log messages.\n\n(7)\n+\t\t\t\t\t\tdouble dep = find_var_dependency(root, innerRelid, var, clauses_attnums);\n+\t\t\t\t\t\tif (dep != 0.0)\n+\t\t\t\t\t\t{\n+\t\t\t\t\t\t\ts1 *= dep + (1 - dep) * s2;\n+\t\t\t\t\t\t\tcontinue;\n+\t\t\t\t\t\t}\n\nI found the following comment of clauselist_apply_dependencies():\n\n * we actually combine selectivities using the formula\n *\n * P(a,b) = f * Min(P(a), P(b)) + (1-f) * P(a) * P(b)\n\nso, is it not necessary to use the same formula in this patch? That is, \n\n s1 *= dep + (1-dep) * s2 (if s1 <= s2)\n s1 *= dep * (s2/s1) + (1-dep) * s2 (otherwise) .\n\n(8)\n+/*\n+ * Try to find dependency between variables.\n+ * var: varaibles which dependencies are considered\n+ * join_vars: list of variables used in other clauses\n+ * This functions return strongest dependency and some subset of variables from the same relation\n+ * or 0.0 if no dependency was found.\n+ */\n+static double\n+var_depends_on(PlannerInfo *root, Var* var, List* clause_vars)\n+{\n\nThe comment mentions join_vars but the actual argument name is clauses_vars,\nso it needs unification.\n\n(9)\nCurrently, it only considers functional dependencies statistics. Can we also\nconsider multivariate MCV list, and is it useful?\n\n(10)\nTo achieve adaptive query optimization (AQO) in PostgreSQL, this patch proposes\nto use auto_explain for getting feedback from actual results. So, could auto_explain\nbe an infrastructure of AQO in the future? 
Or, do you have any plan or idea to make\nbuilt-in infrastructure for AQO?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 21 Jan 2021 21:30:48 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "Hello,\n\nThank you for review.\nMy answers are inside.\n\n\nOn 21.01.2021 15:30, Yugo NAGATA wrote:\n> Hello,\n>\n> On Thu, 26 Mar 2020 18:49:51 +0300\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n>\n>> Attached please find new version of the patch with more comments and\n>> descriptions added.\n> Adaptive query optimization is very interesting feature for me, so I looked\n> into this patch. Here are some comments and questions.\n>\n> (1)\n> This patch needs rebase because clauselist_selectivity was modified to improve\n> estimation of OR clauses.\n\nRebased version is attached.\n>\n> (2)\n> If I understand correctly, your proposal consists of the following two features.\n>\n> 1. Add a feature to auto_explain that creates an extended statistic automatically\n> if an error on estimated rows number is large.\n>\n> 2. Improve rows number estimation of join results by considering functional\n> dependencies between vars in the join qual if the qual has more than one clauses,\n> and also functional dependencies between a var in the join qual and vars in quals\n> of the inner/outer relation.\n>\n> As you said, these two parts are independent each other, so one feature will work\n> even if we don't assume the other. 
I wonder it would be better to split the patch\n> again, and register them to commitfest separately.\n\nI agree with you that these are two almost unrelated changes, although \nwithout the clausesel patch the additional statistics can not improve query planning.\nBut I already have too many patches at commitfest.\nMaybe it will be enough to split this patch into two?\n\n>\n> (3)\n> +\tDefineCustomBoolVariable(\"auto_explain.suggest_only\",\n> +\t\t\t\t\t\t\t \"Do not create statistic but just record in WAL suggested create statistics statement.\",\n> +\t\t\t\t\t\t\t NULL,\n> +\t\t\t\t\t\t\t &auto_explain_suggest_on\n>\n> To imply that this parameter is involving to add_statistics_threshold, it seems\n> better for me to use more related name like add_statistics_suggest_only.\n>\n> Also, additional documentations for new parameters are required.\n\nDone.\n\n>\n> (4)\n> +\t\t\t/*\n> +\t\t\t * Prevent concurrent access to extended statistic table\n> +\t\t\t */\n> +\t\t\tstat_rel = table_open(StatisticExtRelationId, AccessExclusiveLock);\n> +\t\t\tslot = table_slot_create(stat_rel, NULL);\n> +\t\t\tscan = table_beginscan_catalog(stat_rel, 2, entry);\n> (snip)\n> +\t\t\ttable_close(stat_rel, AccessExclusiveLock);\n> +\t\t}\n>\n> When I tested the auto_explain part, I got the following WARNING.\n>\n> WARNING: buffer refcount leak: [097] (rel=base/12879/3381, blockNum=0, flags=0x83000000, refcount=1 2)\n> WARNING: buffer refcount leak: [097] (rel=base/12879/3381, blockNum=0, flags=0x83000000, refcount=1 1)\n> WARNING: relcache reference leak: relation \"pg_statistic_ext\" not closed\n> WARNING: TupleDesc reference leak: TupleDesc 0x7fa439266338 (12029,-1) still referenced\n> WARNING: Snapshot reference leak: Snapshot 0x55c332c10418 still referenced\n>\n> To suppress this, I think we need table_endscan(scan) and\n> ExecDropSingleTupleTableSlot(slot) before finishing this function.\n\nThank you for noticing the problem, fixed.\n\n\n>\n> (6)\n> +\t\t\t\t\telog(NOTICE, \"Auto_explain 
suggestion: CREATE STATISTICS %s %s FROM %s\", stat_name, create_stat_stmt, rel_name);\n>\n> We should use ereport instead of elog for log messages.\n\nChanged.\n\n>\n> (7)\n> +\t\t\t\t\t\tdouble dep = find_var_dependency(root, innerRelid, var, clauses_attnums);\n> +\t\t\t\t\t\tif (dep != 0.0)\n> +\t\t\t\t\t\t{\n> +\t\t\t\t\t\t\ts1 *= dep + (1 - dep) * s2;\n> +\t\t\t\t\t\t\tcontinue;\n> +\t\t\t\t\t\t}\n>\n> I found the following comment of clauselist_apply_dependencies():\n>\n> * we actually combine selectivities using the formula\n> *\n> * P(a,b) = f * Min(P(a), P(b)) + (1-f) * P(a) * P(b)\n>\n> so, is it not necessary using the same formula in this patch? That is,\n>\n> s1 *= dep + (1-dep) * s2 (if s1 <= s2)\n> s1 *= dep * (s2/s1) + (1-dep) * s2 (otherwise) .\nMakes sense.\n\n> (8)\n> +/*\n> + * Try to find dependency between variables.\n> + * var: varaibles which dependencies are considered\n> + * join_vars: list of variables used in other clauses\n> + * This functions return strongest dependency and some subset of variables from the same relation\n> + * or 0.0 if no dependency was found.\n> + */\n> +static double\n> +var_depends_on(PlannerInfo *root, Var* var, List* clause_vars)\n> +{\n>\n> The comment mentions join_vars but the actual argument name is clauses_vars,\n> so it needs unification.\n\nFixed.\n\n>\n> (9)\n> Currently, it only consider functional dependencies statistics. Can we also\n> consider multivariate MCV list, and is it useful?\n\n\nRight now auto_explain create statistic without explicit specification \nof statistic kind.\nAccording to the documentation all supported statistics kinds should be \ncreated in this case:\n\n/|statistics_kind|/\n\n A statistics kind to be computed in this statistics object.\n Currently supported kinds are |ndistinct|, which enables n-distinct\n statistics, and |dependencies|, which enables functional dependency\n statistics. 
If this clause is omitted, all supported statistics\n kinds are included in the statistics object. For more information,\n see Section 14.2.2\n <https://www.postgresql.org/docs/10/planner-stats.html#PLANNER-STATS-EXTENDED>\n and Section 68.2\n <https://www.postgresql.org/docs/10/multivariate-statistics-examples.html>.\n\n\n\n>\n> (10)\n> To achieve adaptive query optimization (AQO) in PostgreSQL, this patch proposes\n> to use auto_explain for getting feedback from actual results. So, could auto_explain\n> be a infrastructure of AQO in future? Or, do you have any plan or idea to make\n> built-in infrastructure for AQO?\nSorry, I do not have an answer to this question.\nI just patched the auto_explain extension because it is doing  half of the \nrequired work (analyze  expensive statements).\nIt can certainly be moved to a separate extension. In this case it will \npartly duplicate existing functionality and\nsettings of auto_explain (like statement execution time threshold). I am \nnot sure that it is good.\nBut on the other side, this patch of mine makes the auto_explain extension \ndo some unexpected work...\n\nActually the task of adaptive query optimization is much bigger.\nWe have a separate AQO extension which tries to use machine learning to \ncorrectly adjust estimations.\nThis patch is much simpler and uses the existing mechanism (extended \nstatistics) to improve estimations.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 25 Jan 2021 16:27:25 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On Mon, 25 Jan 2021 16:27:25 +0300\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n> Hello,\n> \n> Thank you for review.\n> My answers are inside.\n\nThank you for updating the patch and answering my questions.\n\n> > (2)\n> > If I understand correctly, your 
proposal consists of the following two features.\n> >\n> > 1. Add a feature to auto_explain that creates an extended statistic automatically\n> > if an error on estimated rows number is large.\n> >\n> > 2. Improve rows number estimation of join results by considering functional\n> > dependencies between vars in the join qual if the qual has more than one clauses,\n> > and also functional dependencies between a var in the join qual and vars in quals\n> > of the inner/outer relation.\n> >\n> > As you said, these two parts are independent each other, so one feature will work\n> > even if we don't assume the other. I wonder it would be better to split the patch\n> > again, and register them to commitfest separately.\n> \n> I agree with you that this are two almost unrelated changes, although \n> without clausesel patch additional statistic can not improve query planning.\n\nI think extended statistics created by the auto_explain patch can improve query \nplanning even without the clausesel patch. 
For example, suppose the following case:\n\npostgres=# create table t ( i int, j int);\nCREATE TABLE\npostgres=# insert into t select i/10, i/100 from generate_series(1,1000000) i;\nINSERT 0 1000000\npostgres=# analyze t;\nANALYZE\npostgres=# explain analyze select * from t where i = 100 and j = 10;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..19425.00 rows=1 width=8) (actual time=0.254..97.293 rows=10 loops=1)\n Filter: ((i = 100) AND (j = 10))\n Rows Removed by Filter: 999990\n Planning Time: 0.199 ms\n Execution Time: 97.327 ms\n(5 rows)\n\nAfter applying the auto_explain patch (without clausesel patch) and issuing the query,\nadditional statistics were created.\n\npostgres=# select * from t where i = 100 and j = 10;\nLOG: Add statistics t_i_j\n\nThen, after analyze, the row estimation was improved.\n\npostgres=# analyze t;\nANALYZE\npostgres=# explain analyze select * from t where i = 100 and j = 10;\npostgres=# explain analyze select * from t where i = 100 and j = 10;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..19425.00 rows=10 width=8) (actual time=0.255..95.347 rows=10 loops=1)\n Filter: ((i = 100) AND (j = 10))\n Rows Removed by Filter: 999990\n Planning Time: 0.124 ms\n Execution Time: 95.383 ms\n(5 rows)\n\nSo, I think the auto_explain patch is useful with just that as a tool\nto detect a gap between estimate and real and adjust the plan. 
Also,\nthe clausesel patch would be useful without the auto_explain patch\nif an appropriate statistics are created.\n\n> But I already have too many patches at commitfest.\n> May be it will be enough to spit this patch into two?\n\nAlthough we can discuss both of these patches in this thread, \nI wonder we don't have to commit them together.\n\n> >\n> > (3)\n> > +\tDefineCustomBoolVariable(\"auto_explain.suggest_only\",\n> > +\t\t\t\t\t\t\t \"Do not create statistic but just record in WAL suggested create statistics statement.\",\n> > +\t\t\t\t\t\t\t NULL,\n> > +\t\t\t\t\t\t\t &auto_explain_suggest_on\n> >\n> > To imply that this parameter is involving to add_statistics_threshold, it seems\n> > better for me to use more related name like add_statistics_suggest_only.\n> >\n> > Also, additional documentations for new parameters are required.\n> \n> Done.\n\n+\n+ <varlistentry>\n+ <term>\n+ <varname>auto_explain.auto_explain.add_statistics_threshold</varname> (<type>real</type>)\n+ <indexterm>\n+ <primary><varname>auto_explain.add_statistics_threshold</varname> configuration parameter</primary>\n+ </indexterm>\n+ </term>\n+ <listitem>\n+ <para>\n+ <varname>auto_explain.add_statistics_threshold</varname> sets the threshold for\n+ actual/estimated #rows ratio triggering creation of multicolumn statistic\n+ for the related columns. It can be used for adpative query optimization.\n+ If there is large gap between real and estimated number of tuples for the\n+ concrete plan node, then multicolumn statistic is created for involved\n+ attributes. Zero value (default) disables implicit creation of multicolumn statistic.\n+ </para>\n+ </listitem>\n\nI wonder we need to say that this parameter has no effect unless log_analyze\nis enabled and that statistics are created only when the excution time exceeds \nlog_min_duration, if these behaviors are intentional.\n\nIn addition, additional statistics are created only if #rows is over-estimated\nand not if it is under-estimated. 
Although it seems good as a criterion for creating\nmulticolumn statistic since extended statisstic is usually useful to fix over-estimation,\nI am not sure if we don't have to consider under-estimation case at all.\n\n\n> > (9)\n> > Currently, it only consider functional dependencies statistics. Can we also\n> > consider multivariate MCV list, and is it useful?\n> \n> \n> Right now auto_explain create statistic without explicit specification \n> of statistic kind.\n> According to the documentation all supported statistics kinds should be \n> created in this case:\n\nYes, auto_explain creates all kinds of extended statistics. However,\nIIUC, the clausesel patch uses only functional dependencies statistics for\nimproving join, so my question was about possibility to consider MCV in the\nclausesel patch.\n\n> > (10)\n> > To achieve adaptive query optimization (AQO) in PostgreSQL, this patch proposes\n> > to use auto_explain for getting feedback from actual results. So, could auto_explain\n> > be a infrastructure of AQO in future? Or, do you have any plan or idea to make\n> > built-in infrastructure for AQO?\n> Sorry, I do not have answer for this question.\n> I just patched auto_explain extension because it is doing  half of the \n> required work (analyze  expensive statements).\n> It can be certainly moved to separate extension. In this case it will \n> party duplicate existed functionality and\n> settings of auto_explain (like statement execution time threshold). I am \n> not sure that it is good.\n> But from the other side, this my patch makes auto_explain extension to \n> do some unexpected work...\n\nI think that auto_explain is an extension originally for aiming to detect\nand log plans that take a long time, so it doesn't seem so unnatural for\nme to use this for improving such plans. 
Especially, the feature to find\ntunable points in executed plans seems useful.\n\n> Actually task of adaptive query optimization is much bigger.\n> We have separate AQO extension which tries to use machine learning to \n> correctly adjust estimations.\n> This my patch is much simpler and use existed mechanism (extended \n> statistics) to improve estimations.\n\nWell, this patch provide a kind of AQO as auto_explain feature, but this\nis independent of the AQO extension. Is it right?\nAnyway, I'm interested in the AQO extension, so I'll look into this, too.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 27 Jan 2021 14:45:17 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On 27.01.2021 8:45, Yugo NAGATA wrote:\n> On Mon, 25 Jan 2021 16:27:25 +0300\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n>\n>> Hello,\n>>\n>> Thank you for review.\n>> My answers are inside.\n> Thank you for updating the patch and answering my questions.\n>\n>>> (2)\n>>> If I understand correctly, your proposal consists of the following two features.\n>>>\n>>> 1. Add a feature to auto_explain that creates an extended statistic automatically\n>>> if an error on estimated rows number is large.\n>>>\n>>> 2. Improve rows number estimation of join results by considering functional\n>>> dependencies between vars in the join qual if the qual has more than one clauses,\n>>> and also functional dependencies between a var in the join qual and vars in quals\n>>> of the inner/outer relation.\n>>>\n>>> As you said, these two parts are independent each other, so one feature will work\n>>> even if we don't assume the other. 
I wonder it would be better to split the patch\n>>> again, and register them to commitfest separately.\n>> I agree with you that this are two almost unrelated changes, although\n>> without clausesel patch additional statistic can not improve query planning.\n> I think extended statistics created by the auto_explain patch can improve query\n> planning even without the clausesel patch. For example, suppose the following case:\n>\n> postgres=# create table t ( i int, j int);\n> CREATE TABLE\n> postgres=# insert into t select i/10, i/100 from generate_series(1,1000000) i;\n> INSERT 0 1000000\n> postgres=# analyze t;\n> ANALYZE\n> postgres=# explain analyze select * from t where i = 100 and j = 10;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------\n> Seq Scan on t (cost=0.00..19425.00 rows=1 width=8) (actual time=0.254..97.293 rows=10 loops=1)\n> Filter: ((i = 100) AND (j = 10))\n> Rows Removed by Filter: 999990\n> Planning Time: 0.199 ms\n> Execution Time: 97.327 ms\n> (5 rows)\n>\n> After applying the auto_explain patch (without clausesel patch) and issuing the query,\n> additional statistics were created.\n>\n> postgres=# select * from t where i = 100 and j = 10;\n> LOG: Add statistics t_i_j\n>\n> Then, after analyze, the row estimation was improved.\n>\n> postgres=# analyze t;\n> ANALYZE\n> postgres=# explain analyze select * from t where i = 100 and j = 10;\n> postgres=# explain analyze select * from t where i = 100 and j = 10;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------\n> Seq Scan on t (cost=0.00..19425.00 rows=10 width=8) (actual time=0.255..95.347 rows=10 loops=1)\n> Filter: ((i = 100) AND (j = 10))\n> Rows Removed by Filter: 999990\n> Planning Time: 0.124 ms\n> Execution Time: 95.383 ms\n> (5 rows)\n>\n> So, I think the auto_explain patch is useful with just that as a tool\n> to detect a gap between estimate and real and 
adjust the plan. Also,\n> the clausesel patch would be useful without the auto_explain patch\n> if an appropriate statistics are created.\n>\n>> But I already have too many patches at commitfest.\n>> May be it will be enough to spit this patch into two?\n> Although we can discuss both of these patches in this thread,\n> I wonder we don't have to commit them together.\n>\n>>> (3)\n>>> +\tDefineCustomBoolVariable(\"auto_explain.suggest_only\",\n>>> +\t\t\t\t\t\t\t \"Do not create statistic but just record in WAL suggested create statistics statement.\",\n>>> +\t\t\t\t\t\t\t NULL,\n>>> +\t\t\t\t\t\t\t &auto_explain_suggest_on\n>>>\n>>> To imply that this parameter is involving to add_statistics_threshold, it seems\n>>> better for me to use more related name like add_statistics_suggest_only.\n>>>\n>>> Also, additional documentations for new parameters are required.\n>> Done.\n> +\n> + <varlistentry>\n> + <term>\n> + <varname>auto_explain.auto_explain.add_statistics_threshold</varname> (<type>real</type>)\n> + <indexterm>\n> + <primary><varname>auto_explain.add_statistics_threshold</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + <varname>auto_explain.add_statistics_threshold</varname> sets the threshold for\n> + actual/estimated #rows ratio triggering creation of multicolumn statistic\n> + for the related columns. It can be used for adpative query optimization.\n> + If there is large gap between real and estimated number of tuples for the\n> + concrete plan node, then multicolumn statistic is created for involved\n> + attributes. 
Zero value (default) disables implicit creation of multicolumn statistic.\n> + </para>\n> + </listitem>\n>\n> I wonder we need to say that this parameter has no effect unless log_analyze\n> is enabled and that statistics are created only when the excution time exceeds\n> log_min_duration, if these behaviors are intentional.\n>\n> In addition, additional statistics are created only if #rows is over-estimated\n> and not if it is under-estimated. Although it seems good as a criterion for creating\n> multicolumn statistic since extended statisstic is usually useful to fix over-estimation,\n> I am not sure if we don't have to consider under-estimation case at all.\n>\n>\n>>> (9)\n>>> Currently, it only consider functional dependencies statistics. Can we also\n>>> consider multivariate MCV list, and is it useful?\n>>\n>> Right now auto_explain create statistic without explicit specification\n>> of statistic kind.\n>> According to the documentation all supported statistics kinds should be\n>> created in this case:\n> Yes, auto_explain creates all kinds of extended statistics. However,\n> IIUC, the clausesel patch uses only functional dependencies statistics for\n> improving join, so my question was about possibility to consider MCV in the\n> clausesel patch.\n>\n>>> (10)\n>>> To achieve adaptive query optimization (AQO) in PostgreSQL, this patch proposes\n>>> to use auto_explain for getting feedback from actual results. So, could auto_explain\n>>> be a infrastructure of AQO in future? Or, do you have any plan or idea to make\n>>> built-in infrastructure for AQO?\n>> Sorry, I do not have answer for this question.\n>> I just patched auto_explain extension because it is doing  half of the\n>> required work (analyze  expensive statements).\n>> It can be certainly moved to separate extension. In this case it will\n>> party duplicate existed functionality and\n>> settings of auto_explain (like statement execution time threshold). 
I am\n>> not sure that it is good.\n>> But from the other side, this my patch makes auto_explain extension to\n>> do some unexpected work...\n> I think that auto_explain is an extension originally for aiming to detect\n> and log plans that take a long time, so it doesn't seem so unnatural for\n> me to use this for improving such plans. Especially, the feature to find\n> tunable points in executed plans seems useful.\n>\n>> Actually task of adaptive query optimization is much bigger.\n>> We have separate AQO extension which tries to use machine learning to\n>> correctly adjust estimations.\n>> This my patch is much simpler and use existed mechanism (extended\n>> statistics) to improve estimations.\n> Well, this patch provide a kind of AQO as auto_explain feature, but this\n> is independent of the AQO extension. Is it right?\n> Anyway, I'm interested in the AQO extension, so I'll look into this, too.\n>\n> Regards,\n> Yugo Nagata\n>\n\nI have updated the documentation as you suggested and submitted a patch for \nthe auto_explain extension to the next commitfest.\nI will create a separate thread for improving join selectivity estimation \nusing extended statistics.\n\n> Well, this patch provide a kind of AQO as auto_explain feature, but this\n> is independent of the AQO extension. Is it right?\nYes. The basic idea is the same, but approaches are different.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 27 Jan 2021 19:39:00 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "Hello Konstantin,\n\n\nSorry for not responding to this thread earlier. 
I definitely agree the\nfeatures proposed here are very interesting and useful, and I appreciate\nyou kept rebasing the patch.\n\nI think the patch improving join estimates can be treated as separate,\nand I see it already has a separate CF entry - it however still points\nto this thread, which will be confusing. I suggest we start a different\nthread for it, to keep the discussions separate.\n\nI'll focus on the auto_explain part here.\n\n\nI did have some ideas about adaptive query optimization too, although\nmaybe in a slightly different form. My plan was to collect information\nabout estimated / actual cardinalities, and then use this knowledge to\ndirectly tweak the estimates. Directly, without creating extended stats,\nbut treat the collected info about estimates / row counts as a kind of\nad hoc statistics. (Not sure if this is what the AQE extension does.)\n\n\nWhat is being proposed here - an extension suggesting which statistics\nto create (and possibly creating them automatically) is certainly\nuseful, but I'm not sure I'd call it \"adaptive query optimization\". I\nthink \"adaptive\" means the extension directly modifies the estimates\nbased on past executions. So I propose calling it maybe \"statistics\nadvisor\" or something like that.\n\n\nA couple additional points:\n\n1) I think we should create a new extension for this.\n\nauto_explain has a fairly well defined purpose, I don't think this is\nconsistent with it. It's quite likely it'll require stuff like shared\nmemory, etc. which auto_explain does not (and should not) need.\n\nLet's call it statistics_advisor, or something like that. It will use\nabout the same planner/executor callbacks as auto_explain, but that's\nfine I think.\n\n\n2) I'm not sure creating statistics automatically based on a single\nquery execution is a good idea. 
I think we'll need to collect data from\nmultiple runs (in shared memory), and do suggestions based on that.\n\n\n3) I wonder if it should also consider duration of the query (who cares\nabout estimates if it still executed in 10ms)? Similarly, it probably\nshould require some minimal number of rows (1 vs. 10 rows is likely\ndifferent from 1M vs. 10M rows, but both are a 10x difference).\n\n\n4) Ideally it'd evaluate impact of the improved estimates on the whole\nquery plan (you may fix one node, but the cost difference for the whole\nquery may be negligible). But that seems very hard/expensive :-(\n\n\n5) I think AddMultiColumnStatisticsForQual() needs refactoring - it\nmixes stuff at many different levels of abstraction (generating names,\ndeciding which statistics to build, ...). I think it'll also need some\nimprovements to better identify which Vars to consider for statistics,\nand once we get support for statistics on expressions committed (which\nseems to be fairly close now) also to handle expressions.\n\nBTW Why is \"qual\" in\n\n static void\n AddMultiColumnStatisticsForQual(void* qual, ExplainState *es)\n\ndeclared as \"void *\"? Shouldn't that be \"List *\"?\n\n\n6) I'm not sure about automatically creating the stats. I can't imagine\nanyone actually enabling that on production, TBH (I myself probably\nwould not do that). I suggest we instead provide an easy way to show\nwhich statistics are suggested.\n\nFor one execution that might be integrated into EXPLAIN ANALYZE, I guess\n(through some callback, which seems fairly easy to do).\n\nFor many executions (you can leave it running for a couple of days, then\nsee what is the suggestion based on X runs) we could have a view or\nsomething. 
This would also work for read-only replicas, where just\ncreating the statistics is impossible.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 10 Mar 2021 03:00:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On 3/10/21 3:00 AM, Tomas Vondra wrote:\n> Hello Konstantin,\n> \n> \n> Sorry for not responding to this thread earlier. I definitely agree the\n> features proposed here are very interesting and useful, and I appreciate\n> you kept rebasing the patch.\n> \n> I think the patch improving join estimates can be treated as separate,\n> and I see it already has a separate CF entry - it however still points\n> to this thread, which will be confusing. I suggest we start a different\n> thread for it, to keep the discussions separate.\n> \n\nD'oh! I must have been confused yesterday, because now I see there\nalready is a separate thread [1] for the join selectivity patch. So you\ncan ignore this.\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/flat/71d67391-16a9-3e5e-b5e4-8f7fd32cc1b2@postgrespro.ru\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 11 Mar 2021 00:03:05 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On Wed, 10 Mar 2021 03:00:25 +0100\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> What is being proposed here - an extension suggesting which statistics\n> to create (and possibly creating them automatically) is certainly\n> useful, but I'm not sure I'd call it \"adaptive query optimization\". I\n> think \"adaptive\" means the extension directly modifies the estimates\n> based on past executions. 
So I propose calling it maybe \"statistics\n> advisor\" or something like that.\n\nI also agree with the idea to implement this feature as a new \nextension for statistics advisor.\n\n> BTW Why is \"qual\" in\n> \n> static void\n> AddMultiColumnStatisticsForQual(void* qual, ExplainState *es)\n> \n> declared as \"void *\"? Shouldn't that be \"List *\"?\n\nWhen I tested this extension using TPC-H queries, it raised a segmentation\nfault in this function. I think the cause would be around this argument.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Fri, 19 Mar 2021 18:17:26 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On 19.03.2021 12:17, Yugo NAGATA wrote:\n> On Wed, 10 Mar 2021 03:00:25 +0100\n> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n>> What is being proposed here - an extension suggesting which statistics\n>> to create (and possibly creating them automatically) is certainly\n>> useful, but I'm not sure I'd call it \"adaptive query optimization\". I\n>> think \"adaptive\" means the extension directly modifies the estimates\n>> based on past executions. So I propose calling it maybe \"statistics\n>> advisor\" or something like that.\n> I am also agree with the idea to implement this feature as a new\n> extension for statistics advisor.\n>\n>> BTW Why is \"qual\" in\n>>\n>>    static void\n>>    AddMultiColumnStatisticsForQual(void* qual, ExplainState *es)\n>>\n>> declared as \"void *\"? Shouldn't that be \"List *\"?\n> When I tested this extension using TPC-H queries, it raised segmentation\n> fault in this function. 
I think the cause would be around this argument.\n>\n> Regards,\n> Yugo Nagata\n>\nAttached please find new version of the patch with \nAddMultiColumnStatisticsForQual parameter type fix and one more fix \nrelated with handling synthetic attributes.\nI can not reproduce the crash on TPC-H queries, so if the problem \npersists, can you please send me stack trace and may be some other \ninformation helping to understand the reason of SIGSEGV?\n\nThanks in advance,\nKonstantin", "msg_date": "Fri, 19 Mar 2021 19:58:27 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "Hi,\nIn AddMultiColumnStatisticsForQual(),\n\n+ /* Loop until we considered all vars */\n+ while (vars != NULL)\n...\n+ /* Contruct list of unique vars */\n+ foreach (cell, vars)\n\nWhat if some cell / node, gets into the else block:\n\n+ else\n+ {\n+ continue;\n\nand being left in vars. Is there a chance for infinite loop ?\nIt seems there should be a bool variable indicating whether any cell gets\nto the following:\n\n+ vars = foreach_delete_current(vars, cell);\n\nIf no cell gets removed in the current iteration, the outer while loop\nshould exit.\n\nCheers\n\nOn Fri, Mar 19, 2021 at 9:58 AM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n>\n> On 19.03.2021 12:17, Yugo NAGATA wrote:\n> > On Wed, 10 Mar 2021 03:00:25 +0100\n> > Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> >\n> >> What is being proposed here - an extension suggesting which statistics\n> >> to create (and possibly creating them automatically) is certainly\n> >> useful, but I'm not sure I'd call it \"adaptive query optimization\". I\n> >> think \"adaptive\" means the extension directly modifies the estimates\n> >> based on past executions. 
So I propose calling it maybe \"statistics\n> >> advisor\" or something like that.\n> > I am also agree with the idea to implement this feature as a new\n> > extension for statistics advisor.\n> >\n> >> BTW Why is \"qual\" in\n> >>\n> >> static void\n> >> AddMultiColumnStatisticsForQual(void* qual, ExplainState *es)\n> >>\n> >> declared as \"void *\"? Shouldn't that be \"List *\"?\n> > When I tested this extension using TPC-H queries, it raised segmentation\n> > fault in this function. I think the cause would be around this argument.\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> Attached please find new version of the patch with\n> AddMultiColumnStatisticsForQual parameter type fix and one more fix\n> related with handling synthetic attributes.\n> I can not reproduce the crash on TPC-H queries, so if the problem\n> persists, can you please send me stack trace and may be some other\n> information helping to understand the reason of SIGSEGV?\n>\n> Thanks in advance,\n> Konstantin\n>\n>\n\nHi,In AddMultiColumnStatisticsForQual(), +   /* Loop until we considered all vars */+   while (vars != NULL)...+       /* Contruct list of unique vars */+       foreach (cell, vars)What if some cell / node, gets into the else block:+               else+               {+                   continue;and being left in vars. 
Is there a chance for infinite loop ?It seems there should be a bool variable indicating whether any cell gets to the following:+           vars = foreach_delete_current(vars, cell);If no cell gets removed in the current iteration, the outer while loop should exit.CheersOn Fri, Mar 19, 2021 at 9:58 AM Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\nOn 19.03.2021 12:17, Yugo NAGATA wrote:\n> On Wed, 10 Mar 2021 03:00:25 +0100\n> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n>> What is being proposed here - an extension suggesting which statistics\n>> to create (and possibly creating them automatically) is certainly\n>> useful, but I'm not sure I'd call it \"adaptive query optimization\". I\n>> think \"adaptive\" means the extension directly modifies the estimates\n>> based on past executions. So I propose calling it maybe \"statistics\n>> advisor\" or something like that.\n> I am also agree with the idea to implement this feature as a new\n> extension for statistics advisor.\n>\n>> BTW Why is \"qual\" in\n>>\n>>    static void\n>>    AddMultiColumnStatisticsForQual(void* qual, ExplainState *es)\n>>\n>> declared as \"void *\"? Shouldn't that be \"List *\"?\n> When I tested this extension using TPC-H queries, it raised segmentation\n> fault in this function. 
I think the cause would be around this argument.\n>\n> Regards,\n> Yugo Nagata\n>\nAttached please find new version of the patch with \nAddMultiColumnStatisticsForQual parameter type fix and one more fix \nrelated with handling synthetic attributes.\nI can not reproduce the crash on TPC-H queries, so if the problem \npersists, can you please send me stack trace and may be some other \ninformation helping to understand the reason of SIGSEGV?\n\nThanks in advance,\nKonstantin", "msg_date": "Fri, 19 Mar 2021 10:32:34 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On 19.03.2021 20:32, Zhihong Yu wrote:\n> Hi,\n> In AddMultiColumnStatisticsForQual(),\n>\n> +   /* Loop until we considered all vars */\n> +   while (vars != NULL)\n> ...\n> +       /* Contruct list of unique vars */\n> +       foreach (cell, vars)\n>\n> What if some cell / node, gets into the else block:\n>\n> +               else\n> +               {\n> +                   continue;\n>\n> and being left in vars. Is there a chance for infinite loop ?\n> It seems there should be a bool variable indicating whether any cell \n> gets to the following:\n>\n> +           vars = foreach_delete_current(vars, cell);\n>\n> If no cell gets removed in the current iteration, the outer while loop \n> should exit.\n\nEach iteration of outer loop (while (vars != NULL))\nprocess variables belonging to one relation.\nWe take \"else continue\" branch only if variable belongs to some other \nrelation.\nAt first iteration of foreach (cell, vars)\nvariable \"cols\" is NULL and we always take first branch of the if.\nIn other words, at each iteration of outer loop we always make some \nprogress in processing \"vars\" list and remove some elements\nfrom this list. 
So infinite loop can never happen.\n\n>\n> Cheers\n>\n> On Fri, Mar 19, 2021 at 9:58 AM Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n>\n>\n> On 19.03.2021 12:17, Yugo NAGATA wrote:\n> > On Wed, 10 Mar 2021 03:00:25 +0100\n> > Tomas Vondra <tomas.vondra@enterprisedb.com\n> <mailto:tomas.vondra@enterprisedb.com>> wrote:\n> >\n> >> What is being proposed here - an extension suggesting which\n> statistics\n> >> to create (and possibly creating them automatically) is certainly\n> >> useful, but I'm not sure I'd call it \"adaptive query\n> optimization\". I\n> >> think \"adaptive\" means the extension directly modifies the\n> estimates\n> >> based on past executions. So I propose calling it maybe \"statistics\n> >> advisor\" or something like that.\n> > I am also agree with the idea to implement this feature as a new\n> > extension for statistics advisor.\n> >\n> >> BTW Why is \"qual\" in\n> >>\n> >>    static void\n> >>    AddMultiColumnStatisticsForQual(void* qual, ExplainState *es)\n> >>\n> >> declared as \"void *\"? Shouldn't that be \"List *\"?\n> > When I tested this extension using TPC-H queries, it raised\n> segmentation\n> > fault in this function. 
I think the cause would be around this\n> argument.\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> Attached please find new version of the patch with\n> AddMultiColumnStatisticsForQual parameter type fix and one more fix\n> related with handling synthetic attributes.\n> I can not reproduce the crash on TPC-H queries, so if the problem\n> persists, can you please send me stack trace and may be some other\n> information helping to understand the reason of SIGSEGV?\n>\n> Thanks in advance,\n> Konstantin\n>\n\n\n\n\n\n\n\n\n\nOn 19.03.2021 20:32, Zhihong Yu wrote:\n\n\n\nHi,\n In AddMultiColumnStatisticsForQual(), \n\n\n+   /* Loop until we considered all vars */\n +   while (vars != NULL)\n\n...\n+       /* Contruct list of unique vars */\n +       foreach (cell, vars)\n\n\n\nWhat if some cell / node, gets into the else block:\n\n\n+               else\n +               {\n +                   continue;\n\n\n\nand being left in vars. Is there a chance for infinite loop\n ?\nIt seems there should be a bool variable indicating whether\n any cell gets to the following:\n\n\n+           vars = foreach_delete_current(vars, cell);\n\n\n\nIf no cell gets removed in the current iteration, the outer\n while loop should exit.\n\n\n\n Each iteration of outer loop (while (vars != NULL))\n process variables belonging to one relation.\n We take \"else continue\" branch only if variable belongs to some\n other relation.\n At first iteration of foreach (cell, vars)\n variable \"cols\" is NULL and we always take first branch of the if.\n In other words, at each iteration of outer loop we always make some\n progress in processing \"vars\" list and remove some elements\n from this list. 
So infinite loop can never happen.\n\n\n\n\n\nCheers\n\n\n\nOn Fri, Mar 19, 2021 at 9:58\n AM Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n wrote:\n\n\n\n On 19.03.2021 12:17, Yugo NAGATA wrote:\n > On Wed, 10 Mar 2021 03:00:25 +0100\n > Tomas Vondra <tomas.vondra@enterprisedb.com>\n wrote:\n >\n >> What is being proposed here - an extension suggesting\n which statistics\n >> to create (and possibly creating them automatically)\n is certainly\n >> useful, but I'm not sure I'd call it \"adaptive query\n optimization\". I\n >> think \"adaptive\" means the extension directly\n modifies the estimates\n >> based on past executions. So I propose calling it\n maybe \"statistics\n >> advisor\" or something like that.\n > I am also agree with the idea to implement this feature\n as a new\n > extension for statistics advisor.\n >\n >> BTW Why is \"qual\" in\n >>\n >>    static void\n >>    AddMultiColumnStatisticsForQual(void* qual,\n ExplainState *es)\n >>\n >> declared as \"void *\"? Shouldn't that be \"List *\"?\n > When I tested this extension using TPC-H queries, it\n raised segmentation\n > fault in this function. 
I think the cause would be around\n this argument.\n >\n > Regards,\n > Yugo Nagata\n >\n Attached please find new version of the patch with \n AddMultiColumnStatisticsForQual parameter type fix and one\n more fix \n related with handling synthetic attributes.\n I can not reproduce the crash on TPC-H queries, so if the\n problem \n persists, can you please send me stack trace and may be some\n other \n information helping to understand the reason of SIGSEGV?\n\n Thanks in advance,\n Konstantin", "msg_date": "Sat, 20 Mar 2021 12:41:44 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On Fri, 19 Mar 2021 19:58:27 +0300\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n> \n> \n> On 19.03.2021 12:17, Yugo NAGATA wrote:\n> > On Wed, 10 Mar 2021 03:00:25 +0100\n> > Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> >\n> >> What is being proposed here - an extension suggesting which statistics\n> >> to create (and possibly creating them automatically) is certainly\n> >> useful, but I'm not sure I'd call it \"adaptive query optimization\". I\n> >> think \"adaptive\" means the extension directly modifies the estimates\n> >> based on past executions. So I propose calling it maybe \"statistics\n> >> advisor\" or something like that.\n> > I am also agree with the idea to implement this feature as a new\n> > extension for statistics advisor.\n> >\n> >> BTW Why is \"qual\" in\n> >>\n> >> static void\n> >> AddMultiColumnStatisticsForQual(void* qual, ExplainState *es)\n> >>\n> >> declared as \"void *\"? Shouldn't that be \"List *\"?\n> > When I tested this extension using TPC-H queries, it raised segmentation\n> > fault in this function. 
I think the cause would be around this argument.\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> Attached please find new version of the patch with \n> AddMultiColumnStatisticsForQual parameter type fix and one more fix \n> related with handling synthetic attributes.\n> I can not reproduce the crash on TPC-H queries, so if the problem \n> persists, can you please send me stack trace and may be some other \n> information helping to understand the reason of SIGSEGV?\n\nI also could not reproduce the segfault. I don't know why I observed it,\nbut it may be because I missed something when installing. Sorry for\nannoying you.\n\nInstead, I observed \"ERROR: cache lookup failed for attribute 6 of\nrelation xxxx\" in v8 patch, but this was fixed in v9 patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 22 Mar 2021 11:29:25 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "Hello Konstantin,\n\nI tested this patch as a statistics advisor using TPC-H queries. \nThe used parameters are:\n\n auto_explain.add_statistics_suggest_only = on\n auto_explain.add_statistics_threshold = 0.1\n auto_explain.log_analyze = on\n auto_explain.log_min_duration = 0\n\nauto_explain suggested to create a few extented statistics for some\nqueries, but I could not find performance improvement with the \"join\nselectivity estimation using extended statistics\" patch. I think this\nis because there are no such correlation in TPC-H dataset that the\nextended statistics can help, though.\n\nDuring this test, I came up with a additional comments.\n\n1)\nAs found in TPC-H test, suggested extended statistics may not be useful\nfor improving performance. Therefore, to decide to adopt it or not, it\nwould be useful if we could get information about \"why we require it\" or\n\"why this is suggested\" as DETAIL or HINT. 
For example, we may show a\nline or snippet of EXPLAIN result as the reason of the suggestion.\n\n2)\nFor Q21 of TPC-H, the following extended statistics were suggested.\n\nNOTICE: Auto_explain suggestion: CREATE STATISTICS lineitem_l_commitdate_l_receiptdate ON l_commitdate, l_receiptdate FROM lineitem\nNOTICE: Auto_explain suggestion: CREATE STATISTICS lineitem_l_commitdate_l_receiptdate_l_suppkey ON l_commitdate, l_receiptdate, l_suppkey FROM lineitem\n\nThe latter's target columns includes the former's, so I am not sure\nwe need both of them. (Which we should adopt may be up to on administrator,\nthough.)\n\n3)\nFor Q22 of TPC-H, the following two same extended statistics were suggested.\n\nNOTICE: Auto_explain suggestion: CREATE STATISTICS customer_c_acctbal_c_phone ON c_acctbal, c_phone FROM customer\nNOTICE: Auto_explain suggestion: CREATE STATISTICS customer_c_acctbal_c_phone ON c_acctbal, c_phone FROM customer\n\nSo, when we set add_statistics_suggest_only to off, we get the following error:\n\nERROR: duplicate key value violates unique constraint \"pg_statistic_ext_name_index\"\nDETAIL: Key (stxname, stxnamespace)=(customer_c_acctbal_c_phone, 2200) already exists.\n\n\nRegards,\nYugo Nagata\n\nOn Sat, 20 Mar 2021 12:41:44 +0300\nKonstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n\n> \n> \n> On 19.03.2021 20:32, Zhihong Yu wrote:\n> > Hi,\n> > In AddMultiColumnStatisticsForQual(),\n> >\n> > +   /* Loop until we considered all vars */\n> > +   while (vars != NULL)\n> > ...\n> > +       /* Contruct list of unique vars */\n> > +       foreach (cell, vars)\n> >\n> > What if some cell / node, gets into the else block:\n> >\n> > +               else\n> > +               {\n> > +                   continue;\n> >\n> > and being left in vars. 
Is there a chance for infinite loop ?\n> > It seems there should be a bool variable indicating whether any cell \n> > gets to the following:\n> >\n> > +           vars = foreach_delete_current(vars, cell);\n> >\n> > If no cell gets removed in the current iteration, the outer while loop \n> > should exit.\n> \n> Each iteration of outer loop (while (vars != NULL))\n> process variables belonging to one relation.\n> We take \"else continue\" branch only if variable belongs to some other \n> relation.\n> At first iteration of foreach (cell, vars)\n> variable \"cols\" is NULL and we always take first branch of the if.\n> In other words, at each iteration of outer loop we always make some \n> progress in processing \"vars\" list and remove some elements\n> from this list. So infinite loop can never happen.\n> \n> >\n> > Cheers\n> >\n> > On Fri, Mar 19, 2021 at 9:58 AM Konstantin Knizhnik \n> > <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n> >\n> >\n> >\n> > On 19.03.2021 12:17, Yugo NAGATA wrote:\n> > > On Wed, 10 Mar 2021 03:00:25 +0100\n> > > Tomas Vondra <tomas.vondra@enterprisedb.com\n> > <mailto:tomas.vondra@enterprisedb.com>> wrote:\n> > >\n> > >> What is being proposed here - an extension suggesting which\n> > statistics\n> > >> to create (and possibly creating them automatically) is certainly\n> > >> useful, but I'm not sure I'd call it \"adaptive query\n> > optimization\". I\n> > >> think \"adaptive\" means the extension directly modifies the\n> > estimates\n> > >> based on past executions. So I propose calling it maybe \"statistics\n> > >> advisor\" or something like that.\n> > > I am also agree with the idea to implement this feature as a new\n> > > extension for statistics advisor.\n> > >\n> > >> BTW Why is \"qual\" in\n> > >>\n> > >>    static void\n> > >>    AddMultiColumnStatisticsForQual(void* qual, ExplainState *es)\n> > >>\n> > >> declared as \"void *\"? 
Shouldn't that be \"List *\"?\n> > > When I tested this extension using TPC-H queries, it raised\n> > segmentation\n> > > fault in this function. I think the cause would be around this\n> > argument.\n> > >\n> > > Regards,\n> > > Yugo Nagata\n> > >\n> > Attached please find new version of the patch with\n> > AddMultiColumnStatisticsForQual parameter type fix and one more fix\n> > related with handling synthetic attributes.\n> > I can not reproduce the crash on TPC-H queries, so if the problem\n> > persists, can you please send me stack trace and may be some other\n> > information helping to understand the reason of SIGSEGV?\n> >\n> > Thanks in advance,\n> > Konstantin\n> >\n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 22 Mar 2021 19:29:36 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "On Fri, Mar 19, 2021 at 10:28 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n>\n>\n>\n> On 19.03.2021 12:17, Yugo NAGATA wrote:\n> > On Wed, 10 Mar 2021 03:00:25 +0100\n> > Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> >\n> >> What is being proposed here - an extension suggesting which statistics\n> >> to create (and possibly creating them automatically) is certainly\n> >> useful, but I'm not sure I'd call it \"adaptive query optimization\". I\n> >> think \"adaptive\" means the extension directly modifies the estimates\n> >> based on past executions. So I propose calling it maybe \"statistics\n> >> advisor\" or something like that.\n> > I am also agree with the idea to implement this feature as a new\n> > extension for statistics advisor.\n> >\n> >> BTW Why is \"qual\" in\n> >>\n> >> static void\n> >> AddMultiColumnStatisticsForQual(void* qual, ExplainState *es)\n> >>\n> >> declared as \"void *\"? 
Shouldn't that be \"List *\"?\n> > When I tested this extension using TPC-H queries, it raised segmentation\n> > fault in this function. I think the cause would be around this argument.\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> Attached please find new version of the patch with\n> AddMultiColumnStatisticsForQual parameter type fix and one more fix\n> related with handling synthetic attributes.\n> I can not reproduce the crash on TPC-H queries, so if the problem\n> persists, can you please send me stack trace and may be some other\n> information helping to understand the reason of SIGSEGV?\n>\n\n\n\"C:\\projects\\postgresql\\pgsql.sln\" (default target) (1) ->\n\"C:\\projects\\postgresql\\auto_explain.vcxproj\" (default target) (45) ->\n(ClCompile target) ->\ncontrib/auto_explain/auto_explain.c(658): error C2039: 'mt_plans' : is\nnot a member of 'ModifyTableState'\n[C:\\projects\\postgresql\\auto_explain.vcxproj]\ncontrib/auto_explain/auto_explain.c(659): error C2039: 'mt_nplans' :\nis not a member of 'ModifyTableState'\n[C:\\projects\\postgresql\\auto_explain.vcxproj]\ncontrib/auto_explain/auto_explain.c(660): error C2198:\n'AddMultiColumnStatisticsForMemberNodes' : too few arguments for call\n[C:\\projects\\postgresql\\auto_explain.vcxproj]\n2 Warning(s)\n3 Error(s)\n\nAlso Yugo Nagata's comments need to be addressed, I'm changing the\nstatus to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 14 Jul 2021 16:43:42 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" }, { "msg_contents": "> On 14 Jul 2021, at 13:13, vignesh C <vignesh21@gmail.com> wrote:\n\n> \"C:\\projects\\postgresql\\pgsql.sln\" (default target) (1) ->\n> \"C:\\projects\\postgresql\\auto_explain.vcxproj\" (default target) (45) ->\n> (ClCompile target) ->\n> contrib/auto_explain/auto_explain.c(658): error C2039: 'mt_plans' : is\n> not a member of 'ModifyTableState'\n> 
[C:\\projects\\postgresql\\auto_explain.vcxproj]\n> contrib/auto_explain/auto_explain.c(659): error C2039: 'mt_nplans' :\n> is not a member of 'ModifyTableState'\n> [C:\\projects\\postgresql\\auto_explain.vcxproj]\n> contrib/auto_explain/auto_explain.c(660): error C2198:\n> 'AddMultiColumnStatisticsForMemberNodes' : too few arguments for call\n> [C:\\projects\\postgresql\\auto_explain.vcxproj]\n> 2 Warning(s)\n> 3 Error(s)\n> \n> Also Yugo Nagata's comments need to be addressed, I'm changing the\n> status to \"Waiting for Author\".\n\nAs this thread has stalled and the patch hasn't worked in the CI for quite some\ntime, I'm marking this Returned with Feedback. Feel free to open a new entry\nfor an updated patch.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 10:56:54 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Columns correlation and adaptive query optimization" } ]
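Konstantin's termination argument earlier in this thread - that each pass of the outer loop in AddMultiColumnStatisticsForQual() handles the relation of the first remaining var and removes at least that var, so the "else continue" branch cannot cause an infinite loop - can be sketched with an ordinary singly linked list. This is a minimal sketch: VarNode, push_var and process_vars are hypothetical stand-ins, not the PostgreSQL List/Var API.

```c
#include <stdlib.h>

/* Hypothetical stand-in for a Var: which relation it belongs to. */
typedef struct VarNode
{
	int			relid;
	struct VarNode *next;
} VarNode;

static VarNode *
push_var(VarNode *head, int relid)
{
	VarNode    *n = malloc(sizeof(VarNode));

	n->relid = relid;
	n->next = head;
	return n;
}

/*
 * Process vars one relation at a time, mimicking the loop shape under
 * discussion: each outer pass picks the relation of the first remaining
 * node and unlinks every node of that relation, skipping the rest.
 * Because the first node always matches its own relation, at least one
 * node is removed per pass, so the outer loop terminates.  Returns the
 * number of outer passes, i.e. the number of distinct relations handled.
 */
static int
process_vars(VarNode **vars)
{
	int			passes = 0;

	while (*vars != NULL)
	{
		int			relid = (*vars)->relid;	/* relation for this pass */
		VarNode   **link = vars;

		passes++;
		while (*link != NULL)
		{
			VarNode    *node = *link;

			if (node->relid == relid)
			{
				*link = node->next; /* unlink: the guaranteed progress */
				free(node);
			}
			else
				link = &node->next; /* var of another relation: skip it */
		}
	}
	return passes;
}
```

With vars of two relations interleaved in the list, process_vars() makes exactly two outer passes and consumes everything, which is the progress guarantee the thread relies on.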
[ { "msg_contents": "ProcArrayGroupClearXid() has this:\n\n\twhile (true)\n\t{\n\t\tnextidx = pg_atomic_read_u32(&procglobal->procArrayGroupFirst);\n\n\t\t...\n\n\t\tif (pg_atomic_compare_exchange_u32(&procglobal->procArrayGroupFirst,\n\t\t\t\t\t\t\t\t\t\t &nextidx,\n\t\t\t\t\t\t\t\t\t\t (uint32) proc->pgprocno))\n\t\t\tbreak;\n\t}\n\nThis, from UnpinBuffer(), is our more-typical style:\n\n\t\told_buf_state = pg_atomic_read_u32(&buf->state);\n\t\tfor (;;)\n\t\t{\n\t\t\t...\n\n\t\t\tif (pg_atomic_compare_exchange_u32(&buf->state, &old_buf_state,\n\t\t\t\t\t\t\t\t\t\t\t buf_state))\n\t\t\t\tbreak;\n\t\t}\n\nThat is, we typically put the pg_atomic_read_u32() outside the loop. After\nthe first iteration, it is redundant with the side effect of\npg_atomic_compare_exchange_u32(). I haven't checked whether this materially\nimproves performance, but, for style, I would like to change it in HEAD.", "msg_date": "Mon, 14 Oct 2019 20:53:48 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "ProcArrayGroupClearXid() compare-exchange style" }, { "msg_contents": "On Tue, Oct 15, 2019 at 9:23 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> ProcArrayGroupClearXid() has this:\n>\n> while (true)\n> {\n> nextidx = pg_atomic_read_u32(&procglobal->procArrayGroupFirst);\n>\n> ...\n>\n> if (pg_atomic_compare_exchange_u32(&procglobal->procArrayGroupFirst,\n> &nextidx,\n> (uint32) proc->pgprocno))\n> break;\n> }\n>\n> This, from UnpinBuffer(), is our more-typical style:\n>\n> old_buf_state = pg_atomic_read_u32(&buf->state);\n> for (;;)\n> {\n> ...\n>\n> if (pg_atomic_compare_exchange_u32(&buf->state, &old_buf_state,\n> buf_state))\n> break;\n> }\n>\n> That is, we typically put the pg_atomic_read_u32() outside the loop. After\n> the first iteration, it is redundant with the side effect of\n> pg_atomic_compare_exchange_u32(). I haven't checked whether this materially\n> improves performance, but, for style, I would like to change it in HEAD.\n>\n\n+1. 
I am not sure if it would improve performance as this whole\noptimization was to reduce the number of attempts to acquire LWLock,\nbut definitely, it makes the code consistent.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Oct 2019 15:00:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ProcArrayGroupClearXid() compare-exchange style" } ]
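The loop style agreed on above can be illustrated with plain C11 atomics, which behave like pg_atomic_compare_exchange_u32 in the relevant way: a failed compare-exchange writes the value actually found into the expected-value argument, so the read belongs outside the loop and a re-read at the top of each iteration is redundant. A minimal sketch under that assumption - the names cas_set_head/new_head are illustrative, not PostgreSQL code:

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * Replace the stored value, retrying on concurrent updates.  The atomic
 * is loaded once, before the loop; on each failed compare-exchange,
 * atomic_compare_exchange_strong refreshes "expected" with the value it
 * actually found, so no explicit re-read is needed before retrying.
 */
static uint32_t
cas_set_head(_Atomic uint32_t *head, uint32_t new_head)
{
	uint32_t	expected = atomic_load(head);

	for (;;)
	{
		if (atomic_compare_exchange_strong(head, &expected, new_head))
			break;
		/* "expected" now holds the latest *head; just retry */
	}
	return expected;			/* the value that was replaced */
}
```

In ProcArrayGroupClearXid() the same shape would read procArrayGroupFirst once before the loop, attempt to install proc->pgprocno, and rely on the failed compare-exchange to refresh nextidx, matching the UnpinBuffer() style quoted above.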
[ { "msg_contents": "While reviewing a parallel vacuum patch [1], we noticed a few things\nabout $SUBJECT implemented in commit -\n7df159a620b760e289f1795b13542ed1b3e13b87.\n\n1. A new memory context GistBulkDeleteResult->page_set_context has\nbeen introduced, but it doesn't seem to be used.\n2. Right now, in gistbulkdelete we make a note of empty leaf pages and\ninternals pages and then in the second pass during gistvacuumcleanup,\nwe delete all the empty leaf pages. I was thinking why unlike nbtree,\nwe have delayed the deletion of empty pages till gistvacuumcleanup. I\ndon't see any problem if we do this during gistbulkdelete itself\nsimilar to nbtree, also I think there is some advantage in marking the\npages as deleted as early as possible. Basically, if the vacuum\noperation is canceled or errored out between gistbulkdelete and\ngistvacuumcleanup, then I think the deleted pages could be marked as\nrecyclable very early in next vacuum operation. The other advantage\nof doing this during gistbulkdelete is we can avoid sharing\ninformation between gistbulkdelete and gistvacuumcleanup which is\nquite helpful for a parallel vacuum as the information is not trivial\n(it is internally stored as in-memory Btree). OTOH, there might be\nsome advantage for delaying the deletion of pages especially in the\ncase of multiple scans during a single VACUUM command. We can\nprobably delete all empty leaf pages in one go which could in some\ncases lead to fewer internal page reads. However, I am not sure if it\nis really advantageous to postpone the deletion as there seem to be\nsome downsides to it as well. 
I don't see it documented why unlike\nnbtree we consider delaying deletion of empty pages till\ngistvacuumcleanup, but I might be missing something.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JEQ2y3uNucNopDjK8pse6xSe5%3D_oknoWfRQvAF%3DVqsBA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Oct 2019 12:07:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Questions/Observations related to Gist vacuum" }, { "msg_contents": "On 15/10/2019 09:37, Amit Kapila wrote:\n> While reviewing a parallel vacuum patch [1], we noticed a few things\n> about $SUBJECT implemented in commit -\n> 7df159a620b760e289f1795b13542ed1b3e13b87.\n> \n> 1. A new memory context GistBulkDeleteResult->page_set_context has\n> been introduced, but it doesn't seem to be used.\n\nOops. internal_page_set and empty_leaf_set were supposed to be allocated \nin that memory context. As things stand, we leak them until end of \nvacuum, in a multi-pass vacuum.\n\n> 2. Right now, in gistbulkdelete we make a note of empty leaf pages and\n> internals pages and then in the second pass during gistvacuumcleanup,\n> we delete all the empty leaf pages. I was thinking why unlike nbtree,\n> we have delayed the deletion of empty pages till gistvacuumcleanup. I\n> don't see any problem if we do this during gistbulkdelete itself\n> similar to nbtree, also I think there is some advantage in marking the\n> pages as deleted as early as possible. Basically, if the vacuum\n> operation is canceled or errored out between gistbulkdelete and\n> gistvacuumcleanup, then I think the deleted pages could be marked as\n> recyclable very early in next vacuum operation. 
The other advantage\n> of doing this during gistbulkdelete is we can avoid sharing\n> information between gistbulkdelete and gistvacuumcleanup which is\n> quite helpful for a parallel vacuum as the information is not trivial\n> (it is internally stored as in-memory Btree). OTOH, there might be\n> some advantage for delaying the deletion of pages especially in the\n> case of multiple scans during a single VACUUM command. We can\n> probably delete all empty leaf pages in one go which could in some\n> cases lead to fewer internal page reads. However, I am not sure if it\n> is really advantageous to postpone the deletion as there seem to be\n> some downsides to it as well. I don't see it documented why unlike\n> nbtree we consider delaying deletion of empty pages till\n> gistvacuumcleanup, but I might be missing something.\n\nHmm. The thinking is/was that removing the empty pages is somewhat \nexpensive, because it has to scan all the internal nodes to find the \ndownlinks to the to-be-deleted pages. Furthermore, it needs to scan all \nthe internal pages (or at least until it has found all the downlinks), \nregardless of how many empty pages are being deleted. So it makes sense \nto do it only once, for all the empty pages. You're right though, that \nthere would be advantages, too, in doing it after each pass. All things \nconsidered, I'm not sure which is better.\n\n- Heikki\n\n\n", "msg_date": "Tue, 15 Oct 2019 15:43:25 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 15/10/2019 09:37, Amit Kapila wrote:\n> > While reviewing a parallel vacuum patch [1], we noticed a few things\n> > about $SUBJECT implemented in commit -\n> > 7df159a620b760e289f1795b13542ed1b3e13b87.\n> >\n> > 1. 
A new memory context GistBulkDeleteResult->page_set_context has\n> > been introduced, but it doesn't seem to be used.\n>\n> Oops. internal_page_set and empty_leaf_set were supposed to be allocated\n> in that memory context. As things stand, we leak them until end of\n> vacuum, in a multi-pass vacuum.\n\nHere is a patch to fix this issue.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 16 Oct 2019 11:20:01 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 15/10/2019 09:37, Amit Kapila wrote:\n> > 2. Right now, in gistbulkdelete we make a note of empty leaf pages and\n> > internals pages and then in the second pass during gistvacuumcleanup,\n> > we delete all the empty leaf pages. I was thinking why unlike nbtree,\n> > we have delayed the deletion of empty pages till gistvacuumcleanup. I\n> > don't see any problem if we do this during gistbulkdelete itself\n> > similar to nbtree, also I think there is some advantage in marking the\n> > pages as deleted as early as possible. Basically, if the vacuum\n> > operation is canceled or errored out between gistbulkdelete and\n> > gistvacuumcleanup, then I think the deleted pages could be marked as\n> > recyclable very early in next vacuum operation. The other advantage\n> > of doing this during gistbulkdelete is we can avoid sharing\n> > information between gistbulkdelete and gistvacuumcleanup which is\n> > quite helpful for a parallel vacuum as the information is not trivial\n> > (it is internally stored as in-memory Btree). OTOH, there might be\n> > some advantage for delaying the deletion of pages especially in the\n> > case of multiple scans during a single VACUUM command. 
We can\n> > probably delete all empty leaf pages in one go which could in some\n> > cases lead to fewer internal page reads. However, I am not sure if it\n> > is really advantageous to postpone the deletion as there seem to be\n> > some downsides to it as well. I don't see it documented why unlike\n> > nbtree we consider delaying deletion of empty pages till\n> > gistvacuumcleanup, but I might be missing something.\n>\n> Hmm. The thinking is/was that removing the empty pages is somewhat\n> expensive, because it has to scan all the internal nodes to find the\n> downlinks to the to-be-deleted pages. Furthermore, it needs to scan all\n> the internal pages (or at least until it has found all the downlinks),\n> regardless of how many empty pages are being deleted. So it makes sense\n> to do it only once, for all the empty pages. You're right though, that\n> there would be advantages, too, in doing it after each pass.\n>\n\nI was thinking more about this and it seems that there could be more\ncases where delaying the delete mark for pages can further delay the\nrecycling of pages. It is quite possible that immediately after bulk\ndelete the value of nextFullXid (used as deleteXid) is X and during\nvacuum clean up it can be X + N where the chances of N being large is\nmore when there are multiple passes of vacuum scan. Now, if we would\nhave set the value of deleteXid as X, then there are more chances for\nthe next vacuum to recycle it. 
I am not sure but it might be that in\nthe future, we could come up with something (say if we can recompute\nRecentGlobalXmin again) where we can recycle pages of first index scan\nin the next scan of the index during single vacuum operation.\n\nThis is just to emphasize the point that doing the delete marking of\npages in the same pass has advantages, otherwise, I understand that\nthere are advantages in delaying it as well.\n\n> All things\n> considered, I'm not sure which is better.\n>\n\nYeah, this is a tough call to make, but if we can allow it to delete\nthe pages in bulkdelete conditionally for parallel vacuum workers,\nthen it would be better.\n\nI think we have below option w.r.t Gist indexes for parallel vacuum\na. don't allow Gist Index to participate in parallel vacuum\nb. allow delete of leaf pages in bulkdelete for parallel worker\nc. always allow deleting leaf pages in bulkdelete\nd. Invent some mechanism to share all the Gist stats via shared memory\n\n(a) is not a very good option, but it is a safe option as we can\nextend it in the future and we might decide to go with it especially\nif we can't decide among any other options. (b) would serve the need\nbut would add some additional checks in gistbulkdelete and will look\nmore like a hack. (c) would be best, but I think it will be difficult\nto be sure that is a good decision for all type of cases. (d) can\nrequire a lot of effort and I am not sure if it is worth adding\ncomplexity in the proposed patch.\n\nDo you have any thoughts on this?\n\nJust to give you an idea of the current parallel vacuum patch, the\nmaster backend scans the heap and forms the dead tuple array in dsm\nand then we launch one worker for each index based on the availability\nof workers and share the dead tuple array with each worker. Each\nworker performs bulkdelete for the index. 
In the end, we perform\ncleanup of all the indexes either via worker or master backend based\non some conditions.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Oct 2019 16:27:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>\n>wrote:\n>> All things\n>> considered, I'm not sure which is better.\n>\n>Yeah, this is a tough call to make, but if we can allow it to delete\n>the pages in bulkdelete conditionally for parallel vacuum workers,\n>then it would be better.\n\nYeah, if it's needed for parallel vacuum, maybe that tips the scale.\n\nHopefully, multi-pass vacuums are rare in practice. And we should lift the current 1 GB limit on the dead TID array, replacing it with something more compact and expandable, to make multi-pass vacuums even more rare. So I don't think we need to jump through many hoops to optimize the multi-pass case.\n\n- Heikki\n\n\n", "msg_date": "Wed, 16 Oct 2019 15:51:49 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Wed, Oct 16, 2019 at 11:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 15/10/2019 09:37, Amit Kapila wrote:\n> > > While reviewing a parallel vacuum patch [1], we noticed a few things\n> > > about $SUBJECT implemented in commit -\n> > > 7df159a620b760e289f1795b13542ed1b3e13b87.\n> > >\n> > > 1. A new memory context GistBulkDeleteResult->page_set_context has\n> > > been introduced, but it doesn't seem to be used.\n> >\n> > Oops. 
internal_page_set and empty_leaf_set were supposed to be allocated\n> > in that memory context. As things stand, we leak them until end of\n> > vacuum, in a multi-pass vacuum.\n>\n> Here is a patch to fix this issue.\n>\n\nThe patch looks good to me. I have slightly modified the comments and\nremoved unnecessary initialization.\n\nHeikki, are you fine me committing and backpatching this to 12? Let\nme know if you have a different idea to fix.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 17 Oct 2019 09:01:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Wed, Oct 16, 2019 at 7:21 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>\n> >wrote:\n> >> All things\n> >> considered, I'm not sure which is better.\n> >\n> >Yeah, this is a tough call to make, but if we can allow it to delete\n> >the pages in bulkdelete conditionally for parallel vacuum workers,\n> >then it would be better.\n>\n> Yeah, if it's needed for parallel vacuum, maybe that tips the scale.\n>\n\nmakes sense. I think we can write a patch for it and prepare the\nparallel vacuum patch on top of it. Once the parallel vacuum is in a\ncommittable shape, we can commit the gist-index related patch first\nfollowed by parallel vacuum patch.\n\n> Hopefully, multi-pass vacuums are rare in practice. And we should lift the current 1 GB limit on the dead TID array, replacing it with something more compact and expandable, to make multi-pass vacuums even more rare. 
So I don't think we need to jump through many hoops to optimize the multi-pass case.\n>\n\nYeah, that will be a good improvement.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Oct 2019 09:15:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Thu, Oct 17, 2019 at 9:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 16, 2019 at 7:21 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>\n> > >wrote:\n> > >> All things\n> > >> considered, I'm not sure which is better.\n> > >\n> > >Yeah, this is a tough call to make, but if we can allow it to delete\n> > >the pages in bulkdelete conditionally for parallel vacuum workers,\n> > >then it would be better.\n> >\n> > Yeah, if it's needed for parallel vacuum, maybe that tips the scale.\n> >\n>\n> makes sense. I think we can write a patch for it and prepare the\n> parallel vacuum patch on top of it. Once the parallel vacuum is in a\n> committable shape, we can commit the gist-index related patch first\n> followed by parallel vacuum patch.\n\n+1\nI can write a patch for the same.\n\n> > Hopefully, multi-pass vacuums are rare in practice. And we should lift the current 1 GB limit on the dead TID array, replacing it with something more compact and expandable, to make multi-pass vacuums even more rare. 
So I don't think we need to jump through many hoops to optimize the multi-pass case.\n> >\n>\n> Yeah, that will be a good improvement.\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Oct 2019 09:51:29 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On 17/10/2019 05:31, Amit Kapila wrote:\n> On Wed, Oct 16, 2019 at 11:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>>\n>>> On 15/10/2019 09:37, Amit Kapila wrote:\n>>>> While reviewing a parallel vacuum patch [1], we noticed a few things\n>>>> about $SUBJECT implemented in commit -\n>>>> 7df159a620b760e289f1795b13542ed1b3e13b87.\n>>>>\n>>>> 1. A new memory context GistBulkDeleteResult->page_set_context has\n>>>> been introduced, but it doesn't seem to be used.\n>>>\n>>> Oops. internal_page_set and empty_leaf_set were supposed to be allocated\n>>> in that memory context. As things stand, we leak them until end of\n>>> vacuum, in a multi-pass vacuum.\n>>\n>> Here is a patch to fix this issue.\n> \n> The patch looks good to me. I have slightly modified the comments and\n> removed unnecessary initialization.\n> \n> Heikki, are you fine me committing and backpatching this to 12? Let\n> me know if you have a different idea to fix.\n\nThanks! Looks good to me. 
Did either of you test it, though, with a \nmulti-pass vacuum?\n\n- Heikki\n\n\n", "msg_date": "Thu, 17 Oct 2019 08:57:12 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 17/10/2019 05:31, Amit Kapila wrote:\n> > On Wed, Oct 16, 2019 at 11:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>\n> >> On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >>>\n> >>> On 15/10/2019 09:37, Amit Kapila wrote:\n> >>>> While reviewing a parallel vacuum patch [1], we noticed a few things\n> >>>> about $SUBJECT implemented in commit -\n> >>>> 7df159a620b760e289f1795b13542ed1b3e13b87.\n> >>>>\n> >>>> 1. A new memory context GistBulkDeleteResult->page_set_context has\n> >>>> been introduced, but it doesn't seem to be used.\n> >>>\n> >>> Oops. internal_page_set and empty_leaf_set were supposed to be allocated\n> >>> in that memory context. As things stand, we leak them until end of\n> >>> vacuum, in a multi-pass vacuum.\n> >>\n> >> Here is a patch to fix this issue.\n> >\n> > The patch looks good to me. I have slightly modified the comments and\n> > removed unnecessary initialization.\n> >\n> > Heikki, are you fine me committing and backpatching this to 12? Let\n> > me know if you have a different idea to fix.\n>\n> Thanks! Looks good to me. 
Did either of you test it, though, with a\n> multi-pass vacuum?\n\n From my side, I have tested it with the multi-pass vacuum using the\ngist index and after the fix, it's using expected memory context.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Oct 2019 13:46:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 17/10/2019 05:31, Amit Kapila wrote:\n> > >\n> > > The patch looks good to me. I have slightly modified the comments and\n> > > removed unnecessary initialization.\n> > >\n> > > Heikki, are you fine me committing and backpatching this to 12? Let\n> > > me know if you have a different idea to fix.\n> >\n> > Thanks! Looks good to me. Did either of you test it, though, with a\n> > multi-pass vacuum?\n>\n> From my side, I have tested it with the multi-pass vacuum using the\n> gist index and after the fix, it's using expected memory context.\n>\n\nI have also verified that, but I think what additionally we can test\nhere is that without the patch it will leak the memory in\nTopTransactionContext (CurrentMemoryContext), but after patch it\nshouldn't leak it during multi-pass vacuum.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Oct 2019 14:58:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Thu, 17 Oct 2019, 14:59 Amit Kapila, <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi>\n> wrote:\n> > 
>\n> > > On 17/10/2019 05:31, Amit Kapila wrote:\n> > > >\n> > > > The patch looks good to me. I have slightly modified the comments\n> and\n> > > > removed unnecessary initialization.\n> > > >\n> > > > Heikki, are you fine me committing and backpatching this to 12? Let\n> > > > me know if you have a different idea to fix.\n> > >\n> > > Thanks! Looks good to me. Did either of you test it, though, with a\n> > > multi-pass vacuum?\n> >\n> > From my side, I have tested it with the multi-pass vacuum using the\n> > gist index and after the fix, it's using expected memory context.\n> >\n>\n> I have also verified that, but I think what additionally we can test\n> here is that without the patch it will leak the memory in\n> TopTransactionContext (CurrentMemoryContext), but after patch it\n> shouldn't leak it during multi-pass vacuum.\n>\n> Make sense to me, I will test that by tomorrow.\n\nOn Thu, 17 Oct 2019, 14:59 Amit Kapila, <amit.kapila16@gmail.com> wrote:On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 17/10/2019 05:31, Amit Kapila wrote:\n> > >\n> > > The patch looks good to me.  I have slightly modified the comments and\n> > > removed unnecessary initialization.\n> > >\n> > > Heikki, are you fine me committing and backpatching this to 12?  Let\n> > > me know if you have a different idea to fix.\n> >\n> > Thanks! Looks good to me. 
Did either of you test it, though, with a\n> > multi-pass vacuum?\n>\n> From my side, I have tested it with the multi-pass vacuum using the\n> gist index and after the fix, it's using expected memory context.\n>\n\nI have also verified that, but I think what additionally we can test\nhere is that without the patch it will leak the memory in\nTopTransactionContext (CurrentMemoryContext), but after patch it\nshouldn't leak it during multi-pass vacuum.\nMake sense to me, I will test that by tomorrow.", "msg_date": "Thu, 17 Oct 2019 18:32:31 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Thu, Oct 17, 2019 at 6:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, 17 Oct 2019, 14:59 Amit Kapila, <amit.kapila16@gmail.com> wrote:\n>>\n>> On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> >\n>> > On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> > >\n>> > > On 17/10/2019 05:31, Amit Kapila wrote:\n>> > > >\n>> > > > The patch looks good to me. I have slightly modified the comments and\n>> > > > removed unnecessary initialization.\n>> > > >\n>> > > > Heikki, are you fine me committing and backpatching this to 12? Let\n>> > > > me know if you have a different idea to fix.\n>> > >\n>> > > Thanks! Looks good to me. 
Did either of you test it, though, with a\n>> > > multi-pass vacuum?\n>> >\n>> > From my side, I have tested it with the multi-pass vacuum using the\n>> > gist index and after the fix, it's using expected memory context.\n>> >\n>>\n>> I have also verified that, but I think what additionally we can test\n>> here is that without the patch it will leak the memory in\n>> TopTransactionContext (CurrentMemoryContext), but after patch it\n>> shouldn't leak it during multi-pass vacuum.\n>>\n>> Make sense to me, I will test that by tomorrow.\n\nI have performed the test to observe the memory usage in the\nTopTransactionContext using the MemoryContextStats function from gdb.\n\nFor testing this, in gistvacuumscan every time, after it resets the\npage_set_context, I have collected the sample. I have collected 3\nsamples for both the head and the patch. We can clearly see that on\nthe head the memory is getting accumulated in the\nTopTransactionContext whereas with the patch there is no memory\ngetting accumulated.\n\nhead:\nTopTransactionContext: 1056832 total in 2 blocks; 3296 free (5\nchunks); 1053536 used\n GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0\nfree (0 chunks); 112 used\nGrand total: 1056944 bytes in 2 blocks; 3296 free (5 chunks); 1053648 used\n\nTopTransactionContext: 1089600 total in 4 blocks; 19552 free (14\nchunks); 1070048 used\n GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0\nfree (0 chunks); 112 used\nGrand total: 1089712 bytes in 4 blocks; 19552 free (14 chunks); 1070160 used\n\nTopTransactionContext: 1122368 total in 5 blocks; 35848 free (20\nchunks); 1086520 used\n GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0\nfree (0 chunks); 112 used\nGrand total: 1122480 bytes in 5 blocks; 35848 free (20 chunks); 1086632 used\n\n\nWith Patch:\nTopTransactionContext: 1056832 total in 2 blocks; 3296 free (1\nchunks); 1053536 used\n GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0\nfree (0 
chunks); 112 used\nGrand total: 1056944 bytes in 2 blocks; 3296 free (1 chunks); 1053648 used\n\nTopTransactionContext: 1056832 total in 2 blocks; 3296 free (1\nchunks); 1053536 used\n GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0\nfree (0 chunks); 112 used\nGrand total: 1056944 bytes in 2 blocks; 3296 free (1 chunks); 1053648 used\n\nTopTransactionContext: 1056832 total in 2 blocks; 3296 free (1\nchunks); 1053536 used\n GiST VACUUM page set context: 112 total in 0 blocks (0 chunks); 0\nfree (0 chunks); 112 used\nGrand total: 1056944 bytes in 2 blocks; 3296 free (1 chunks); 1053648 used\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Oct 2019 09:34:28 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Wed, Oct 16, 2019 at 7:22 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>\n> >wrote:\n> >> All things\n> >> considered, I'm not sure which is better.\n> >\n> >Yeah, this is a tough call to make, but if we can allow it to delete\n> >the pages in bulkdelete conditionally for parallel vacuum workers,\n> >then it would be better.\n>\n> Yeah, if it's needed for parallel vacuum, maybe that tips the scale.\n\nAre we planning to do this only if it is called from parallel vacuum\nworkers or in general?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Oct 2019 09:41:41 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Fri, Oct 18, 2019 at 9:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Oct 17, 2019 at 6:32 PM Dilip Kumar 
<dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, 17 Oct 2019, 14:59 Amit Kapila, <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >> >\n> >> > On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >> > >\n> >> > > Thanks! Looks good to me. Did either of you test it, though, with a\n> >> > > multi-pass vacuum?\n> >> >\n> >> > From my side, I have tested it with the multi-pass vacuum using the\n> >> > gist index and after the fix, it's using expected memory context.\n> >> >\n> >>\n> >> I have also verified that, but I think what additionally we can test\n> >> here is that without the patch it will leak the memory in\n> >> TopTransactionContext (CurrentMemoryContext), but after patch it\n> >> shouldn't leak it during multi-pass vacuum.\n> >>\n> >> Make sense to me, I will test that by tomorrow.\n>\n> I have performed the test to observe the memory usage in the\n> TopTransactionContext using the MemoryContextStats function from gdb.\n>\n> For testing this, in gistvacuumscan every time, after it resets the\n> page_set_context, I have collected the sample. I have collected 3\n> samples for both the head and the patch. We can clearly see that on\n> the head the memory is getting accumulated in the\n> TopTransactionContext whereas with the patch there is no memory\n> getting accumulated.\n>\n\nThanks for the test. It shows that prior to patch the memory was\ngetting leaked in TopTransactionContext during multi-pass vacuum and\nafter patch, there is no leak. 
I will commit the patch early next\nweek unless Heikki or someone wants some more tests.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Oct 2019 10:48:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Fri, Oct 18, 2019 at 9:41 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Oct 16, 2019 at 7:22 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>\n> > >wrote:\n> > >> All things\n> > >> considered, I'm not sure which is better.\n> > >\n> > >Yeah, this is a tough call to make, but if we can allow it to delete\n> > >the pages in bulkdelete conditionally for parallel vacuum workers,\n> > >then it would be better.\n> >\n> > Yeah, if it's needed for parallel vacuum, maybe that tips the scale.\n>\n> Are we planning to do this only if it is called from parallel vacuum\n> workers or in general?\n>\n\nI think we can do it in general as adding some check for parallel\nvacuum there will look bit hackish. It is not clear if we get enough\nbenefit by keeping it for cleanup phase of the index as discussed in\nemails above. 
Heikki, others, let us know if you don't agree here.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 18 Oct 2019 10:55:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Fri, Oct 18, 2019 at 10:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 18, 2019 at 9:41 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Oct 16, 2019 at 7:22 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > >\n> > > On 16 October 2019 12:57:03 CEST, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi>\n> > > >wrote:\n> > > >> All things\n> > > >> considered, I'm not sure which is better.\n> > > >\n> > > >Yeah, this is a tough call to make, but if we can allow it to delete\n> > > >the pages in bulkdelete conditionally for parallel vacuum workers,\n> > > >then it would be better.\n> > >\n> > > Yeah, if it's needed for parallel vacuum, maybe that tips the scale.\n> >\n> > Are we planning to do this only if it is called from parallel vacuum\n> > workers or in general?\n> >\n>\n> I think we can do it in general as adding some check for parallel\n> vacuum there will look bit hackish.\nI agree with that point.\n It is not clear if we get enough\n> benefit by keeping it for cleanup phase of the index as discussed in\n> emails above. Heikki, others, let us know if you don't agree here.\n\nI have prepared a first version of the patch. 
Currently, I am\nperforming an empty page deletion for all the cases.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 18 Oct 2019 16:51:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Fri, Oct 18, 2019 at 10:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Thanks for the test. It shows that prior to patch the memory was\n> getting leaked in TopTransactionContext during multi-pass vacuum and\n> after patch, there is no leak. I will commit the patch early next\n> week unless Heikki or someone wants some more tests.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Oct 2019 11:23:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Mon, Oct 21, 2019 at 11:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 18, 2019 at 10:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Thanks for the test. It shows that prior to patch the memory was\n> > getting leaked in TopTransactionContext during multi-pass vacuum and\n> > after patch, there is no leak. I will commit the patch early next\n> > week unless Heikki or someone wants some more tests.\n> >\n>\n> Pushed.\nThanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Oct 2019 11:30:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "Hi!\n\n> 18 окт. 
2019 г., в 13:21, Dilip Kumar <dilipbalaut@gmail.com> написал(а):\n> \n> On Fri, Oct 18, 2019 at 10:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> \n>> \n>> I think we can do it in general as adding some check for parallel\n>> vacuum there will look bit hackish.\n> I agree with that point.\n> It is not clear if we get enough\n>> benefit by keeping it for cleanup phase of the index as discussed in\n>> emails above. Heikki, others, let us know if you don't agree here.\n> \n> I have prepared a first version of the patch. Currently, I am\n> performing an empty page deletion for all the cases.\n\nI've took a look into the patch, and cannot understand one simple thing...\nWe should not call gistvacuum_delete_empty_pages() for same gist_stats twice.\nAnother way once the function is called we should somehow update or zero empty_leaf_set.\nDoes this invariant hold in your patch?\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 21 Oct 2019 11:00:47 +0200", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Mon, Oct 21, 2019 at 2:30 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> Hi!\n>\n> > 18 окт. 2019 г., в 13:21, Dilip Kumar <dilipbalaut@gmail.com> написал(а):\n> >\n> > On Fri, Oct 18, 2019 at 10:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >>\n> >> I think we can do it in general as adding some check for parallel\n> >> vacuum there will look bit hackish.\n> > I agree with that point.\n> > It is not clear if we get enough\n> >> benefit by keeping it for cleanup phase of the index as discussed in\n> >> emails above. Heikki, others, let us know if you don't agree here.\n> >\n> > I have prepared a first version of the patch. 
Currently, I am\n> > performing an empty page deletion for all the cases.\n>\n> I've took a look into the patch, and cannot understand one simple thing...\n> We should not call gistvacuum_delete_empty_pages() for same gist_stats twice.\n> Another way once the function is called we should somehow update or zero empty_leaf_set.\n> Does this invariant hold in your patch?\n>\nThanks for looking into the patch. With this patch now\nGistBulkDeleteResult is local to single gistbulkdelete call or\ngistvacuumcleanup. So now we are not sharing GistBulkDeleteResult,\nacross the calls so I am not sure how it will be called twice for the\nsame gist_stats? I might be missing something here?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Oct 2019 14:42:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "\n\n> 21 окт. 2019 г., в 11:12, Dilip Kumar <dilipbalaut@gmail.com> написал(а):\n> \n> On Mon, Oct 21, 2019 at 2:30 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> \n>> I've took a look into the patch, and cannot understand one simple thing...\n>> We should not call gistvacuum_delete_empty_pages() for same gist_stats twice.\n>> Another way once the function is called we should somehow update or zero empty_leaf_set.\n>> Does this invariant hold in your patch?\n>> \n> Thanks for looking into the patch. With this patch now\n> GistBulkDeleteResult is local to single gistbulkdelete call or\n> gistvacuumcleanup. So now we are not sharing GistBulkDeleteResult,\n> across the calls so I am not sure how it will be called twice for the\n> same gist_stats? I might be missing something here?\n\nYes, you are right, sorry for the noise.\nCurrently we are doing both gistvacuumscan() and gistvacuum_delete_empty_pages() in both gistbulkdelete() and gistvacuumcleanup(). Is it supposed to be so? 
Functions gistbulkdelete() and gistvacuumcleanup() look very similar and share some comments. This is what triggered my attention.\n\nThanks!\n\n--\nAndrey Borodin\nOpen source RDBMS development team leader\nYandex.Cloud\n\n\n\n", "msg_date": "Mon, 21 Oct 2019 11:27:58 +0200", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Mon, Oct 21, 2019 at 2:58 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n>\n>\n> > 21 окт. 2019 г., в 11:12, Dilip Kumar <dilipbalaut@gmail.com> написал(а):\n> >\n> > On Mon, Oct 21, 2019 at 2:30 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >>\n> >> I've took a look into the patch, and cannot understand one simple thing...\n> >> We should not call gistvacuum_delete_empty_pages() for same gist_stats twice.\n> >> Another way once the function is called we should somehow update or zero empty_leaf_set.\n> >> Does this invariant hold in your patch?\n> >>\n> > Thanks for looking into the patch. With this patch now\n> > GistBulkDeleteResult is local to single gistbulkdelete call or\n> > gistvacuumcleanup. So now we are not sharing GistBulkDeleteResult,\n> > across the calls so I am not sure how it will be called twice for the\n> > same gist_stats? I might be missing something here?\n>\n> Yes, you are right, sorry for the noise.\n> Currently we are doing both gistvacuumscan() and gistvacuum_delete_empty_pages() in both gistbulkdelete() and gistvacuumcleanup(). Is it supposed to be so?\n\nThere was an issue discussed in parallel vacuum thread[1], and for\nsolving that it has been discussed in this thread[2] that we can\ndelete empty pages in bulkdelete phase itself. But, that does not\nmean that we can remove that from the gistvacuumcleanup phase.\nBecause if the gistbulkdelete is not at all called in the vacuum pass\nthen gistvacuumcleanup, will perform both gistvacuumscan and\ngistvacuum_delete_empty_pages. 
In short, In whichever pass, we detect\nthe empty page in the same pass we delete the empty page.\n\nFunctions gistbulkdelete() and gistvacuumcleanup() look very similar\nand share some comments. This is what triggered my attention.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JEQ2y3uNucNopDjK8pse6xSe5%3D_oknoWfRQvAF%3DVqsBA%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/69EF7B88-F3E7-4E09-824D-694CF39E5683%40iki.fi\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Oct 2019 16:05:11 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Fri, Oct 18, 2019 at 4:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I have prepared a first version of the patch. Currently, I am\n> performing an empty page deletion for all the cases.\n>\n\nFew comments:\n----------------------\n1.\n-/*\n- * State kept across vacuum stages.\n- */\n typedef struct\n {\n- IndexBulkDeleteResult stats; /* must be first */\n+ IndexBulkDeleteResult *stats; /* kept across vacuum stages. */\n\n /*\n- * These are used to memorize all internal and empty leaf pages in the 1st\n- * vacuum stage. They are used in the 2nd stage, to delete all the empty\n- * pages.\n+ * These are used to memorize all internal and empty leaf pages. 
They are\n+ * used for deleting all the empty pages.\n */\n IntegerSet *internal_page_set;\n IntegerSet *empty_leaf_set;\n\nNow, if we don't want to share the remaining stats across\ngistbulkdelete and gistvacuumcleanup, isn't it better to keep the\ninformation of internal and empty leaf pages as part of GistVacState?\nAlso, I think it is better to call gistvacuum_delete_empty_pages from\nfunction gistvacuumscan as that will avoid it calling from multiple\nplaces.\n\n2.\n- gist_stats->page_set_context = NULL;\n- gist_stats->internal_page_set = NULL;\n- gist_stats->empty_leaf_set = NULL;\n\nWhy have you removed this initialization?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Oct 2019 09:09:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Tue, Oct 22, 2019 at 9:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 18, 2019 at 4:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I have prepared a first version of the patch. Currently, I am\n> > performing an empty page deletion for all the cases.\n> >\n>\n> Few comments:\n> ----------------------\n> 1.\n> -/*\n> - * State kept across vacuum stages.\n> - */\n> typedef struct\n> {\n> - IndexBulkDeleteResult stats; /* must be first */\n> + IndexBulkDeleteResult *stats; /* kept across vacuum stages. */\n>\n> /*\n> - * These are used to memorize all internal and empty leaf pages in the 1st\n> - * vacuum stage. They are used in the 2nd stage, to delete all the empty\n> - * pages.\n> + * These are used to memorize all internal and empty leaf pages. 
They are\n> + * used for deleting all the empty pages.\n> */\n> IntegerSet *internal_page_set;\n> IntegerSet *empty_leaf_set;\n>\n> Now, if we don't want to share the remaining stats across\n> gistbulkdelete and gistvacuumcleanup, isn't it better to keep the\n> information of internal and empty leaf pages as part of GistVacState?\n\nBasically, only IndexBulkDeleteResult is now shared across the stage\nso we can move all members to GistVacState and completely get rid of\nGistBulkDeleteResult?\n\nIndexBulkDeleteResult *stats\nIntegerSet *internal_page_set;\nIntegerSet *empty_leaf_set;\nMemoryContext page_set_context;\n\n\n> Also, I think it is better to call gistvacuum_delete_empty_pages from\n> function gistvacuumscan as that will avoid it calling from multiple\n> places.\nYeah we can do that.\n>\n> 2.\n> - gist_stats->page_set_context = NULL;\n> - gist_stats->internal_page_set = NULL;\n> - gist_stats->empty_leaf_set = NULL;\n>\n> Why have you removed this initialization?\nThis was post-cleanup reset since we were returning the gist_stats so\nit was better to clean up but now we are not returning it out so I\nthough we don't need to clean this. But, I think now we can free the\nmemory gist_stats itself.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Oct 2019 10:50:37 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Tue, Oct 22, 2019 at 10:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 22, 2019 at 9:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Oct 18, 2019 at 4:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I have prepared a first version of the patch. 
Currently, I am\n> > > performing an empty page deletion for all the cases.\n> > >\n> >\n> > Few comments:\n> > ----------------------\n> > 1.\n> > -/*\n> > - * State kept across vacuum stages.\n> > - */\n> > typedef struct\n> > {\n> > - IndexBulkDeleteResult stats; /* must be first */\n> > + IndexBulkDeleteResult *stats; /* kept across vacuum stages. */\n> >\n> > /*\n> > - * These are used to memorize all internal and empty leaf pages in the 1st\n> > - * vacuum stage. They are used in the 2nd stage, to delete all the empty\n> > - * pages.\n> > + * These are used to memorize all internal and empty leaf pages. They are\n> > + * used for deleting all the empty pages.\n> > */\n> > IntegerSet *internal_page_set;\n> > IntegerSet *empty_leaf_set;\n> >\n> > Now, if we don't want to share the remaining stats across\n> > gistbulkdelete and gistvacuumcleanup, isn't it better to keep the\n> > information of internal and empty leaf pages as part of GistVacState?\n>\n> Basically, only IndexBulkDeleteResult is now shared across the stage\n> so we can move all members to GistVacState and completely get rid of\n> GistBulkDeleteResult?\n>\n\nYes, something like that would be better. Let's try and see how it comes out.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Oct 2019 10:53:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Tue, Oct 22, 2019 at 10:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 22, 2019 at 10:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Oct 22, 2019 at 9:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Oct 18, 2019 at 4:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > I have prepared a first version of the patch. 
Currently, I am\n> > > > performing an empty page deletion for all the cases.\n> > > >\n> > >\n> > > Few comments:\n> > > ----------------------\n> > > 1.\n> > > -/*\n> > > - * State kept across vacuum stages.\n> > > - */\n> > > typedef struct\n> > > {\n> > > - IndexBulkDeleteResult stats; /* must be first */\n> > > + IndexBulkDeleteResult *stats; /* kept across vacuum stages. */\n> > >\n> > > /*\n> > > - * These are used to memorize all internal and empty leaf pages in the 1st\n> > > - * vacuum stage. They are used in the 2nd stage, to delete all the empty\n> > > - * pages.\n> > > + * These are used to memorize all internal and empty leaf pages. They are\n> > > + * used for deleting all the empty pages.\n> > > */\n> > > IntegerSet *internal_page_set;\n> > > IntegerSet *empty_leaf_set;\n> > >\n> > > Now, if we don't want to share the remaining stats across\n> > > gistbulkdelete and gistvacuumcleanup, isn't it better to keep the\n> > > information of internal and empty leaf pages as part of GistVacState?\n> >\n> > Basically, only IndexBulkDeleteResult is now shared across the stage\n> > so we can move all members to GistVacState and completely get rid of\n> > GistBulkDeleteResult?\n> >\n>\n> Yes, something like that would be better. Let's try and see how it comes out.\nI have modified as we discussed. 
Please take a look.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Oct 2019 14:17:37 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Tue, Oct 22, 2019 at 2:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 22, 2019 at 10:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > Basically, only IndexBulkDeleteResult is now shared across the stage\n> > > so we can move all members to GistVacState and completely get rid of\n> > > GistBulkDeleteResult?\n> > >\n> >\n> > Yes, something like that would be better. Let's try and see how it comes out.\n> I have modified as we discussed. Please take a look.\n>\n\nThanks, I haven't reviewed this yet, but it seems to be on the right\nlines. Sawada-San, can you please prepare the next version of the\nparallel vacuum patch on top of this patch and enable parallel vacuum\nfor Gist indexes? We can do the review of this patch in detail once\nthe parallel vacuum patch is in better shape as without that it\nwouldn't make sense to commit this patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 23 Oct 2019 16:44:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Wed, Oct 23, 2019 at 8:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 22, 2019 at 2:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Oct 22, 2019 at 10:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > Basically, only IndexBulkDeleteResult is now shared across the stage\n> > > > so we can move all members to GistVacState and completely get rid of\n> > > > GistBulkDeleteResult?\n> > > >\n> > >\n> > > Yes, something like that would be better. 
Let's try and see how it comes out.\n> > I have modified as we discussed. Please take a look.\n> >\n>\n> Thanks, I haven't reviewed this yet, but it seems to be on the right\n> lines. Sawada-San, can you please prepare the next version of the\n> parallel vacuum patch on top of this patch and enable parallel vacuum\n> for Gist indexes?\n\nYeah I've sent the latest patch set that is built on top of this\npatch[1]. BTW I looked at this patch briefly but it looks good to me.\n\n[1] https://www.postgresql.org/message-id/CAD21AoBMo9dr_QmhT%3DdKh7fmiq7tpx%2ByLHR8nw9i5NZ-SgtaVg%40mail.gmail.com\n\nRegards,\n\n--\nMasahiko Sawada\n\n\n", "msg_date": "Sat, 26 Oct 2019 00:52:20 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Fri, Oct 25, 2019 at 9:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 23, 2019 at 8:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 22, 2019 at 2:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 22, 2019 at 10:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I have modified as we discussed. Please take a look.\n> > >\n> >\n> > Thanks, I haven't reviewed this yet, but it seems to be on the right\n> > lines. Sawada-San, can you please prepare the next version of the\n> > parallel vacuum patch on top of this patch and enable parallel vacuum\n> > for Gist indexes?\n>\n> Yeah I've sent the latest patch set that is built on top of this\n> patch[1]. 
BTW I looked at this patch briefly but it looks good to me.\n>\n\nToday, I have looked at this patch and found a few things that need to\nbe changed:\n\n1.\n static void gistvacuum_delete_empty_pages(IndexVacuumInfo *info,\n- GistBulkDeleteResult *stats);\n-static bool gistdeletepage(IndexVacuumInfo *info, GistBulkDeleteResult *stats,\n+ GistVacState *stats);\n\nI think stats is not a good name for GistVacState. How about vstate?\n\n2.\n+ /* we don't need the internal and empty page sets anymore */\n+ MemoryContextDelete(vstate.page_set_context);\n\nAfter memory context delete, we can reset this and other related\nvariables as we were doing without the patch.\n\n3. There are a couple of places in code (like comments, README) that\nmentions the deletion of empty pages in the second stage of the\nvacuum. We should change all such places.\n\nI have modified the patch for the above points and additionally ran\npgindent. Let me know what you think about the attached patch?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 9 Dec 2019 14:27:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Mon, Dec 9, 2019 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I have modified the patch for the above points and additionally ran\n> pgindent. 
Let me know what you think about the attached patch?\n>\n\nA new version with a slightly modified commit message.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 9 Dec 2019 14:37:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Mon, Dec 9, 2019 at 2:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 9, 2019 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I have modified the patch for the above points and additionally ran\n> > pgindent. Let me know what you think about the attached patch?\n> >\n>\n> A new version with a slightly modified commit message.\n\nYour changes look fine to me. Thanks!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Dec 2019 13:27:20 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Mon, 9 Dec 2019 at 14:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 9, 2019 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> >\n> > I have modified the patch for the above points and additionally ran\n> > pgindent. Let me know what you think about the attached patch?\n> >\n>\n> A new version with a slightly modified commit message.\n\nI reviewed v4 patch and below is the one review comment:\n\n+ * These are used to memorize all internal and empty leaf pages. They\nare\n+ * used for deleting all the empty pages.\n */\nAfter dot, there should be 2 spaces. 
Earlier, there was 2 spaces.\n\nOther than that patch looks fine to me.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 9 Jan 2020 16:41:43 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" }, { "msg_contents": "On Thu, Jan 9, 2020 at 4:41 PM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:\n>\n> On Mon, 9 Dec 2019 at 14:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 9, 2019 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I have modified the patch for the above points and additionally ran\n> > > pgindent. Let me know what you think about the attached patch?\n> > >\n> >\n> > A new version with a slightly modified commit message.\n>\n> I reviewed v4 patch and below is the one review comment:\n>\n> + * These are used to memorize all internal and empty leaf pages. They are\n> + * used for deleting all the empty pages.\n> */\n> After dot, there should be 2 spaces. Earlier, there was 2 spaces.\n>\n> Other than that patch looks fine to me.\n>\nThanks for the comment. 
Amit has already taken care of this before pushing it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Jan 2020 08:55:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions/Observations related to Gist vacuum" } ]
[ { "msg_contents": "I was mystified by this comment in Makefile.shlib:\n\n# We need several not-quite-identical variants of .DEF files to build\n# DLLs for Windows. These are made from the single source file\n# exports.txt. Since we can't assume that Windows boxes will have\n# sed, the .DEF files are always built and included in distribution\n# tarballs.\n\nifneq (,$(SHLIB_EXPORTS))\ndistprep: lib$(NAME)dll.def lib$(NAME)ddll.def\n...\n\nThis doesn't make much sense (anymore?) since MinGW surely has sed and\nMSVC doesn't use this (and has Perl). I think this is a leftover from\nvarious ancient client-only ad-hoc Windows build provisions (those\nwin32.mak files we used to have around). Also, the ddll.def (debug\nbuild) isn't used by anything anymore AFAICT.\n\nI think we can clean this up and just have the regular ddl.def built\nnormally at build time if required.\n\nDoes anyone know more about this?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 15 Oct 2019 09:00:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Clean up MinGW def file generation" }, { "msg_contents": "On Tue, Oct 15, 2019 at 09:00:23AM +0200, Peter Eisentraut wrote:\n> This doesn't make much sense (anymore?) since MinGW surely has sed and\n> MSVC doesn't use this (and has Perl). I think this is a leftover from\n> various ancient client-only ad-hoc Windows build provisions (those\n> win32.mak files we used to have around). 
Also, the ddll.def (debug\n> build) isn't used by anything anymore AFAICT.\n\nsed is present in MinGW for some time, at least 2009 if you look here:\nhttps://sourceforge.net/projects/mingw/files/MSYS/Base/sed/\nCygwin also includes sed, so this cleanup makes sense.\n\n> I think we can clean this up and just have the regular ddl.def built\n> normally at build time if required.\n> \n> Does anyone know more about this?\n\nThis comes from here, but I cannot see a thread about this topic\naround this date:\ncommit: a1d5d8574751d62a039d8ceb44329ee7c637196a\nauthor: Peter Eisentraut <peter_e@gmx.net>\ndate: Tue, 26 Feb 2008 06:41:24 +0000\nRefactor the code that creates the shared library export files to appear\nonly once in Makefile.shlib and not in four copies.\n--\nMichael", "msg_date": "Thu, 17 Oct 2019 16:18:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Clean up MinGW def file generation" }, { "msg_contents": "On 2019-Oct-17, Michael Paquier wrote:\n\n> On Tue, Oct 15, 2019 at 09:00:23AM +0200, Peter Eisentraut wrote:\n\n> > I think we can clean this up and just have the regular ddl.def built\n> > normally at build time if required.\n> > \n> > Does anyone know more about this?\n> \n> This comes from here, but I cannot see a thread about this topic\n> around this date:\n> commit: a1d5d8574751d62a039d8ceb44329ee7c637196a\n> author: Peter Eisentraut <peter_e@gmx.net>\n> date: Tue, 26 Feb 2008 06:41:24 +0000\n> Refactor the code that creates the shared library export files to appear\n> only once in Makefile.shlib and not in four copies.\n\nWell, yes, but that code originates from much earlier. For example\n2a63c1602d9d (Tom Lane, Oct. 2004) is the one that created the libpq\nones. 
But even that ancient one seems to be just refactoring some stuff\nthat was already there, namely something that seems to have been created\nby commit 53cd7cd8a916:\n\ncommit 53cd7cd8a9168d4b2e2feb52129336429cc99b98\nAuthor: Bruce Momjian <bruce@momjian.us>\nAuthorDate: Tue Mar 9 04:53:37 2004 +0000\nCommitDate: Tue Mar 9 04:53:37 2004 +0000\n\n Make a separate win32 debug DLL along with the non-debug version:\n \n Currently, src/interfaces/libpq/win32.mak builds a statically-linked\n library \"libpq.lib\", a debug dll \"libpq.dll\", import library for the\n debug dll \"libpqdll.lib\", a release dll \"libpq.dll\", import library for\n the release dll \"libpqdll.lib\". To avoid naming clashes, I would make\n the debug dll and import libraries \"libpqd.dll\" and \"libpqddll.lib\".\n \n Basically, the debug build uses the cl flags: \"/MDd /D _DEBUG\", and the\n release build uses the cl flags \"/MD /D NDEBUG\". Usually the debug\n build has a \"D\" suffix on the file name, so for example:\n \n libpqd.dll libpq, debug build\n libpqd.lib libpq, debug build, import library\n libpq.dll libpq, release build\n libpq.lib libpq, release build, import library\n \n David Turner\n\nThis stuff was used by win32.mak, but I don't know if that tells anyone\nanything.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 18 Oct 2019 09:07:58 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Clean up MinGW def file generation" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Oct-17, Michael Paquier wrote:\n>> On Tue, Oct 15, 2019 at 09:00:23AM +0200, Peter Eisentraut wrote:\n>>> I think we can clean this up and just have the regular ddl.def built\n>>> normally at build time if required.\n>>> Does anyone know more about this?\n\n> Well, yes, but that code originates from much earlier. 
For example\n> 2a63c1602d9d (Tom Lane, Oct. 2004) is the one that created the libpq\n> ones.\n\nYeah, the comment that Peter complained about is mine. I believe the\ndesire to avoid depending on \"sed\" at build time was focused on our\nold support for building libpq with Borland C (and not much else).\nSince this makefile infrastructure is now only used for MinGW, I agree\nwe ought to be able to quit shipping those files in tarballs.\n\nI think there could be some .gitignore cleanup done along with this.\nNotably, I see exclusions for /exports.list in several places, but no\nother references to that name --- isn't that an intermediate file that\nwe used to generate while creating these files?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Oct 2019 15:00:14 +0200", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Clean up MinGW def file generation" }, { "msg_contents": "On 2019-10-18 15:00, Tom Lane wrote:\n> Yeah, the comment that Peter complained about is mine. I believe the\n> desire to avoid depending on \"sed\" at build time was focused on our\n> old support for building libpq with Borland C (and not much else).\n> Since this makefile infrastructure is now only used for MinGW, I agree\n> we ought to be able to quit shipping those files in tarballs.\n\nYeah, it all makes sense now. I have committed my patch now.\n\n> I think there could be some .gitignore cleanup done along with this.\n> Notably, I see exclusions for /exports.list in several places, but no\n> other references to that name --- isn't that an intermediate file that\n> we used to generate while creating these files?\n\nexports.list is built from exports.txt on non-Windows platforms and\nAFAICT it is not cleaned up as an intermediate file. 
So I think the\ncurrent arrangement is correct.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 20 Oct 2019 10:26:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Clean up MinGW def file generation" }, { "msg_contents": "On 2019-10-20 10:26, Peter Eisentraut wrote:\n> On 2019-10-18 15:00, Tom Lane wrote:\n>> Yeah, the comment that Peter complained about is mine. I believe the\n>> desire to avoid depending on \"sed\" at build time was focused on our\n>> old support for building libpq with Borland C (and not much else).\n>> Since this makefile infrastructure is now only used for MinGW, I agree\n>> we ought to be able to quit shipping those files in tarballs.\n> \n> Yeah, it all makes sense now. I have committed my patch now.\n\nVery related, I believe the file libpq-dist.rc is also obsolete; see\nattached patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 21 Oct 2019 00:07:02 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Clean up MinGW def file generation" }, { "msg_contents": "On 2019-10-21 00:07, Peter Eisentraut wrote:\n> On 2019-10-20 10:26, Peter Eisentraut wrote:\n>> On 2019-10-18 15:00, Tom Lane wrote:\n>>> Yeah, the comment that Peter complained about is mine. I believe the\n>>> desire to avoid depending on \"sed\" at build time was focused on our\n>>> old support for building libpq with Borland C (and not much else).\n>>> Since this makefile infrastructure is now only used for MinGW, I agree\n>>> we ought to be able to quit shipping those files in tarballs.\n>>\n>> Yeah, it all makes sense now. 
I have committed my patch now.\n> \n> Very related, I believe the file libpq-dist.rc is also obsolete; see\n> attached patch.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 23 Oct 2019 07:13:20 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Clean up MinGW def file generation" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16059\nLogged by: Steven Winfield\nEmail address: steven.winfield@cantabcapital.com\nPostgreSQL version: 11.5\nOperating system: Linux\nDescription: \n\nAs per the documentation[1], the COPY command requires the output filename\nto be single-quoted.\r\n\r\nHowever, when using psql, a partial COPY command such as this...\r\nCOPY pg_catalog.pg_class TO '/usr\r\n...will, on hitting TAB, be converted to this...\r\nCOPY pg_catalog.pg_class TO /usr/\r\n...requiring the user to move the cursor back to re-insert the single quote\nbefore finishing the command and executing.\r\n\r\nThe issue seems to be somewhere around here[2], where complete_from_files[3]\nis used to suggest replacements - that function strips quotes from the\nexisting (partial) filename but doesn't put them back unless quote_if_needed\nis true (which I guess it isn't, unless there is a space in the filename for\nexample).\r\n\r\nNote that using the \\copy command instead works fine, as filenames do not\nneed to be quoted in that case.\r\n\r\n[1] https://www.postgresql.org/docs/11/sql-copy.html\r\n[2]\nhttps://github.com/postgres/postgres/blob/4b011cad272e997935eb8d80ab741a40b395fdf5/src/bin/psql/tab-complete.c#L2234\r\n[3]\nhttps://github.com/postgres/postgres/blob/4b011cad272e997935eb8d80ab741a40b395fdf5/src/bin/psql/tab-complete.c#L4350", "msg_date": "Tue, 15 Oct 2019 13:11:29 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "Steven:\n\nOn Tue, Oct 15, 2019 at 3:12 PM PG Bug reporting form\n<noreply@postgresql.org> wrote:\n> As per the documentation[1], the COPY command requires the output filename\n> to be single-quoted.\n> However, when using psql, a partial COPY command such as this...\n> COPY pg_catalog.pg_class TO '/usr\n> ...will, on hitting TAB, be 
converted to this...\n> COPY pg_catalog.pg_class TO /usr/\n> ...requiring the user to move the cursor back to re-insert the single quote\n> before finishing the command and executing.\n\n> The issue seems to be somewhere around here[2], where complete_from_files[3]\n> is used to suggest replacements - that function strips quotes from the\n> existing (partial) filename but doesn't put them back unless quote_if_needed\n> is true (which I guess it isn't, unless there is a space in the filename for\n> example).\n\nNot saying it's not a bug, but bear in mind psql CAN NOT correctly\ncomplete filenames for SERVER SIDE copy. You may be running in the\nsame machine, but even with this and using unix domain sockets it's\ndifficult to know what is at the other end of the socket ( not sure if\nyou can always know it even if you are root, and you can have things\nlike psql connecting through unix domain socket to pgbouncer which\nforwards to I-do-not-know-where (.com) .\n\n> Note that using the \\copy command instead works fine, as filenames do not\n> need to be quoted in that case.\n\nThey are different beasts, in \\copy you are not completing an sql\ncommand to send to the server, you are completing a command to psql (\nwhich it implements using an sql command plus some magic ).\n\nFrancisco Olarte.\n\n\n", "msg_date": "Tue, 15 Oct 2019 15:38:15 +0200", "msg_from": "Francisco Olarte <folarte@peoplecall.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "> Not saying it's not a bug, but bear in mind psql CAN NOT correctly\r\n> complete filenames for SERVER SIDE copy. 
You may be running in the same\r\n> machine, but even with this and using unix domain sockets it's difficult\r\n> to know what is at the other end of the socket ( not sure if you can\r\n> always know it even if you are root, and you can have things like psql\r\n> connecting through unix domain socket to pgbouncer which forwards to I-do-\r\n> not-know-where (.com) .\r\n\r\nThat's very true, but at some point the decision was made to tab-complete COPY commands using information from the local filesystem, since that might be useful.\r\nI doubt there was ever an intention to take an otherwise-well-formed (partial) COPY command and make it invalid by removing a single quote in the middle of it!\r\n\r\n> They are different beasts, in \copy you are not completing an sql command\r\n> to send to the server, you are completing a command to psql ( which it\r\n> implements using an sql command plus some magic ).\r\n\r\nYep, I'm aware of that - I'm just pointing out the difference in syntax between the two commands, which I had always believed to be near-drop-in replacements for each other syntax-wise. 
\r\nIt's also relevant because the same tab-completion code is used for both \\copy and COPY and currently can't distinguish between them.\r\n\r\nPerhaps complete_from_files() needs an extra argument to specify the quoting behaviour.\r\n\r\nSteven\r\n", "msg_date": "Tue, 15 Oct 2019 13:52:52 +0000", "msg_from": "Steven Winfield <Steven.Winfield@cantabcapital.com>", "msg_from_op": false, "msg_subject": "RE: BUG #16059: Tab-completion of filenames in COPY commands\n removes required quotes" }, { "msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> As per the documentation[1], the COPY command requires the output filename\n> to be single-quoted.\n\n> However, when using psql, a partial COPY command such as this...\n> COPY pg_catalog.pg_class TO '/usr\n> ...will, on hitting TAB, be converted to this...\n> COPY pg_catalog.pg_class TO /usr/\n> ...requiring the user to move the cursor back to re-insert the single quote\n> before finishing the command and executing.\n\n> The issue seems to be somewhere around here[2], where complete_from_files[3]\n> is used to suggest replacements - that function strips quotes from the\n> existing (partial) filename but doesn't put them back unless quote_if_needed\n> is true (which I guess it isn't, unless there is a space in the filename for\n> example).\n\n> Note that using the \\copy command instead works fine, as filenames do not\n> need to be quoted in that case.\n\nYeah, it seems like a bad idea to override the user's choice to write\na quote, even if one is not formally necessary. 
I propose the attached.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 02 Nov 2019 17:14:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": ">> As per the documentation[1], the COPY command requires the output filename\n>> to be single-quoted.\n...\n>> Note that using the \\copy command instead works fine, as filenames do not\n>> need to be quoted in that case.\n\n>Yeah, it seems like a bad idea to override the user's choice to write\n>a quote, even if one is not formally necessary. I propose the attached.\n\nThanks for taking a look at this. It will save me (and I hope many others) from much frustration!\n\nBut, to be clear, for the COPY command the single quotes *are* formally necessary, so at the moment tab-completion is turning a valid (partial) invocation into an invalid one.\n\nBest,\nSteve.\n\n\n\n\n", "msg_date": "Sat, 2 Nov 2019 22:00:30 +0000", "msg_from": "Steven Winfield <Steven.Winfield@cantabcapital.com>", "msg_from_op": false, "msg_subject": "RE: BUG #16059: Tab-completion of filenames in COPY commands\n removes required quotes" }, { "msg_contents": "[ redirecting to -hackers ]\n\nI wrote:\n> Yeah, it seems like a bad idea to override the user's choice to write\n> a quote, even if one is not formally necessary. I propose the attached.\n\nAfter further experimentation, I concluded that that patch is a bad idea;\nit breaks a lot of cases that used to work before. It turns out that\nReadline has a bunch of behaviors for filename completion that occur\noutside of the rl_filename_completion_function function proper, and they\nall assume that what's passed back from that function is plain unquoted\nfilename(s). Notably, completion of a path that includes directory names\njust doesn't work well at all anymore with that patch ... 
nor did it\nwork well before, if the path contained characters that we thought we\nshould quote.\n\nThe right way to do things, seemingly, is to let\nrl_filename_completion_function be invoked without any interference,\nand instead put our SQL-aware quoting/dequoting logic into the hooks\nReadline provides for that purpose, rl_filename_quoting_function and\nrl_filename_dequoting_function. (It appears that somebody tried to do\nthat before, way back around the turn of the century, but gave up on it.\nToo bad, because it's the right thing.)\n\nOf course, this only works if we have those hooks :-(. So far as\nI can tell, all libreadline variants that might still exist in the wild\ndo have them; but libedit doesn't, at least not in the version Apple\nis shipping. Hence, what the attached patch does is to make configure\nprobe for the existence of the hook variables; if they're not there,\nfall back to what I proposed previously. The behavior on libedit is\na bit less nice than one could wish, but it's better than what we have\nnow.\n\nI've tested this on the oldest and newest readline versions I have at\nhand (4.2a and 8.0), as well as the oldest and newest versions of\nApple's libedit derivative; but I haven't tried it on whatever the\nBSDen are shipping as libedit.\n\nThere's enough moving parts here that this probably needs to go through\na full review cycle, so I'll add it to the next commitfest. Some notes\nfor anybody wanting to review:\n\n* The patch now always quotes completed filenames, so quote_if_needed()\nis misnamed and overcomplicated for this use-case. I left the extra\ngenerality in place for possible future use. On the other hand, this\nis the *only* use-case, so you could also argue that we should just\nsimplify the function's API. 
I have no strong opinion about that.\n\n* In addition to the directly-related-to-filenames changes, it turns out\nto be necessary to set rl_completer_quote_characters to include at least\nsingle quotes, else Readline doesn't understand that a quoted filename\nis quoted. The patch sets it to include double quotes as well. This\nis probably the aspect of the patch that most needs testing. The general\neffect of this change is that Readline now understands that quoted\nstrings are single entities, plus it will try to complete the contents\nof a string if you ask it. The side-effects I've noticed seem to be\nall beneficial -- for example, if you do\n\n\tselect * from \"foo<TAB>\n\nit now correctly completes table names starting with \"foo\", which it\ndid not before. But there might be some bad effects as well. Also,\nalthough libedit has this variable, setting it doesn't have that effect\nthere; I've not really found that the variable does anything at all there.\n\n* The behavior of quote_file_name is directly modeled on what Readline's\ndefault implementation rl_quote_filename does, except that it uses\nSQL-aware quoting rules. The business of passing back the final quote\nmark separately is their idea.\n\n* An example of the kind of case that's interesting is that if you type\n\n\\lo_import /usr/i<TAB>\n\nthen what you get on readline (with this patch) is\n\n\\lo_import '/usr/include/\n\nwhile libedit produces\n\n\\lo_import '/usr/include' (with a space after the trailing quote)\n\nThat is, readline knows that the completion-so-far is a directory and\nassumes that's not all you want, whereas libedit doesn't know that.\nSo you typically now have to back up two characters, type slash, and\nresume completing. That's kind of a pain, but I'm not sure we can\nmake it better very easily. Anyway, libedit did something close to\nthat before, too.\n\n* There are also some interesting cases around special characters in\nthe filename. 
It seems to work well for embedded spaces, not so well\nfor embedded single quotes, though that may well vary across readline\nversions. Again, there seems to be a limited amount we can do about\nthat, given how much of the relevant logic is buried where we can't\nmodify it. And I'm not sure how much I care about that case, anyway.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 03 Nov 2019 17:40:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "All in all, after testing this for a bit, I think this patch is a clear\nimprovement over the status quo. Thanks for working on this.\n\nI suggest to indicate in complete_from_files where to find the hook\nfunctions it refers to (say \"see quote_file_name, below\", or something.)\n\nI tested this on libreadline 7.x (where #define\nHAVE_RL_FILENAME_COMPLETION_FUNCTION 1). I noticed that if I enter a\nfilename that doesn't exist and then <tab>, it adds a closing quote.\nBash manages to do nothing somehow, which is the desired behavior IMO.\n\n(I tried to make sense of these hooks, but couldn't readily and I don't\nhave the readline documentation installed, so I have no opinion on\nwhether this problem is fixable. Maybe the trick is to let\nquote_if_needed know that this is a completion for a filename, and have\nit test for file existence?)\n\nAlso, some commands such as \cd want a directory rather than just any\nfile. Not sure rl_filename_completion_function has a way to pass this\ndown. (This point is a bit outside this patch's charter perhaps, but\nmay as well think about it since we're here ...)\n\nI don't quite understand why a readline library that doesn't have\nrl_filename_completion_function is known to have a\nfilename_completion_function, ie. 
this bit \n\n#ifdef HAVE_RL_FILENAME_COMPLETION_FUNCTION\n#define filename_completion_function rl_filename_completion_function\n#else\n/* decl missing in some header files, but function exists anyway */\nextern char *filename_completion_function();\n#endif\n\nWhat's going on here? How does this ever work?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Dec 2019 22:55:52 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "On Sun, Nov 3, 2019 at 5:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Of course, this only works if we have those hooks :-(. So far as\n> I can tell, all libreadline variants that might still exist in the wild\n> do have them; but libedit doesn't, at least not in the version Apple\n> is shipping. Hence, what the attached patch does is to make configure\n> probe for the existence of the hook variables; if they're not there,\n> fall back to what I proposed previously.\n\nAre people still compiling against libedit and then redirecting to\nlibreadline at runtime? I seem to recall some discussion about this\nbeing a thing, many years ago. 
If it were being done it would be\nadvantageous to have the checks be runtime rather than compile-time,\nalthough I guess that would probably be tough to make work.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 11 Dec 2019 09:17:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "On 2019-Dec-11, Robert Haas wrote:\n\n> On Sun, Nov 3, 2019 at 5:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Of course, this only works if we have those hooks :-(. So far as\n> > I can tell, all libreadline variants that might still exist in the wild\n> > do have them; but libedit doesn't, at least not in the version Apple\n> > is shipping. Hence, what the attached patch does is to make configure\n> > probe for the existence of the hook variables; if they're not there,\n> > fall back to what I proposed previously.\n> \n> Are people still compiling against libedit and then redirecting to\n> libreadline at runtime? I seem to recall some discussion about this\n> being a thing, many years ago.\n\nYeah, Debian did that out of licensing concerns. It seems they still do\nthat, based on\nhttps://packages.debian.org/bullseye/postgresql-client-12\n\n> If it were being done it would be\n> advantageous to have the checks be runtime rather than compile-time,\n> although I guess that would probably be tough to make work.\n\nYeah. 
On the other hand, I suppose Debian uses the BSD version of the\nlibraries, not the Apple version, so I think it should be fine?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Dec 2019 12:06:37 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "On 2019-Dec-11, Alvaro Herrera wrote:\n\n> On 2019-Dec-11, Robert Haas wrote:\n\n> > If it were being done it would be\n> > advantageous to have the checks be runtime rather than compile-time,\n> > although I guess that would probably be tough to make work.\n> \n> Yeah. On the other hand, I suppose Debian uses the BSD version of the\n> libraries, not the Apple version, so I think it should be fine?\n\n... actually, grepping libedit's source at\nhttp://deb.debian.org/debian/pool/main/libe/libedit/libedit_3.1-20191025.orig.tar.gz\nthere's no occurrence of rl_filename_quoting_function.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Dec 2019 12:18:35 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Dec-11, Robert Haas wrote:\n>> Are people still compiling against libedit and then redirecting to\n>> libreadline at runtime? I seem to recall some discussion about this\n>> being a thing, many years ago.\n\n> Yeah, Debian did that out of licensing concerns. It seems they still do\n> that, based on\n> https://packages.debian.org/bullseye/postgresql-client-12\n\nI think it's Debian's problem, not ours, if that doesn't work. 
It is\nnot unreasonable for a package to probe existence of a library function\nat configure time. It's up to them to make sure that the headers match\nthe actual library.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Dec 2019 10:52:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "On Wed, Dec 11, 2019 at 10:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think it's Debian's problem, not ours, if that doesn't work. It is\n> not unreasonable for a package to probe existence of a library function\n> at configure time. It's up to them to make sure that the headers match\n> the actual library.\n\nThat seems like an unhelpful attitude. Debian is a mainstream\nplatform, and no doubt feels that they have important reasons for what\nthey are doing.\n\nThat's not to say that I'm against the patch, but I don't believe it's\nright to treat the concerns of mainstream Linux distributions in\nanything less than a serious manner.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 11 Dec 2019 10:57:30 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Dec 11, 2019 at 10:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think it's Debian's problem, not ours, if that doesn't work. It is\n>> not unreasonable for a package to probe existence of a library function\n>> at configure time. It's up to them to make sure that the headers match\n>> the actual library.\n\n> That seems like an unhelpful attitude. 
Debian is a mainstream\n> platform, and no doubt feels that they have important reasons for what\n> they are doing.\n\nNonetheless, if they're doing that, it's *their* bug not ours when\nthe run-time library fails to match what was supplied to compile\nagainst. I think it would fall to them to patch either libedit\nor readline to make those two agree. This is not different in any\nway from the expectation that a platform supply a libc whose ABI\nis stable.\n\nIn any case, this discussion is a bit hypothetical isn't it?\nIf I understand correctly, your concern is that the proposed\npatch might fail to take advantage of functionality that actually\nmight be present at runtime. So what? It's no worse than before.\nMore, it's likely that there are other similar losses of functionality\nalready in our code and/or other peoples'. Debian bought into that\ntradeoff, not us.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Dec 2019 11:34:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I tested this on libreadline 7.x (where #define\n> HAVE_RL_FILENAME_COMPLETION_FUNCTION 1). I noticed that if I enter a\n> filename that doesn't exist and then <tab>, it adds a closing quote.\n> Bash manages to do nothing somehow, which is the desired behavior IMO.\n\nHmm. I'll take a look, but I'm not terribly hopeful. I have looked\nbriefly at what Bash does for filename completion, and as I recall\nit was massive, spaghetti-ish, and way too much in bed with various\nimplementation details of libreadline --- they don't pretend to work\nwith libedit. I'm not prepared to go there. 
It's reasonable for Bash\nto expend huge effort on filename completion, because that's such a core\nuse-case for them, but I don't think it deserves as much work in psql.\n\n> I don't quite understand why a readline library that doesn't have\n> rl_filename_completion_function is known to have a\n> filename_completion_function, ie. this bit \n\n> #ifdef HAVE_RL_FILENAME_COMPLETION_FUNCTION\n> #define filename_completion_function rl_filename_completion_function\n> #else\n> /* decl missing in some header files, but function exists anyway */\n> extern char *filename_completion_function();\n> #endif\n\nI think the point is that before rl_filename_completion_function the\nfunction existed but was just called filename_completion_function.\nIt's possible that that's obsolete --- I've not really checked.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Dec 2019 11:43:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "I wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> I don't quite understand why a readline library that doesn't have\n>> rl_filename_completion_function is known to have a\n>> filename_completion_function, ie. this bit \n\n>> #ifdef HAVE_RL_FILENAME_COMPLETION_FUNCTION\n>> #define filename_completion_function rl_filename_completion_function\n>> #else\n>> /* decl missing in some header files, but function exists anyway */\n>> extern char *filename_completion_function();\n>> #endif\n\n> I think the point is that before rl_filename_completion_function the\n> function existed but was just called filename_completion_function.\n> It's possible that that's obsolete --- I've not really checked.\n\nI had a look through the buildfarm results, and it seems that the only\n(non-Windows) animals that don't HAVE_RL_FILENAME_COMPLETION_FUNCTION\nare prairiedog and locust. 
prairiedog is using the libedit that\nApple supplied in its stone-age version of macOS, and I imagine the\nsame can be said of locust, though that's one macOS release newer.\n\nprairiedog's version does define filename_completion_function:\n\n$ grep completion_func /usr/include/readline/readline.h \nextern CPPFunction *rl_attempted_completion_function;\nchar *filename_completion_function(const char *, int);\nchar *username_completion_function(const char *, int);\n\nso the assumption embodied in our code is both correct and necessary\nso far as the current universe of buildfarm critters is concerned.\n\nHaving said that, prairiedog's version of libedit is buggy as hell;\nit generates bogus warnings at every psql exit, for instance.\n\n$ psql postgres\npsql (13devel)\nType \"help\" for help.\n\npostgres=# \\q\ncould not save history to file \"/Users/tgl/.psql_history\": operating system error 0\n$ \n\nIt wouldn't be an unreasonable idea to desupport this version,\nif it allowed additional simplifications in psql beside this\nparticular #ifdef mess. I'm not sure whether any of the\ncontortions in e.g. saveHistory could go away if we did so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Dec 2019 00:49:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "I wrote:\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>>> I don't quite understand why a readline library that doesn't have\n>>> rl_filename_completion_function is known to have a\n>>> filename_completion_function, ie. 
this bit \n\n>>> #ifdef HAVE_RL_FILENAME_COMPLETION_FUNCTION\n>>> #define filename_completion_function rl_filename_completion_function\n>>> #else\n>>> /* decl missing in some header files, but function exists anyway */\n>>> extern char *filename_completion_function();\n>>> #endif\n\n>> I think the point is that before rl_filename_completion_function the\n>> function existed but was just called filename_completion_function.\n>> It's possible that that's obsolete --- I've not really checked.\n\nLooking closer at this, the \"extern\" could be got rid of, I think.\nprairiedog's readline header does have that extern, so it's hard to\nbelieve anybody is still using libedit versions that don't declare it.\n\nA possible further change is to switch the code over to calling\n\"rl_filename_completion_function\", and then invert the sense of\nthis logic, like\n\n/*\n * Ancient versions of libedit provide filename_completion_function()\n * instead of rl_filename_completion_function().\n */\n#ifndef HAVE_RL_FILENAME_COMPLETION_FUNCTION\n#define rl_filename_completion_function filename_completion_function\n#endif\n\nThis would make it easier to compare our code to the readline\ndocumentation, so maybe it's helpful ... or maybe it's just\nchurn. 
Thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Dec 2019 09:42:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "On 2019-Dec-13, Tom Lane wrote:\n\n> I wrote:\n> >> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> >>> #ifdef HAVE_RL_FILENAME_COMPLETION_FUNCTION\n> >>> #define filename_completion_function rl_filename_completion_function\n> >>> #else\n> >>> /* decl missing in some header files, but function exists anyway */\n> >>> extern char *filename_completion_function();\n> >>> #endif\n\n> Looking closer at this, the \"extern\" could be got rid of, I think.\n> prairiedog's readline header does have that extern, so it's hard to\n> believe anybody is still using libedit versions that don't declare it.\n\nAgreed.\n\n> A possible further change is to switch the code over to calling\n> \"rl_filename_completion_function\", and then invert the sense of\n> this logic, like\n> \n> /*\n> * Ancient versions of libedit provide filename_completion_function()\n> * instead of rl_filename_completion_function().\n> */\n> #ifndef HAVE_RL_FILENAME_COMPLETION_FUNCTION\n> #define rl_filename_completion_function filename_completion_function\n> #endif\n> \n> This would make it easier to compare our code to the readline\n> documentation, so maybe it's helpful ... or maybe it's just\n> churn. 
Thoughts?\n\n+1, I think that's clearer.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Dec 2019 12:38:05 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Dec-13, Tom Lane wrote:\n>> A possible further change is to switch the code over to calling\n>> \"rl_filename_completion_function\", and then invert the sense of\n>> this logic, like ...\n\n> +1, I think that's clearer.\n\nOK, I went ahead and pushed that change, since it seems separate\nand uncontroversial. I'll send along a new patch for the main\nchange in a little bit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Dec 2019 11:18:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "I wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Wed, Dec 11, 2019 at 10:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I think it's Debian's problem, not ours, if that doesn't work. It is\n>>> not unreasonable for a package to probe existence of a library function\n>>> at configure time. It's up to them to make sure that the headers match\n>>> the actual library.\n\n>> That seems like an unhelpful attitude. Debian is a mainstream\n>> platform, and no doubt feels that they have important reasons for what\n>> they are doing.\n\nActually, this argument is based on a false premise anyhow. I took\na look into Debian's source package, and AFAICS they are not doing\nanything as weird as a run-time substitution. They are just filling\nthe build environment with libedit-dev not libreadline-dev. 
So that\nis certainly a supported configuration from our side, and if there\nis any header-to-library discrepancy then it's just a simple bug\nin the libedit package.\n\n(Maybe at one time they were doing something weird; I didn't look\nback further than the current sources for PG 12.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Dec 2019 13:55:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I suggest to indicate in complete_from_files where to find the hook\n> functions it refers to (say \"see quote_file_name, below\", or something.)\n\nDone.\n\n> I tested this on libreadline 7.x (where #define\n> HAVE_RL_FILENAME_COMPLETION_FUNCTION 1). I noticed that if I enter a\n> filename that doesn't exist and then <tab>, it adds a closing quote.\n> Bash manages to do nothing somehow, which is the desired behavior IMO.\n\nFixed --- on looking closer, I'd drawn the wrong conclusions from\nlooking at readline's default implementation of the quoting function\n(which seems to be a tad broken, at least in the version I looked at).\nIt turns out that there are some special cases we need to handle if\nwe want it to behave nicely.\n\n> Also, some commands such as \\cd want a directory rather than just any\n> file. Not sure rl_filename_completion_function has a way to pass this\n> down. (This point is a bit outside this patch's charter perhaps, but\n> may as well think about it since we're here ...)\n\nI ended up adding an S_ISDIR stat check in the completion function,\nbecause the desired behavior of terminating a directory name with '/'\n(and no quote) doesn't seem to be possible to get otherwise. So it would\nbe possible to do something different for \\cd, but I am not clear that\nthere's any real advantage. 
You can't really guess if the user wants the\ncurrently completable directory or a subdirectory, so it wouldn't do to\nemit a closing quote.\n\nI've now spent some effort on hacking the libedit code path (i.e. the\none where we don't have the hooks) as well as the libreadline path.\nThis version of the patch seems to behave well on all the following:\n* readline 6.0 (RHEL 6)\n* readline 8.0 (Fedora 30)\n* libedit 3.1 (Debian stretch)\n* whatever libedit Apple is shipping in current macOS\n\nI also tried it on ancient libedits from prairiedog and some other\nold macOS releases. There are cosmetic issues there (e.g. prairiedog\nwants to double the slash after a directory name) but I doubt we care\nenough to fix them. It does compile and more-or-less work.\n\nI noticed along the way that configure's probe for\nrl_completion_append_character fails if we're using <editline/readline.h>,\nbecause that configure macro was never taught to honor\nHAVE_EDITLINE_READLINE_H. This might account for weird behavior on\nlibedit builds, perhaps. Arguably that could be a back-patchable bug fix,\nbut I'm disinclined to do so because it might break peoples' muscle memory\nabout whether a space needs to be typed after a completion; not a great\nidea in a minor release.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 13 Dec 2019 14:16:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "I wrote:\n> [ psql-filename-completion-fixes-2.patch ]\n\nThe cfbot noted this was broken by the removal of pg_config.h.win32,\nso here's a new version rebased over that. 
No changes other than\nadjusting the MSVC autoconf-substitute code.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 27 Dec 2019 14:27:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "On 2019-Dec-27, Tom Lane wrote:\n\n> I wrote:\n> > [ psql-filename-completion-fixes-2.patch ]\n> \n> The cfbot noted this was broken by the removal of pg_config.h.win32,\n> so here's a new version rebased over that. No changes other than\n> adjusting the MSVC autoconf-substitute code.\n\nWorks well for me.\n\nOne minor thing I noticed is that if I enter\n\\copy t from '/tmp/t'<tab>\nand I have files /tmp/t and /tmp/tst then it removes the ending quote.\nI can live with that, since the tab there is pointless anyway.\n(The unpatched version removes *both* quotes. The quotes there are not\nmandatory, but the new version works better when there are files with\nwhitespace in the name.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 27 Dec 2019 17:03:33 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> One minor thing I noticed is that if I enter\n> \\copy t from '/tmp/t'<tab>\n> and I have files /tmp/t and /tmp/tst then it removes the ending quote.\n\nYeah, that bothered me too. [ pokes at it for a bit... ] The\nquote_file_name function isn't passed enough info to handle this\nin a clean way, but we can make it work if we don't mind introducing\nsome more coupling with psql_completion, ie having the latter remember\nwhether the raw input word ended with a quote. As per v4.\n\n> (The unpatched version removes *both* quotes. 
The quotes there are not\n> mandatory, but the new version works better when there are files with\n> whitespace in the name.)\n\nOr when you're completing in a COPY rather than \\copy --- then, removing\nthe quotes is broken whether there's whitespace or not.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 27 Dec 2019 17:36:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "On 2019-11-03 23:40, Tom Lane wrote:\n> * The patch now always quotes completed filenames, so quote_if_needed()\n> is misnamed and overcomplicated for this use-case. I left the extra\n> generality in place for possible future use. On the other hand, this\n> is the*only* use-case, so you could also argue that we should just\n> simplify the function's API. I have no strong opinion about that.\n\nI haven't found an explanation in this thread why it does always quote \nnow. That seems a bit unusual. Is there a reason for this? Can we do \nwithout it?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jan 2020 08:54:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-11-03 23:40, Tom Lane wrote:\n>> * The patch now always quotes completed filenames, so quote_if_needed()\n>> is misnamed and overcomplicated for this use-case. I left the extra\n>> generality in place for possible future use. On the other hand, this\n>> is the*only* use-case, so you could also argue that we should just\n>> simplify the function's API. 
I have no strong opinion about that.\n\n> I haven't found an explanation in this thread why it does always quote \n> now. That seems a bit unusual. Is there a reason for this? Can we do \n> without it?\n\nThe core problem we're trying to solve is stated in the thread title:\nif you do\n\nprompt# copy mytab from 'myfil<TAB>\n\nthen (assuming some completion is available) the current code actually\n*removes* the quote, which is completely wrong. Even if the user\ndidn't type a leading quote, it's better to add one, because COPY\nwon't work otherwise.\n\nIt'd be possible, perhaps, to distinguish between this case and the\ncases in backslash commands, which are okay with omitted quotes\n(for some filenames). I'm not sure that that would be an improvement\nthough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Jan 2020 08:37:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> I haven't found an explanation in this thread why it does always quote \n>> now. That seems a bit unusual. Is there a reason for this? Can we do \n>> without it?\n\n> The core problem we're trying to solve is stated in the thread title:\n> ...\n> It'd be possible, perhaps, to distinguish between this case and the\n> cases in backslash commands, which are okay with omitted quotes\n> (for some filenames). I'm not sure that that would be an improvement\n> though.\n\nI realized that there *is* a good reason to worry about this. Fairly\nrecent versions of libedit will try to backslash-escape the output\nof psql_completion(), as I described over at [1]. 
This is absolutely\ndisastrous if we've emitted a quoted filename: we end up going from,\nsay,\n\t\\lo_import myf<TAB>\nto\n\t\\lo_import \\'myfile\\'\nwhich of course doesn't work at all (psql thinks \\' is in itself a\nbackslash command).\n\nThere isn't much we can do about this in contexts where we must\nquote the filename: not doing so produces an equally broken command.\nHowever, this problem does suggest that we shouldn't force quoting\nif we don't have to, as that just breaks cases we needn't break.\n\nHence, the attached revision only forces quoting in a SQL COPY\ncommand, or if the user already typed a quote.\n\nI also added some regression tests (whee!). They pass for me with a\ncouple different readline and libedit versions, but I have only minimal\nhopes for them passing everywhere without further hacking ... the\nresults of the other thread suggest that pain is to be expected here.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/13708.1578059577%40sss.pgh.pa.us", "msg_date": "Mon, 06 Jan 2020 01:06:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "On 2020-01-06 07:06, Tom Lane wrote:\n> Hence, the attached revision only forces quoting in a SQL COPY\n> command, or if the user already typed a quote.\n\nYes, that seems better. 
Users tend to not like if tab completion messes \nwith what they have already typed unless strictly necessary.\n\nThe file name completion portion of this patch seems to work quite well now.\n\nI have found a weird behavior with identifier quoting, which is not the \nsubject of this patch, but it appears to be affected by it.\n\nThe good thing is that the new code will behave sensibly with\n\nselect * from \"pg_cl<TAB>\n\nwhich the old code didn't offer anything on.\n\nThe problem is that if you have\n\ncreate table \"test\"\"1\" (a int);\n\nthen the new code responds to\n\nselect * from \"te<TAB>\n\nby making that\n\nselect * from \"te\"\n\nwhereas the old code curiously handled that perfectly.\n\nNeither the old nor the new code will produce anything from\n\nselect * from te<TAB>\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Jan 2020 17:34:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> The file name completion portion of this patch seems to work quite well now.\n\nThanks for testing!\n\n> I have found a weird behavior with identifier quoting, which is not the \n> subject of this patch, but it appears to be affected by it.\n\n> The good thing is that the new code will behave sensibly with\n> select * from \"pg_cl<TAB>\n> which the old code didn't offer anything on.\n\nCheck.\n\n> The problem is that if you have\n> create table \"test\"\"1\" (a int);\n> then the new code responds to\n> select * from \"te<TAB>\n> by making that\n> select * from \"te\"\n> whereas the old code curiously handled that perfectly.\n\nRight. 
The underlying cause of both these changes seems to be that\nwhat readline passes to psql_completion is just the contents of the\nstring, now that we've told it that double-quote is a string quoting\ncharacter. So that works fine with 'pg_cl' which didn't need quoting\nanyway. In your second example, because we generate possible matches\nthat are already quoted-if-necessary, we fail to find anything that\nbare 'te' is a prefix of, where before we were considering '\"te' and\nit worked.\n\nI'll think about how to improve this. Given that we're dequoting\nfilenames explicitly anyway, maybe we don't need to tell readline that\ndouble-quote is a quoting character. Another idea is that maybe\nwe ought to refactor things so that identifier dequoting and requoting\nis handled explicitly, similarly to the way filenames work now.\nThat would make the patch a lot bigger though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Jan 2020 15:36:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> I have found a weird behavior with identifier quoting, which is not the \n>> subject of this patch, but it appears to be affected by it.\n\n> I'll think about how to improve this. Given that we're dequoting\n> filenames explicitly anyway, maybe we don't need to tell readline that\n> double-quote is a quoting character. Another idea is that maybe\n> we ought to refactor things so that identifier dequoting and requoting\n> is handled explicitly, similarly to the way filenames work now.\n> That would make the patch a lot bigger though.\n\nOn reflection, it seems like the best bet for the moment is to\nremove double-quote from the rl_completer_quote_characters string,\nwhich should restore all behavior around double-quoted strings to\nwhat it was before. 
(We have to keep single-quote in that string,\nthough, or quoted file names fail entirely.)\n\nThe only impact this has on filename completion is that we're no\nlonger bright enough to convert a double-quoted filename to\nsingle-quoted for you, which would be a nice-to-have but it's\nhardly core functionality.\n\nAt some point it'd be good to undo that and upgrade quoted-identifier\nprocessing, but I don't really want to include such changes in this\npatch. I'm about burned out on tab completion for right now.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 13 Jan 2020 20:38:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "On 2020-01-14 02:38, Tom Lane wrote:\n> I wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> I have found a weird behavior with identifier quoting, which is not the\n>>> subject of this patch, but it appears to be affected by it.\n> \n>> I'll think about how to improve this. Given that we're dequoting\n>> filenames explicitly anyway, maybe we don't need to tell readline that\n>> double-quote is a quoting character. Another idea is that maybe\n>> we ought to refactor things so that identifier dequoting and requoting\n>> is handled explicitly, similarly to the way filenames work now.\n>> That would make the patch a lot bigger though.\n> \n> On reflection, it seems like the best bet for the moment is to\n> remove double-quote from the rl_completer_quote_characters string,\n> which should restore all behavior around double-quoted strings to\n> what it was before. 
(We have to keep single-quote in that string,\n> though, or quoted file names fail entirely.)\n\nThis patch (version 6) looks good to me.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Jan 2020 14:46:08 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-01-14 02:38, Tom Lane wrote:\n>> On reflection, it seems like the best bet for the moment is to\n>> remove double-quote from the rl_completer_quote_characters string,\n>> which should restore all behavior around double-quoted strings to\n>> what it was before. (We have to keep single-quote in that string,\n>> though, or quoted file names fail entirely.)\n\n> This patch (version 6) looks good to me.\n\nThanks for reviewing! Pushed, now we'll see what the buildfarm thinks...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Jan 2020 11:09:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16059: Tab-completion of filenames in COPY commands removes\n required quotes" } ]
[ { "msg_contents": "I have been looking at PostgreSQL's Tuple Queue \r\n(/include/executor/tqueue.h) which provides functionality for queuing \r\ntuples between processes through shm_mq. I am still familiarising myself \r\nwith the bigger picture and TupTableStores. I can see that a copy (not a \r\nreference) of a HeapTuple (obtained from TupleTableSlot or SPI_TupTable \r\netc) can be sent to a queue using shm_mq. Then, another process can \r\nreceive these HeapTuples, probably later placing them in 'output' \r\nTupleTableSlots.\r\n\r\nWhat I am having difficulty understanding is what happens to the \r\nlocation of the HeapTuple as it moves from one TupleTableSlot to the \r\nother as described above. Since there most likely is a reference to a \r\nphysical tuple involved, am I incurring a disk-access overhead with each \r\ncopy of a tuple? This would seem like a massive overhead; how can I keep \r\nsuch overheads to a minimum?\r\n\r\nFurthermore, to what extent can I expect other modules to impact a \r\nqueued HeapTuple? If some external process updates this tuple, when will \r\nI see the change? Would it be a possiblity that the update is not \r\nreflected on the queued HeapTuple but the external process is not \r\nblocked/delayed from updating? In other words, like operating on some \r\nkind of multiple snapshots? When does DBMS logging kick in whilst I am \r\ntransferring a tuple from TupTableStore to another?\r\n\r\nThanks,\r\nTom\r\n", "msg_date": "Wed, 16 Oct 2019 01:24:04 +0000", "msg_from": "Tom Mercha <mercha_t@hotmail.com>", "msg_from_op": true, "msg_subject": "Understanding TupleQueue impact and overheads?" }, { "msg_contents": "Hi,\n\nOn 2019-10-16 01:24:04 +0000, Tom Mercha wrote:\n> What I am having difficulty understanding is what happens to the\n> location of the HeapTuple as it moves from one TupleTableSlot to the\n> other as described above. 
Since there most likely is a reference to a\n> physical tuple involved, am I incurring a disk-access overhead with each\n> copy of a tuple? This would seem like a massive overhead; how can I keep\n> such overheads to a minimum?\n\nThe tuple is fully \"materialized\" on the sending size, due to\n\ttuple = ExecFetchSlotHeapTuple(slot, true, &should_free);\n\nso there's no direct references to disk data at that point. But if\nthere's toasted columns, they'll may only be accessed on the receiving\nside.\n\nSide-note: This very likely rather should use a minimal, rather than a\nfull heap, tuple.\n\n\n> Furthermore, to what extent can I expect other modules to impact a\n> queued HeapTuple? If some external process updates this tuple, when will\n> I see the change? Would it be a possiblity that the update is not\n> reflected on the queued HeapTuple but the external process is not\n> blocked/delayed from updating? In other words, like operating on some\n> kind of multiple snapshots? When does DBMS logging kick in whilst I am\n> transferring a tuple from TupTableStore to another?\n\nI'm not quite sure what you're actually trying to get at. Whether a\ntuple is ferried through the queue or not shouldn't have an impact on\nvisibility / snapshot and locking considerations. For parallel query etc\nthe snapshots are synchronized between the \"leader\" and its workers. If\nyou want to use them for something separate, it's your responsibility to\ndo so.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Oct 2019 01:40:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Understanding TupleQueue impact and overheads?" } ]
[ { "msg_contents": "Hi hackers,\n\nIn recently, I discovered a postgres bug, and I hope I can ask you for the best solution.\nThe problem is as follows:\n\npostgres=# explain analyze select * from xxx where a=500;\nERROR: could not open relation with OID 25989\nThe structure of my table is as follows:\npostgres=# \\d xxx\n Table \"public.xxx\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | text | | |\n\npostgres=# select count(*) from xxx;\n count \n--------\n 800000\n(1 row)\n\npostgres=# select * from xxx limit 3;\n\n a | b\n---+----------------------------------\n 1 | 203c51477570aa517cfa317155dcc52c\n 2 | b58da31baa5c78fee4642fd398bd5909\n 3 | c7c475bf0a3ca2fc2afc4812a4d44c58\n\nI opened the log file and saw that the index of table xxx was deleted,\n\npostgres=# drop index CONCURRENTLY idx_xxx ;\nDROP INDEX\n\nIn order to reproduce this bug, I created and deleted the index again and again on the master.\nWhat is hard to understand is that this bug cannot be repeated 100%.\nI wrote a script that loops over the master and runs the following two sentences.\n\npostgres=# create index idx_xxx on xxx (a);\npostgres=# drop index CONCURRENTLY idx_xxx ;\npostgres=# create index idx_xxx on xxx (a);\npostgres=# drop index CONCURRENTLY idx_xxx ;\n...\n...\n...\nAt the same time, I started two clients in the standby, \nrespectively execute the following sql on the table xxx:\n\npostgres=# explain analyze select * from xxx where a=500;\npostgres=# \\watch 0.1\n\nAfter a few minutes, the bug will appear.\n\nI finally confirmed my guess, I used an index scan in the standby query,\nbut deleted the index on the master at the same time.\nCurious, I went to read the source code of Postgres. 
I found that\n regular DROP INDEX commands imposes a AccessExclusiveLock on the table,\n while drop index concurrently commands only used ShareUpdateExclusiveLock.\n\nAs we all know, only AccessExclusiveLock and AccessShareLock ,a select's lock ,\nare mutually exclusive, and AccessShareLock can't block ShareUpdateExclusiveLock.\nThis is very weird and not desirable.\n\nThis is of course, developers must have thought of this, so we can see in the source \ncode, before the drop index concurrently, will wait for all transactions using this\n index to wait for detection.\n\n But this only exists on the master, my query is executed on the standby.\n I use the pg_waldump tool to parse the wal file, and analyze the stantup process,\n I found that there is no similar operation on the standby, so it will appear that \n when I execute the query on the standby, the index will be deleted by others.\n\n\nI think this is a bug that will affect the user's experience. we need to fix it.\n I have imagined that the logic that detects the query transaction and\n waits for it to end is implemented on the standby,but this may increase the\n log application delay and the delay is exacerbated that cause the master and backup. 
\nThis is not desirable if the query concurrency is large.\n\nAll in all, I expect that you can provide a solution that can use drop index concurrently \nwithout affecting the master-slave delay.\n\nSincerely look forward to your reply and thanks.\n\nadger\n", "msg_date": "Wed, 16 Oct 2019 16:34:31 +0800", "msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?QnVnIGFib3V0IGRyb3AgaW5kZXggY29uY3VycmVudGx5?=" }, { "msg_contents": "Hi,\n\nI can trivially reproduce this - it's enough to create a master-standby\nsetup, and then do this on the master\n\n CREATE TABLE t (a int, b int);\n INSERT INTO t SELECT i, i FROM generate_series(1,10000) s(i);\n\nand run pgbench with this script\n\n DROP INDEX CONCURRENTLY IF EXISTS t_a_idx;\n CREATE INDEX t_a_idx ON t(a);\n\nwhile on the standby there's another pgbench running this script\n\n EXPLAIN ANALYZE SELECT * FROM t WHERE a = 10000;\n\nand it fails pretty fast for me. With an extra assert(false) added to\nsrc/backend/access/common/relation.c I get a backtrace like this:\n\n Program terminated with signal SIGABRT, Aborted.\n #0 0x00007c32e458fe35 in raise () from /lib64/libc.so.6\n Missing separate debuginfos, use: dnf debuginfo-install glibc-2.29-22.fc30.x86_64\n (gdb) bt\n #0 0x00007c32e458fe35 in raise () from /lib64/libc.so.6\n #1 0x00007c32e457a895 in abort () from /lib64/libc.so.6\n #2 0x0000000000a58579 in ExceptionalCondition (conditionName=0xacd9bc \"!(0)\", errorType=0xacd95b \"FailedAssertion\", fileName=0xacd950 \"relation.c\", lineNumber=64) at assert.c:54\n #3 0x000000000048d1bd in relation_open (relationId=38216, lockmode=1) at relation.c:64\n #4 0x00000000005082e4 in index_open (relationId=38216, lockmode=1) at indexam.c:130\n #5 0x000000000080ac3f in get_relation_info (root=0x21698b8, relationObjectId=16385, inhparent=false, rel=0x220ce60) at plancat.c:196\n #6 0x00000000008118c6 in build_simple_rel (root=0x21698b8, relid=1, parent=0x0) at relnode.c:292\n 
#7 0x00000000007d485d in add_base_rels_to_query (root=0x21698b8, jtnode=0x2169478) at initsplan.c:114\n #8 0x00000000007d48a3 in add_base_rels_to_query (root=0x21698b8, jtnode=0x21693e0) at initsplan.c:122\n #9 0x00000000007d8fad in query_planner (root=0x21698b8, qp_callback=0x7ded97 <standard_qp_callback>, qp_extra=0x7fffa4834f10) at planmain.c:168\n #10 0x00000000007dc316 in grouping_planner (root=0x21698b8, inheritance_update=false, tuple_fraction=0) at planner.c:2048\n #11 0x00000000007da7ca in subquery_planner (glob=0x220d078, parse=0x2168f78, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1012\n #12 0x00000000007d942c in standard_planner (parse=0x2168f78, cursorOptions=256, boundParams=0x0) at planner.c:406\n #13 0x00000000007d91e8 in planner (parse=0x2168f78, cursorOptions=256, boundParams=0x0) at planner.c:275\n #14 0x00000000008e1b0d in pg_plan_query (querytree=0x2168f78, cursorOptions=256, boundParams=0x0) at postgres.c:878\n #15 0x0000000000658683 in ExplainOneQuery (query=0x2168f78, cursorOptions=256, into=0x0, es=0x220cd90, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", params=0x0, queryEnv=0x0) at explain.c:367\n #16 0x0000000000658386 in ExplainQuery (pstate=0x220cc28, stmt=0x2141728, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", params=0x0, queryEnv=0x0, dest=0x220cb90) at explain.c:255\n #17 0x00000000008ea218 in standard_ProcessUtility (pstmt=0x21425c0, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x220cb90,\n completionTag=0x7fffa48355c0 \"\") at utility.c:675\n #18 0x00000000008e9a45 in ProcessUtility (pstmt=0x21425c0, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x220cb90,\n completionTag=0x7fffa48355c0 \"\") at utility.c:360\n #19 0x00000000008e8a0c in PortalRunUtility 
(portal=0x219c278, pstmt=0x21425c0, isTopLevel=true, setHoldSnapshot=true, dest=0x220cb90, completionTag=0x7fffa48355c0 \"\") at pquery.c:1175\n #20 0x00000000008e871a in FillPortalStore (portal=0x219c278, isTopLevel=true) at pquery.c:1035\n #21 0x00000000008e8075 in PortalRun (portal=0x219c278, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x21efb90, altdest=0x21efb90, completionTag=0x7fffa48357b0 \"\") at pquery.c:765\n #22 0x00000000008e207c in exec_simple_query (query_string=0x21407b8 \"explain analyze select * from t where a = 10000;\") at postgres.c:1215\n #23 0x00000000008e636e in PostgresMain (argc=1, argv=0x216c600, dbname=0x216c4e0 \"test\", username=0x213c3f8 \"user\") at postgres.c:4236\n #24 0x000000000083c71e in BackendRun (port=0x2165850) at postmaster.c:4437\n #25 0x000000000083beef in BackendStartup (port=0x2165850) at postmaster.c:4128\n #26 0x0000000000838313 in ServerLoop () at postmaster.c:1704\n #27 0x0000000000837bbf in PostmasterMain (argc=3, argv=0x213a360) at postmaster.c:1377\n #28 0x0000000000759643 in main (argc=3, argv=0x213a360) at main.c:228\n\nSo my guess is the root cause is pretty simple - we close/unlock the\nindexes after completing the query, but then EXPLAIN tries to open it\nagain when producing the explain plan.\n\nI don't have a very good idea how to fix this, as explain has no idea\nwhich indexes will be used by the query, etc.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 18 Oct 2019 17:00:54 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Bug about drop index concurrently" }, { "msg_contents": "Thanks for the quick reply.\nAnd sorry I haven’t got back to you sooner .\n\nI have seen this backtrace in the core file, and I also looked at the bug in the standby because there is no lock in the drop index concurrently.\n\nHowever, when our 
business will perform a large number of queries in the standby, this problem will occur more frequently. So we are trying to solve this problem, and the solution we are currently dealing with is to ban it.\n\nOf course, we considered applying the method of waiting to detect the query lock on the master to the standby, but worried about affecting the standby application log delay, so we gave up that.\n\nIf you have a better solution in the future, please push it to the new version, or email it, thank you very much.\n\nregards.\n\nadger.\n\n\n------------------------------------------------------------------\nFrom: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nSent: 2019-10-19 (Saturday) 02:00\nTo: 李杰(慎追) <adger.lj@alibaba-inc.com>\nCc: pgsql-hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Bug about drop index concurrently\n\nHi,\n\nI can trivially reproduce this - it's enough to create a master-standby\nsetup, and then do this on the master\n\n CREATE TABLE t (a int, b int);\n INSERT INTO t SELECT i, i FROM generate_series(1,10000) s(i);\n\nand run pgbench with this script\n\n DROP INDEX CONCURRENTLY IF EXISTS t_a_idx;\n CREATE INDEX t_a_idx ON t(a);\n\nwhile on the standby there's another pgbench running this script\n\n EXPLAIN ANALYZE SELECT * FROM t WHERE a = 10000;\n\nand it fails pretty fast for me. 
With an extra assert(false) added to\nsrc/backend/access/common/relation.c I get a backtrace like this:\n\n Program terminated with signal SIGABRT, Aborted.\n #0 0x00007c32e458fe35 in raise () from /lib64/libc.so.6\n Missing separate debuginfos, use: dnf debuginfo-install glibc-2.29-22.fc30.x86_64\n (gdb) bt\n #0 0x00007c32e458fe35 in raise () from /lib64/libc.so.6\n #1 0x00007c32e457a895 in abort () from /lib64/libc.so.6\n #2 0x0000000000a58579 in ExceptionalCondition (conditionName=0xacd9bc \"!(0)\", errorType=0xacd95b \"FailedAssertion\", fileName=0xacd950 \"relation.c\", lineNumber=64) at assert.c:54\n #3 0x000000000048d1bd in relation_open (relationId=38216, lockmode=1) at relation.c:64\n #4 0x00000000005082e4 in index_open (relationId=38216, lockmode=1) at indexam.c:130\n #5 0x000000000080ac3f in get_relation_info (root=0x21698b8, relationObjectId=16385, inhparent=false, rel=0x220ce60) at plancat.c:196\n #6 0x00000000008118c6 in build_simple_rel (root=0x21698b8, relid=1, parent=0x0) at relnode.c:292\n #7 0x00000000007d485d in add_base_rels_to_query (root=0x21698b8, jtnode=0x2169478) at initsplan.c:114\n #8 0x00000000007d48a3 in add_base_rels_to_query (root=0x21698b8, jtnode=0x21693e0) at initsplan.c:122\n #9 0x00000000007d8fad in query_planner (root=0x21698b8, qp_callback=0x7ded97 <standard_qp_callback>, qp_extra=0x7fffa4834f10) at planmain.c:168\n #10 0x00000000007dc316 in grouping_planner (root=0x21698b8, inheritance_update=false, tuple_fraction=0) at planner.c:2048\n #11 0x00000000007da7ca in subquery_planner (glob=0x220d078, parse=0x2168f78, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1012\n #12 0x00000000007d942c in standard_planner (parse=0x2168f78, cursorOptions=256, boundParams=0x0) at planner.c:406\n #13 0x00000000007d91e8 in planner (parse=0x2168f78, cursorOptions=256, boundParams=0x0) at planner.c:275\n #14 0x00000000008e1b0d in pg_plan_query (querytree=0x2168f78, cursorOptions=256, boundParams=0x0) at postgres.c:878\n #15 
0x0000000000658683 in ExplainOneQuery (query=0x2168f78, cursorOptions=256, into=0x0, es=0x220cd90, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", params=0x0, queryEnv=0x0) at explain.c:367\n #16 0x0000000000658386 in ExplainQuery (pstate=0x220cc28, stmt=0x2141728, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", params=0x0, queryEnv=0x0, dest=0x220cb90) at explain.c:255\n #17 0x00000000008ea218 in standard_ProcessUtility (pstmt=0x21425c0, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x220cb90,\n completionTag=0x7fffa48355c0 \"\") at utility.c:675\n #18 0x00000000008e9a45 in ProcessUtility (pstmt=0x21425c0, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x220cb90,\n completionTag=0x7fffa48355c0 \"\") at utility.c:360\n #19 0x00000000008e8a0c in PortalRunUtility (portal=0x219c278, pstmt=0x21425c0, isTopLevel=true, setHoldSnapshot=true, dest=0x220cb90, completionTag=0x7fffa48355c0 \"\") at pquery.c:1175\n #20 0x00000000008e871a in FillPortalStore (portal=0x219c278, isTopLevel=true) at pquery.c:1035\n #21 0x00000000008e8075 in PortalRun (portal=0x219c278, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x21efb90, altdest=0x21efb90, completionTag=0x7fffa48357b0 \"\") at pquery.c:765\n #22 0x00000000008e207c in exec_simple_query (query_string=0x21407b8 \"explain analyze select * from t where a = 10000;\") at postgres.c:1215\n #23 0x00000000008e636e in PostgresMain (argc=1, argv=0x216c600, dbname=0x216c4e0 \"test\", username=0x213c3f8 \"user\") at postgres.c:4236\n #24 0x000000000083c71e in BackendRun (port=0x2165850) at postmaster.c:4437\n #25 0x000000000083beef in BackendStartup (port=0x2165850) at postmaster.c:4128\n #26 0x0000000000838313 in ServerLoop () at postmaster.c:1704\n #27 0x0000000000837bbf in 
PostmasterMain (argc=3, argv=0x213a360) at postmaster.c:1377\n #28 0x0000000000759643 in main (argc=3, argv=0x213a360) at main.c:228\n\nSo my guess is the root cause is pretty simple - we close/unlock the\nindexes after completing the query, but then EXPLAIN tries to open it\nagain when producing the explain plan.\n\nI don't have a very good idea how to fix this, as explain has no idea\nwhich indexes will be used by the query, etc.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Mon, 21 Oct 2019 10:36:04 +0800", "msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?5Zue5aSN77yaQnVnIGFib3V0IGRyb3AgaW5kZXggY29uY3VycmVudGx5?=" }, { "msg_contents": "Hi all,\n\nI am sorry to bother you again.\n\nI want to consult again, about the last time I raised a bug about drop index, are you going to deal with it in the future? 
Will you ban it, or will you propose a fix? What is your next plan?\n\nI sincerely look forward to your reply. Thanks.\n\nadger.\n\n\n------------------------------------------------------------------\nFrom: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nDate: Saturday, October 19, 2019 02:00\nTo: 李杰(慎追) <adger.lj@alibaba-inc.com>\nCc: pgsql-hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Bug about drop index concurrently\n\nHi,\n\nI can trivially reproduce this - it's enough to create a master-standby\nsetup, and then do this on the master\n\n CREATE TABLE t (a int, b int);\n INSERT INTO t SELECT i, i FROM generate_series(1,10000) s(i);\n\nand run pgbench with this script\n\n DROP INDEX CONCURRENTLY IF EXISTS t_a_idx;\n CREATE INDEX t_a_idx ON t(a);\n\nwhile on the standby there's another pgbench running this script\n\n EXPLAIN ANALYZE SELECT * FROM t WHERE a = 10000;\n\nand it fails pretty fast for me. With an extra assert(false) added to\nsrc/backend/access/common/relation.c I get a backtrace like this:\n\n Program terminated with signal SIGABRT, Aborted.\n #0 0x00007c32e458fe35 in raise () from /lib64/libc.so.6\n Missing separate debuginfos, use: dnf debuginfo-install glibc-2.29-22.fc30.x86_64\n (gdb) bt\n #0 0x00007c32e458fe35 in raise () from /lib64/libc.so.6\n #1 0x00007c32e457a895 in abort () from /lib64/libc.so.6\n #2 0x0000000000a58579 in ExceptionalCondition (conditionName=0xacd9bc \"!(0)\", errorType=0xacd95b \"FailedAssertion\", fileName=0xacd950 \"relation.c\", lineNumber=64) at assert.c:54\n #3 0x000000000048d1bd in relation_open (relationId=38216, lockmode=1) at relation.c:64\n #4 0x00000000005082e4 in index_open (relationId=38216, lockmode=1) at indexam.c:130\n #5 0x000000000080ac3f in get_relation_info (root=0x21698b8, relationObjectId=16385, inhparent=false, rel=0x220ce60) at plancat.c:196\n #6 0x00000000008118c6 in build_simple_rel (root=0x21698b8, relid=1, parent=0x0) at relnode.c:292\n #7 0x00000000007d485d in add_base_rels_to_query (root=0x21698b8, 
jtnode=0x2169478) at initsplan.c:114\n #8 0x00000000007d48a3 in add_base_rels_to_query (root=0x21698b8, jtnode=0x21693e0) at initsplan.c:122\n #9 0x00000000007d8fad in query_planner (root=0x21698b8, qp_callback=0x7ded97 <standard_qp_callback>, qp_extra=0x7fffa4834f10) at planmain.c:168\n #10 0x00000000007dc316 in grouping_planner (root=0x21698b8, inheritance_update=false, tuple_fraction=0) at planner.c:2048\n #11 0x00000000007da7ca in subquery_planner (glob=0x220d078, parse=0x2168f78, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1012\n #12 0x00000000007d942c in standard_planner (parse=0x2168f78, cursorOptions=256, boundParams=0x0) at planner.c:406\n #13 0x00000000007d91e8 in planner (parse=0x2168f78, cursorOptions=256, boundParams=0x0) at planner.c:275\n #14 0x00000000008e1b0d in pg_plan_query (querytree=0x2168f78, cursorOptions=256, boundParams=0x0) at postgres.c:878\n #15 0x0000000000658683 in ExplainOneQuery (query=0x2168f78, cursorOptions=256, into=0x0, es=0x220cd90, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", params=0x0, queryEnv=0x0) at explain.c:367\n #16 0x0000000000658386 in ExplainQuery (pstate=0x220cc28, stmt=0x2141728, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", params=0x0, queryEnv=0x0, dest=0x220cb90) at explain.c:255\n #17 0x00000000008ea218 in standard_ProcessUtility (pstmt=0x21425c0, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x220cb90,\n completionTag=0x7fffa48355c0 \"\") at utility.c:675\n #18 0x00000000008e9a45 in ProcessUtility (pstmt=0x21425c0, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x220cb90,\n completionTag=0x7fffa48355c0 \"\") at utility.c:360\n #19 0x00000000008e8a0c in PortalRunUtility (portal=0x219c278, pstmt=0x21425c0, isTopLevel=true, 
setHoldSnapshot=true, dest=0x220cb90, completionTag=0x7fffa48355c0 \"\") at pquery.c:1175\n #20 0x00000000008e871a in FillPortalStore (portal=0x219c278, isTopLevel=true) at pquery.c:1035\n #21 0x00000000008e8075 in PortalRun (portal=0x219c278, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x21efb90, altdest=0x21efb90, completionTag=0x7fffa48357b0 \"\") at pquery.c:765\n #22 0x00000000008e207c in exec_simple_query (query_string=0x21407b8 \"explain analyze select * from t where a = 10000;\") at postgres.c:1215\n #23 0x00000000008e636e in PostgresMain (argc=1, argv=0x216c600, dbname=0x216c4e0 \"test\", username=0x213c3f8 \"user\") at postgres.c:4236\n #24 0x000000000083c71e in BackendRun (port=0x2165850) at postmaster.c:4437\n #25 0x000000000083beef in BackendStartup (port=0x2165850) at postmaster.c:4128\n #26 0x0000000000838313 in ServerLoop () at postmaster.c:1704\n #27 0x0000000000837bbf in PostmasterMain (argc=3, argv=0x213a360) at postmaster.c:1377\n #28 0x0000000000759643 in main (argc=3, argv=0x213a360) at main.c:228\n\nSo my guess is the root cause is pretty simple - we close/unlock the\nindexes after completing the query, but then EXPLAIN tries to open it\nagain when producing the explain plan.\n\nI don't have a very good idea how to fix this, as explain has no idea\nwhich indexes will be used by the query, etc.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 22 Oct 2019 15:24:41 +0800", "msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?5Zue5aSN77yaQnVnIGFib3V0IGRyb3AgaW5kZXggY29uY3VycmVudGx5?=" }, { "msg_contents": "On Mon, Oct 21, 2019 at 10:36:04AM +0800, 李杰(慎追) wrote:\n>Thanks for the quick reply. And sorry I haven’t got back to you sooner\n>.\n>\n>I have seen this backtrace in the core file, and I also looked at the\n>bug in the standby because there is no lock in the drop index\n>concurrently.\n>\n\nI'm a bit confused. You shouldn't see any crashes and/or core files in\nthis scenario, for two reasons. Firstly, I assume you're running a\nregular build without asserts. Secondly, I had to add an extra assert to\ntrigger the failure. So what core are you talking about?\n\nAlso, it's not clear to me what do you mean by \"bug in the standby\" or\nno lock in the drop index concurrently. Can you explain?\n\n>However, when our business will perform a large number of queries in\n>the standby, this problem will occur more frequently. So we are trying\n>to solve this problem, and the solution we are currently dealing with\n>is to ban it.\n>\n\nHmm, so you observe the issue with regular queries, not just EXPLAIN\nANALYZE?\n\n>Of course, we considered applying the method of waiting to detect the\n>query lock on the master to the standby, but worried about affecting\n>the standby application log delay, so we gave up that.\n>\n\nI don't understand? 
What method?\n\n>If you have a better solution in the future, please push it to the new\n>version, or email it, thank you very much.\n>\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 22 Oct 2019 19:47:08 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaQnVn?= about drop index concurrently" }, { "msg_contents": "On Tue, Oct 22, 2019 at 03:24:41PM +0800, 李杰(慎追) wrote:\n>Hi all,\n>\n>I am sorry to bother you again.\n>\n>I want to consult again, about the last time I raised a bug about drop\n>index, are you going to deal with it in the future? Is it to ban it or\n>to propose a repair plan, what is your next plan?\n>\n\nIt does seem like a bug, i.e. something we need to fix.\n\nNot sure what/how we could ban?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 22 Oct 2019 19:49:37 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaQnVn?= about drop index concurrently" }, { "msg_contents": "On Fri, Oct 18, 2019 at 05:00:54PM +0200, Tomas Vondra wrote:\n>Hi,\n>\n>I can trivially reproduce this - it's enough to create a master-standby\n>setup, and then do this on the master\n>\n> CREATE TABLE t (a int, b int);\n> INSERT INTO t SELECT i, i FROM generate_series(1,10000) s(i);\n>\n>and run pgbench with this script\n>\n> DROP INDEX CONCURRENTLY IF EXISTS t_a_idx;\n> CREATE INDEX t_a_idx ON t(a);\n>\n>while on the standby there's another pgbench running this script\n>\n> EXPLAIN ANALYZE SELECT * FROM t WHERE a = 10000;\n>\n>and it fails pretty fast for me. 
With an extra assert(false) added to\n>src/backend/access/common/relation.c I get a backtrace like this:\n>\n> Program terminated with signal SIGABRT, Aborted.\n> #0 0x00007c32e458fe35 in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: dnf debuginfo-install glibc-2.29-22.fc30.x86_64\n> (gdb) bt\n> #0 0x00007c32e458fe35 in raise () from /lib64/libc.so.6\n> #1 0x00007c32e457a895 in abort () from /lib64/libc.so.6\n> #2 0x0000000000a58579 in ExceptionalCondition (conditionName=0xacd9bc \"!(0)\", errorType=0xacd95b \"FailedAssertion\", fileName=0xacd950 \"relation.c\", lineNumber=64) at assert.c:54\n> #3 0x000000000048d1bd in relation_open (relationId=38216, lockmode=1) at relation.c:64\n> #4 0x00000000005082e4 in index_open (relationId=38216, lockmode=1) at indexam.c:130\n> #5 0x000000000080ac3f in get_relation_info (root=0x21698b8, relationObjectId=16385, inhparent=false, rel=0x220ce60) at plancat.c:196\n> #6 0x00000000008118c6 in build_simple_rel (root=0x21698b8, relid=1, parent=0x0) at relnode.c:292\n> #7 0x00000000007d485d in add_base_rels_to_query (root=0x21698b8, jtnode=0x2169478) at initsplan.c:114\n> #8 0x00000000007d48a3 in add_base_rels_to_query (root=0x21698b8, jtnode=0x21693e0) at initsplan.c:122\n> #9 0x00000000007d8fad in query_planner (root=0x21698b8, qp_callback=0x7ded97 <standard_qp_callback>, qp_extra=0x7fffa4834f10) at planmain.c:168\n> #10 0x00000000007dc316 in grouping_planner (root=0x21698b8, inheritance_update=false, tuple_fraction=0) at planner.c:2048\n> #11 0x00000000007da7ca in subquery_planner (glob=0x220d078, parse=0x2168f78, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1012\n> #12 0x00000000007d942c in standard_planner (parse=0x2168f78, cursorOptions=256, boundParams=0x0) at planner.c:406\n> #13 0x00000000007d91e8 in planner (parse=0x2168f78, cursorOptions=256, boundParams=0x0) at planner.c:275\n> #14 0x00000000008e1b0d in pg_plan_query (querytree=0x2168f78, cursorOptions=256, boundParams=0x0) at 
postgres.c:878\n> #15 0x0000000000658683 in ExplainOneQuery (query=0x2168f78, cursorOptions=256, into=0x0, es=0x220cd90, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", params=0x0, queryEnv=0x0) at explain.c:367\n> #16 0x0000000000658386 in ExplainQuery (pstate=0x220cc28, stmt=0x2141728, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", params=0x0, queryEnv=0x0, dest=0x220cb90) at explain.c:255\n> #17 0x00000000008ea218 in standard_ProcessUtility (pstmt=0x21425c0, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x220cb90,\n> completionTag=0x7fffa48355c0 \"\") at utility.c:675\n> #18 0x00000000008e9a45 in ProcessUtility (pstmt=0x21425c0, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x220cb90,\n> completionTag=0x7fffa48355c0 \"\") at utility.c:360\n> #19 0x00000000008e8a0c in PortalRunUtility (portal=0x219c278, pstmt=0x21425c0, isTopLevel=true, setHoldSnapshot=true, dest=0x220cb90, completionTag=0x7fffa48355c0 \"\") at pquery.c:1175\n> #20 0x00000000008e871a in FillPortalStore (portal=0x219c278, isTopLevel=true) at pquery.c:1035\n> #21 0x00000000008e8075 in PortalRun (portal=0x219c278, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x21efb90, altdest=0x21efb90, completionTag=0x7fffa48357b0 \"\") at pquery.c:765\n> #22 0x00000000008e207c in exec_simple_query (query_string=0x21407b8 \"explain analyze select * from t where a = 10000;\") at postgres.c:1215\n> #23 0x00000000008e636e in PostgresMain (argc=1, argv=0x216c600, dbname=0x216c4e0 \"test\", username=0x213c3f8 \"user\") at postgres.c:4236\n> #24 0x000000000083c71e in BackendRun (port=0x2165850) at postmaster.c:4437\n> #25 0x000000000083beef in BackendStartup (port=0x2165850) at postmaster.c:4128\n> #26 0x0000000000838313 in ServerLoop () at 
postmaster.c:1704\n> #27 0x0000000000837bbf in PostmasterMain (argc=3, argv=0x213a360) at postmaster.c:1377\n> #28 0x0000000000759643 in main (argc=3, argv=0x213a360) at main.c:228\n>\n>So my guess is the root cause is pretty simple - we close/unlock the\n>indexes after completing the query, but then EXPLAIN tries to open it\n>again when producing the explain plan.\n>\n\nD'oh! I've just looked at this issue more carefully, and I realize this\nsuggestion (that it's due to releasing a lock too early) is just bogus.\nSorry about the confusion :-(\n\nIn fact, I think you've been 100% correct in your analysis regarding the\nroot cause, which is that we don't realize there's a query on a standby,\nusing the index that we're dropping (and REINDEX CONCURRENTLY seems to\nhave exactly the same issue).\n\nI've reproduced this on all releases back to 10, I suppose it affects\nall releases with DROP INDEX CONCURRENTLY (I haven't tried, but I don't\nsee why it wouldn't).\n\nI still think it's a bug, and we'll need to fix it somehow. Not sure\nhow, though.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 23 Oct 2019 00:49:53 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Bug about drop index concurrently" }, { "msg_contents": ">\n>I'm a bit confused. You shouldn't see any crashes and/or core files in\n>this scenario, for two reasons. Firstly, I assume you're running a\n>regular build without asserts. Secondly, I had to add an extra assert to\n>trigger the failure. So what core are you talking about?\n>\nSorry, I should explain it more clearly.\nI saw the core file because I modified the postgres source code and added Assert to it.\n>\n>Also, it's not clear to me what do you mean by \"bug in the standby\" or\n>no lock in the drop index concurrently. 
Can you explain?\n>\n\"bug in the standby\" means that we built a master-slave instance, when we executed a large number of queries on the standby, we executed 'drop index concurrently' on the master so that get ‘error’ in the standby. Although it is not 100%, it will appear.\nno lock in the drop index concurrently ::: I think this is because there are not enough advanced locks when executing ‘ drop index concurrently’.\n\n>Hmm, so you observe the issue with regular queries, not just EXPLAIN\n>ANALYZE?\n\nyeah, we have seen this error frequently.\n\n>>Of course, we considered applying the method of waiting to detect the\n>>query lock on the master to the standby, but worried about affecting\n>>the standby application log delay, so we gave up that.\n>>\n>I don't understand? What method?\n>\n\nI analyzed this problem, I used to find out the cause of this problem, I also executed 'drop index concurrently' and ‘explain select * from xxx’ on the master, but the bug did not appear as expected.\nSo I went to analyze the source code. I found that there is such a mechanism on the master that when the 'drop index concurrently' is execute, it wait will every transaction that saw the old index state has finished. source code is as follows follow as:\n\n\n\nWaitForLockers(heaplocktag, AccessExclusiveLock);\n\nTherefore, I think that if this method is also available in standby, then the error will not appear. but I worried about affecting the standby application log delay, so we gave up that.\n\n\n\n\n\n\n------------------------------------------------------------------\n发件人:Tomas Vondra <tomas.vondra@2ndquadrant.com>\n发送时间:2019年10月23日(星期三) 01:47\n收件人:李杰(慎追) <adger.lj@alibaba-inc.com>\n抄 送:pgsql-hackers <pgsql-hackers@lists.postgresql.org>\n主 题:Re: 回复:Bug about drop index concurrently\n\nOn Mon, Oct 21, 2019 at 10:36:04AM +0800, 李杰(慎追) wrote:\n>Thanks for the quick reply. 
And sorry I haven’t got back to you sooner
>.
>
>I have seen this backtrace in the core file, and I also looked at the
>bug in the standby because there is no lock in the drop index
>concurrently.
>

I'm a bit confused. You shouldn't see any crashes and/or core files in
this scenario, for two reasons. Firstly, I assume you're running a
regular build without asserts. Secondly, I had to add an extra assert to
trigger the failure. So what core are you talking about?

Also, it's not clear to me what do you mean by "bug in the standby" or
no lock in the drop index concurrently. Can you explain?

>However, when our business will perform a large number of queries in
>the standby, this problem will occur more frequently. So we are trying
>to solve this problem, and the solution we are currently dealing with
>is to ban it.
>

Hmm, so you observe the issue with regular queries, not just EXPLAIN
ANALYZE?

>Of course, we considered applying the method of waiting to detect the
>query lock on the master to the standby, but worried about affecting
>the standby application log delay, so we gave up that.
>

I don't understand? What method?

>If you have a better solution in the future, please push it to the new
>version, or email it, thank you very much.
>


regards

-- 
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
", "msg_date": "Wed, 23 Oct 2019 14:38:45 +0800", "msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77ya5Zue5aSN77yaQnVnIGFib3V0IGRyb3AgaW5kZXggY29uY3VycmVudGx5?=" }, { "msg_contents": "Ah ha ha , this is great, I am very ashamed of my English expression, did not let you clearly understand my mail.

now, I am very glad that you can understand this. I sincerely hope that I can help you. 
I am also a postgres fan, a freshly graduated student.

We have all confirmed that this bug appears only on the standby and not on the master, but it does affect the use of the standby.

For this bug, I proposed two options: one is to disable this feature (drop index concurrently); the other is to make it wait for queries on the standby, the way it waits on the master, but that may delay log apply on the standby. Because this bug appears on the standby, I think both methods have advantages and disadvantages. So I hope that you can discuss it, and that this helps.

Sincerely 

adger



------------------------------------------------------------------
From: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Sent: Wednesday, October 23, 2019 06:49
To: 李杰(慎追) <adger.lj@alibaba-inc.com>
Cc: pgsql-hackers <pgsql-hackers@lists.postgresql.org>
Subject: Re: Bug about drop index concurrently

On Fri, Oct 18, 2019 at 05:00:54PM +0200, Tomas Vondra wrote:
>Hi,
>
>I can trivially reproduce this - it's enough to create a master-standby
>setup, and then do this on the master
>
> CREATE TABLE t (a int, b int);
> INSERT INTO t SELECT i, i FROM generate_series(1,10000) s(i);
>
>and run pgbench with this script
>
> DROP INDEX CONCURRENTLY IF EXISTS t_a_idx;
> CREATE INDEX t_a_idx ON t(a);
>
>while on the standby there's another pgbench running this script
>
> EXPLAIN ANALYZE SELECT * FROM t WHERE a = 10000;
>
>and it fails pretty fast for me. 
With an extra assert(false) added to\n>src/backend/access/common/relation.c I get a backtrace like this:\n>\n> Program terminated with signal SIGABRT, Aborted.\n> #0 0x00007c32e458fe35 in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: dnf debuginfo-install glibc-2.29-22.fc30.x86_64\n> (gdb) bt\n> #0 0x00007c32e458fe35 in raise () from /lib64/libc.so.6\n> #1 0x00007c32e457a895 in abort () from /lib64/libc.so.6\n> #2 0x0000000000a58579 in ExceptionalCondition (conditionName=0xacd9bc \"!(0)\", errorType=0xacd95b \"FailedAssertion\", fileName=0xacd950 \"relation.c\", lineNumber=64) at assert.c:54\n> #3 0x000000000048d1bd in relation_open (relationId=38216, lockmode=1) at relation.c:64\n> #4 0x00000000005082e4 in index_open (relationId=38216, lockmode=1) at indexam.c:130\n> #5 0x000000000080ac3f in get_relation_info (root=0x21698b8, relationObjectId=16385, inhparent=false, rel=0x220ce60) at plancat.c:196\n> #6 0x00000000008118c6 in build_simple_rel (root=0x21698b8, relid=1, parent=0x0) at relnode.c:292\n> #7 0x00000000007d485d in add_base_rels_to_query (root=0x21698b8, jtnode=0x2169478) at initsplan.c:114\n> #8 0x00000000007d48a3 in add_base_rels_to_query (root=0x21698b8, jtnode=0x21693e0) at initsplan.c:122\n> #9 0x00000000007d8fad in query_planner (root=0x21698b8, qp_callback=0x7ded97 <standard_qp_callback>, qp_extra=0x7fffa4834f10) at planmain.c:168\n> #10 0x00000000007dc316 in grouping_planner (root=0x21698b8, inheritance_update=false, tuple_fraction=0) at planner.c:2048\n> #11 0x00000000007da7ca in subquery_planner (glob=0x220d078, parse=0x2168f78, parent_root=0x0, hasRecursion=false, tuple_fraction=0) at planner.c:1012\n> #12 0x00000000007d942c in standard_planner (parse=0x2168f78, cursorOptions=256, boundParams=0x0) at planner.c:406\n> #13 0x00000000007d91e8 in planner (parse=0x2168f78, cursorOptions=256, boundParams=0x0) at planner.c:275\n> #14 0x00000000008e1b0d in pg_plan_query (querytree=0x2168f78, cursorOptions=256, boundParams=0x0) at 
postgres.c:878\n> #15 0x0000000000658683 in ExplainOneQuery (query=0x2168f78, cursorOptions=256, into=0x0, es=0x220cd90, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", params=0x0, queryEnv=0x0) at explain.c:367\n> #16 0x0000000000658386 in ExplainQuery (pstate=0x220cc28, stmt=0x2141728, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", params=0x0, queryEnv=0x0, dest=0x220cb90) at explain.c:255\n> #17 0x00000000008ea218 in standard_ProcessUtility (pstmt=0x21425c0, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x220cb90,\n> completionTag=0x7fffa48355c0 \"\") at utility.c:675\n> #18 0x00000000008e9a45 in ProcessUtility (pstmt=0x21425c0, queryString=0x21407b8 \"explain analyze select * from t where a = 10000;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x220cb90,\n> completionTag=0x7fffa48355c0 \"\") at utility.c:360\n> #19 0x00000000008e8a0c in PortalRunUtility (portal=0x219c278, pstmt=0x21425c0, isTopLevel=true, setHoldSnapshot=true, dest=0x220cb90, completionTag=0x7fffa48355c0 \"\") at pquery.c:1175\n> #20 0x00000000008e871a in FillPortalStore (portal=0x219c278, isTopLevel=true) at pquery.c:1035\n> #21 0x00000000008e8075 in PortalRun (portal=0x219c278, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x21efb90, altdest=0x21efb90, completionTag=0x7fffa48357b0 \"\") at pquery.c:765\n> #22 0x00000000008e207c in exec_simple_query (query_string=0x21407b8 \"explain analyze select * from t where a = 10000;\") at postgres.c:1215\n> #23 0x00000000008e636e in PostgresMain (argc=1, argv=0x216c600, dbname=0x216c4e0 \"test\", username=0x213c3f8 \"user\") at postgres.c:4236\n> #24 0x000000000083c71e in BackendRun (port=0x2165850) at postmaster.c:4437\n> #25 0x000000000083beef in BackendStartup (port=0x2165850) at postmaster.c:4128\n> #26 0x0000000000838313 in ServerLoop () at 
postmaster.c:1704
> #27 0x0000000000837bbf in PostmasterMain (argc=3, argv=0x213a360) at postmaster.c:1377
> #28 0x0000000000759643 in main (argc=3, argv=0x213a360) at main.c:228
>
>So my guess is the root cause is pretty simple - we close/unlock the
>indexes after completing the query, but then EXPLAIN tries to open it
>again when producing the explain plan.
>

D'oh! I've just looked at this issue more carefully, and I realize this
suggestion (that it's due to releasing a lock too early) is just bogus.
Sorry about the confusion :-(

In fact, I think you've been 100% correct in your analysis regarding the
root cause, which is that we don't realize there's a query on a standby,
using the index that we're dropping (and REINDEX CONCURRENTLY seems to
have exactly the same issue).

I've reproduced this on all releases back to 10, I suppose it affects
all releases with DROP INDEX CONCURRENTLY (I haven't tried, but I don't
see why it wouldn't).

I still think it's a bug, and we'll need to fix it somehow. Not sure
how, though.


regards

-- 
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 
", "msg_date": "Wed, 23 Oct 2019 14:55:24 +0800", "msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?5Zue5aSN77yaQnVnIGFib3V0IGRyb3AgaW5kZXggY29uY3VycmVudGx5?=" }, { "msg_contents": "On Wed, Oct 23, 2019 at 02:38:45PM +0800, 李杰(慎追) wrote:
>>
>>I'm a bit confused. You shouldn't see any crashes and/or core files in
>>this scenario, for two reasons. Firstly, I assume you're running a
>>regular build without asserts. Secondly, I had to add an extra assert
>>to trigger the failure. So what core are you talking about?
>>
>Sorry, I should explain it more clearly. I saw the core file because I
>modified the postgres source code and added Assert to it.
>>

OK

>>Also, it's not clear to me what do you mean by "bug in the standby" or
>>no lock in the drop index concurrently. Can you explain?
>>
>"bug in the standby" means that we built a master-slave instance, when
>we executed a large number of queries on the standby, we executed 'drop
>index concurrently' on the master so that get ‘error’ in the standby.
>Although it is not 100%, it will appear. no lock in the drop index
>concurrently ::: I think this is because there are not enough advanced
>locks when executing ‘ drop index concurrently’.
>

OK, thanks for the clarification. 
Yes, it won't appear every time, it's
likely timing-sensitive (I'll explain in a minute).

>>Hmm, so you observe the issue with regular queries, not just EXPLAIN
>>ANALYZE?
>
>yeah, we have seen this error frequently.
>

That suggests you're doing a lot of 'drop index concurrently', right?

>>>Of course, we considered applying the method of waiting to detect the
>>>query lock on the master to the standby, but worried about affecting
>>>the standby application log delay, so we gave up that.
>>>
>>I don't understand? What method?
>>
>
>I analyzed this problem, I used to find out the cause of this problem,
>I also executed 'drop index concurrently' and ‘explain select * from
>xxx’ on the master, but the bug did not appear as expected. So I went
>to analyze the source code. I found that there is such a mechanism on
>the master that when the 'drop index concurrently' is execute, it wait
>will every transaction that saw the old index state has finished.
>source code is as follows follow as:
>
>WaitForLockers(heaplocktag, AccessExclusiveLock);
>
>Therefore, I think that if this method is also available in standby,
>then the error will not appear. but I worried about affecting the
>standby application log delay, so we gave up that.
>

Yes, but we can't really do that, I'm afraid.

We certainly can't do that on the master because we simply don't have
the necessary information about locks from the standby, and we really
don't want to have it, because with a busy standby that might be quite a
bit of data (plus the standby would have to wait for the master to
confirm each lock acquisition, which I think seems pretty terrible).

On the standby, we don't really have an idea that there's a drop
index running - we only get information about AE locks, and a bunch of
catalog updates. 
I don't think we have a way to determine this is a drop
index in concurrent mode :-(

More precisely, the master sends information about AccessExclusiveLock
in XLOG_STANDBY_LOCK wal record (in xl_standby_locks struct). And when
the standby replays that, it should acquire those locks.

For regular DROP INDEX we send this:

rmgr: Standby ... desc: LOCK xid 8888 db 16384 rel 16385 
rmgr: Standby ... desc: LOCK xid 8888 db 16384 rel 20573 
... catalog changes ...
rmgr: Transaction ... desc: COMMIT 2019-10-23 22:42:27.950995 CEST;
 rels: base/16384/20573; inval msgs: catcache 32 catcache 7 catcache
 6 catcache 50 catcache 49 relcache 20573 relcache 16385 snapshot 2608

while for DROP INDEX CONCURRENTLY we send this

rmgr: Heap ... desc: INPLACE ... catalog update
rmgr: Standby ... desc: INVALIDATIONS ; inval msgs: catcache 32
 relcache 21288 relcache 16385
rmgr: Heap ... desc: INPLACE ... catalog update
rmgr: Standby ... desc: INVALIDATIONS ; inval msgs: catcache 32
 relcache 21288 relcache 16385
rmgr: Standby ... desc: LOCK xid 10326 db 16384 rel 21288 
... catalog updates ...
rmgr: Transaction ... desc: COMMIT 2019-10-23 23:47:10.042568 CEST;
 rels: base/16384/21288; inval msgs: catcache 32 catcache 7 catcache 6
 catcache 50 catcache 49 relcache 21288 relcache 16385 snapshot 2608

So just a single lock on the index, but not the lock on the relation
itself (which makes sense, because for DROP INDEX CONCURRENTLY we don't
get an exclusive lock on the table).

I'm not quite familiar with this part of the code, but the SELECT
backends are clearly getting a stale list of indexes from relcache, and
try to open an index which was already removed by the redo. We do
acquire the lock on the index itself, but that's not sufficient :-(

Not sure how to fix this. I wonder if we could invalidate the relcache
for the relation at some point. 
Or maybe we could add additional
information to the WAL to make the redo wait for all lock waiters, just
like on the master. But that might be tricky because of deadlocks, and
because the redo could easily get "stuck" waiting for long-running
queries.


regards

-- 
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 


", "msg_date": "Thu, 24 Oct 2019 00:04:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: =?utf-8?B?5Zue5aSN77ya5Zue5aSN77yaQnU=?= =?utf-8?Q?g?= about\n drop index concurrently" }, { "msg_contents": ">That suggests you're doing a lot of 'drop index concurrently', right?

Not completely. In the actual scenario, in fact, I didn't perform that many 'drop index concurrently' operations on the master; I just execute a lot of queries on the standby. You know, there is a certain probability every time you execute ‘drop index concurrently’ on the master. Although it is small, it may appear.
>Yes, but we can't really do that, I'm afraid.
>
>We certainly can't do that on the master because we simply don't have
>the necessary information about locks from the standby, and we really
>don't want to have it, because with a busy standby that might be quite a
>bit of data (plust the standby would have to wait for the master to
>confirm each lock acquisition, I think which seems pretty terrible).
>
yeah, we can't do this and would lose too much in order to achieve this.

>On the standby, we don't really have an idea that the there's a drop
>index running - we only get information about AE locks, and a bunch of
>catalog updates. I don't think we have a way to determine this is a drop
>index in concurrent mode :-(
>
>More preciresly, the master sends information about AccessExclusiveLock
>in XLOG_STANDBY_LOCK wal record (in xl_standby_locks struct). 
And when
>the standby replays that, it should acquire those locks.
>
>For regular DROP INDEX we send this:
>
>rmgr: Standby ... desc: LOCK xid 8888 db 16384 rel 16385 
>rmgr: Standby ... desc: LOCK xid 8888 db 16384 rel 20573 
>... catalog changes ...
>rmgr: Transaction ... desc: COMMIT 2019-10-23 22:42:27.950995 CEST;
> rels: base/16384/20573; inval msgs: catcache 32 catcache 7 catcache
> 6 catcache 50 catcache 49 relcache 20573 relcache 16385 snapshot 2608
>
>while for DROP IDNEX CONCURRENTLY we send this
>
>rmgr: Heap ... desc: INPLACE ... catalog update
>rmgr: Standby ... desc: INVALIDATIONS ; inval msgs: catcache 32
>relcache 21288 relcache 16385
>rmgr: Heap ... desc: INPLACE ... catalog update
>rmgr: Standby ... desc: INVALIDATIONS ; inval msgs: catcache 32
> relcache 21288 relcache 16385
>rmgr: Standby ... desc: LOCK xid 10326 db 16384 rel 21288 
>... catalog updates ...
>rmgr: Transaction ... desc: COMMIT 2019-10-23 23:47:10.042568 CEST;
> rels: base/16384/21288; inval msgs: catcache 32 catcache 7 catcache 6
> catcache 50 catcache 49 relcache 21288 relcache 16385 snapshot 2608
>
yeah, you are right, I got this.

>So just a single lock on the index, but not the lock on the relation
>itself (which makes sense, because for DROP INDEX CONCURRENTLY we don't
>get an exclusive lock on the table).
>
>I'm not quite familiar with this part of the code, but the SELECT
>backends are clearly getting a stale list of indexes from relcache, and
>try to open an index which was already removed by the redo. We do
>acquire the lock on the index itself, but that's not sufficient :-(
>
>Not sure how to fix this. I wonder if we could invalidate the relcache
>for the relation at some point.
--method one --
>Or maybe we could add additional
>information to the WAL to make the redo wait for all lock waiters, just
>like on the master. 
But that might be tricky because of deadlocks, and
>because the redo could easily get "stuck" waiting for a long-running
>queries.
--method two --

For method one, I don't think this can solve the problem at all,
because we can't predict on the standby when the master will run 'drop index concurrently'. I also have a way to reproduce this bug manually, to make it easier for you to understand the problem. You can try to follow the steps below:
1. We build a connection on the standby, then use gdb to attach to the backend process.
2. We set a breakpoint at plancat.c:196 in get_relation_info.
3. We execute 'explain select ' on this backend; in gdb you will see that the breakpoint has been hit, and the query will hang.
4. We execute 'drop index concurrently' on the master. (If it were 'drop index', it would be blocked, because there is a query on our standby, so the master could not get the exclusive lock.)
5. On the standby, let gdb continue.
We will see an error (ERROR: could not open relation with OID XXX) in the standby client.

Therefore, we can see that the query is executed first on the standby, and then the 'drop index concurrently' is executed on the master. So no matter how fast we invalidate the relcache, there is no way to prevent this error from happening, because there is always a moment when the index has already been dropped on the master. In other words, there is no way to predict the master's 'drop index concurrently' on the standby.

For method two, you are right: deadlock is possible, and the most unbearable part is that it would delay log apply on the standby, resulting in inconsistent data between the master and the standby.

All in all, I think this bug is a flaw in postgres design. We need to think carefully about how to handle it better. We could even learn from other database products. 
I hope I can help you.

Thank you very much for your attention.

Regards.

adger.



------------------------------------------------------------------
From: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Sent: Thursday, October 24, 2019 06:04
To: 李杰(慎追) <adger.lj@alibaba-inc.com>
Cc: pgsql-hackers <pgsql-hackers@lists.postgresql.org>
Subject: Re: Re: Re: Bug about drop index concurrently

On Wed, Oct 23, 2019 at 02:38:45PM +0800, 李杰(慎追) wrote:
>>
>>I'm a bit confused. You shouldn't see any crashes and/or core files in
>>this scenario, for two reasons. Firstly, I assume you're running a
>>regular build without asserts. Secondly, I had to add an extra assert
>>to trigger the failure. So what core are you talking about?
>>
>Sorry, I should explain it more clearly. I saw the core file because I
>modified the postgres source code and added Assert to it.
>>

OK

>>Also, it's not clear to me what do you mean by "bug in the standby" or
>>no lock in the drop index concurrently. Can you explain?
>>
>"bug in the standby" means that we built a master-slave instance, when
>we executed a large number of queries on the standby, we executed 'drop
>index concurrently' on the master so that get ‘error’ in the standby.
>Although it is not 100%, it will appear. no lock in the drop index
>concurrently ::: I think this is because there are not enough advanced
>locks when executing ‘ drop index concurrently’.
>

OK, thanks for the clarification. 
Yes, it won't appear every time, it's\nlikely timing-sensitive (I'll explain in a minute).\n\n>>Hmm, so you observe the issue with regular queries, not just EXPLAIN\n>>ANALYZE?\n>\n>yeah, we have seen this error frequently.\n>\n\nThat suggests you're doing a lot of 'drop index concurrently', right?\n\n>>>Of course, we considered applying the method of waiting to detect the\n>>>query lock on the master to the standby, but worried about affecting\n>>>the standby application log delay, so we gave up that.\n>>>\n>>I don't understand? What method?\n>>\n>\n>I analyzed this problem, I used to find out the cause of this problem,\n>I also executed 'drop index concurrently' and ‘explain select * from\n>xxx’ on the master, but the bug did not appear as expected. So I went\n>to analyze the source code. I found that there is such a mechanism on\n>the master that when the 'drop index concurrently' is execute, it wait\n>will every transaction that saw the old index state has finished.\n>source code is as follows follow as:\n>\n>WaitForLockers(heaplocktag, AccessExclusiveLock);\n>\n>Therefore, I think that if this method is also available in standby,\n>then the error will not appear. but I worried about affecting the\n>standby application log delay, so we gave up that.\n>\n\nYes, but we can't really do that, I'm afraid.\n\nWe certainly can't do that on the master because we simply don't have\nthe necessary information about locks from the standby, and we really\ndon't want to have it, because with a busy standby that might be quite a\nbit of data (plust the standby would have to wait for the master to\nconfirm each lock acquisition, I think which seems pretty terrible).\n\nOn the standby, we don't really have an idea that the there's a drop\nindex running - we only get information about AE locks, and a bunch of\ncatalog updates. 
I don't think we have a way to determine this is a drop\nindex in concurrent mode :-(\n\nMore preciresly, the master sends information about AccessExclusiveLock\nin XLOG_STANDBY_LOCK wal record (in xl_standby_locks struct). And when\nthe standby replays that, it should acquire those locks.\n\nFor regular DROP INDEX we send this:\n\nrmgr: Standby ... desc: LOCK xid 8888 db 16384 rel 16385 \nrmgr: Standby ... desc: LOCK xid 8888 db 16384 rel 20573 \n... catalog changes ...\nrmgr: Transaction ... desc: COMMIT 2019-10-23 22:42:27.950995 CEST;\n rels: base/16384/20573; inval msgs: catcache 32 catcache 7 catcache\n 6 catcache 50 catcache 49 relcache 20573 relcache 16385 snapshot 2608\n\nwhile for DROP IDNEX CONCURRENTLY we send this\n\nrmgr: Heap ... desc: INPLACE ... catalog update\nrmgr: Standby ... desc: INVALIDATIONS ; inval msgs: catcache 32\n relcache 21288 relcache 16385\nrmgr: Heap ... desc: INPLACE ... catalog update\nrmgr: Standby ... desc: INVALIDATIONS ; inval msgs: catcache 32\n relcache 21288 relcache 16385\nrmgr: Standby ... desc: LOCK xid 10326 db 16384 rel 21288 \n... catalog updates ...\nrmgr: Transaction ... desc: COMMIT 2019-10-23 23:47:10.042568 CEST;\n rels: base/16384/21288; inval msgs: catcache 32 catcache 7 catcache 6\n catcache 50 catcache 49 relcache 21288 relcache 16385 snapshot 2608\n\nSo just a single lock on the index, but not the lock on the relation\nitself (which makes sense, because for DROP INDEX CONCURRENTLY we don't\nget an exclusive lock on the table).\n\nI'm not quite familiar with this part of the code, but the SELECT\nbackends are clearly getting a stale list of indexes from relcache, and\ntry to open an index which was already removed by the redo. We do\nacquire the lock on the index itself, but that's not sufficient :-(\n\nNot sure how to fix this. I wonder if we could invalidate the relcache\nfor the relation at some point. 
Or maybe we could add additional\ninformation to the WAL to make the redo wait for all lock waiters, just\nlike on the master. But that might be tricky because of deadlocks, and\nbecause the redo could easily get \"stuck\" waiting for a long-running\nqueries.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n> That suggests you're doing a lot of 'drop index concurrently', right?\n\nNot completely. In the actual scene, in fact, I didn't perform too much 'drop index concurrently' on the master. I just execute a lot of queries on the standby. You know, it will have a certain probability every time you execute ‘drop index concurrently’ on the master. Although it is small, it may appear.\n\n> Yes, but we can't really do that, I'm afraid.\n>\n> We certainly can't do that on the master because we simply don't have\n> the necessary information about locks from the standby, and we really\n> don't want to have it, because with a busy standby that might be quite a\n> bit of data (plus the standby would have to wait for the master to\n> confirm each lock acquisition, which I think seems pretty terrible).\n\nyeah, we can't do this and will lose too much in order to achieve this.\n\n> On the standby, we don't really have an idea that there's a drop\n> index running - we only get information about AE locks, and a bunch of\n> catalog updates. I don't think we have a way to determine this is a drop\n> index in concurrent mode :-(\n>\n> More precisely, the master sends information about AccessExclusiveLock\n> in XLOG_STANDBY_LOCK wal record (in xl_standby_locks struct). And when\n> the standby replays that, it should acquire those locks.\n>\n> For regular DROP INDEX we send this:\n>\n> rmgr: Standby ... desc: LOCK xid 8888 db 16384 rel 16385\n> rmgr: Standby ... desc: LOCK xid 8888 db 16384 rel 20573\n> ... catalog changes ...\n> rmgr: Transaction ... desc: COMMIT 2019-10-23 22:42:27.950995 CEST;\n>   rels: base/16384/20573; inval msgs: catcache 32 catcache 7 catcache\n>   6 catcache 50 catcache 49 relcache 20573 relcache 16385 snapshot 2608\n>\n> while for DROP INDEX CONCURRENTLY we send this\n>\n> rmgr: Heap ... desc: INPLACE ... catalog update\n> rmgr: Standby ... desc: INVALIDATIONS ; inval msgs: catcache 32\n>   relcache 21288 relcache 16385\n> rmgr: Heap ... desc: INPLACE ... catalog update\n> rmgr: Standby ... desc: INVALIDATIONS ; inval msgs: catcache 32\n>   relcache 21288 relcache 16385\n> rmgr: Standby ... desc: LOCK xid 10326 db 16384 rel 21288\n> ... catalog updates ...\n> rmgr: Transaction ... desc: COMMIT 2019-10-23 23:47:10.042568 CEST;\n>   rels: base/16384/21288; inval msgs: catcache 32 catcache 7 catcache 6\n>   catcache 50 catcache 49 relcache 21288 relcache 16385 snapshot 2608\n\nyeah, you are right, I got this.\n\n> So just a single lock on the index, but not the lock on the relation\n> itself (which makes sense, because for DROP INDEX CONCURRENTLY we don't\n> get an exclusive lock on the table).\n>\n> I'm not quite familiar with this part of the code, but the SELECT\n> backends are clearly getting a stale list of indexes from relcache, and\n> try to open an index which was already removed by the redo. We do\n> acquire the lock on the index itself, but that's not sufficient :-(\n>\n> Not sure how to fix this. I wonder if we could invalidate the relcache\n> for the relation at some point. --method one--\n>\n> Or maybe we could add additional\n> information to the WAL to make the redo wait for all lock waiters, just\n> like on the master. But that might be tricky because of deadlocks, and\n> because the redo could easily get \"stuck\" waiting for a long-running\n> queries. --method two--\n\nFor method one, I don't think this can solve the problem at all, because we can't predict on the standby when the master will 'drop index concurrently'. I also have a way to manually reproduce this bug in order to make it easier for you to understand the problem. 
\n\nYou can try to follow the steps below:\n\n1. We build a connection on the standby, then use gdb to attach to the backend process.\n2. We set a breakpoint at plancat.c:196 in get_relation_info.\n3. We execute 'explain select' on this backend; in gdb you will see that the breakpoint has hit, and the query will hang.\n4. We execute 'drop index concurrently' on the master. (The 'drop index' will be blocked, because there is a query on our standby, so the master cannot get the exclusive lock.)\n5. On the standby, let gdb continue.\n\nWe will see an error (ERROR: could not open relation with OID XXX) in the standby client.\n\nTherefore, we can see that the query is executed first by the standby, and then the 'drop index concurrently' executed by the master. So no matter how fast we can make the relcache invalid, there is no way to prevent this error from happening, because there is always a moment when the index has been dropped by the master. In other words, there is no way to predict the master's 'drop index concurrently' on the standby.\n\nFor method two, you are right, deadlock is possible, and the most unbearable part is that it will bring delay to the log apply on the standby, resulting in inconsistent data between the master and the standby.\n\nAll in all, I think this bug is a flaw in postgres design. We need to think carefully about how to handle it better. We can even learn from other database products. I hope I can help you.\n\nThank you very much for your attention.\n\nRegards.\n\nadger.", "msg_date": "Thu, 24 Oct 2019 11:10:16 +0800", "msg_from": "\"李杰(慎追)\" <adger.lj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: Re: Re: Bug about drop index concurrently" } ]
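The failure sequence described in this thread — a standby backend plans with a stale relcache list of indexes, WAL redo then applies the master's DROP INDEX CONCURRENTLY, and the backend's later index open fails with "could not open relation with OID ..." — can be shown as a deliberately tiny, PostgreSQL-free toy model. Everything below (the class, method names, and the OID-like string) is invented for illustration; it is not PostgreSQL code.

```python
class ToyStandby:
    """Toy model of the race: a 'backend' snapshots the index list
    (its stale relcache), WAL redo then drops the index, and the
    backend's later open of that index fails."""

    def __init__(self):
        # One illustrative index; the OID-like key is invented.
        self.catalog = {"21288": "index_data"}

    def backend_plan(self):
        # The backend builds its plan from a cached list of indexes.
        return list(self.catalog)

    def redo_drop_index(self, oid):
        # Redo removes the index. It only ever locked the index itself,
        # not the table, so it never waited for the running query.
        del self.catalog[oid]

    def backend_open(self, oid):
        # The executor now tries to open an index that is gone.
        if oid not in self.catalog:
            raise LookupError("could not open relation with OID " + oid)
        return self.catalog[oid]


standby = ToyStandby()
plan = standby.backend_plan()        # query starts; index still visible
standby.redo_drop_index(plan[0])     # DROP INDEX CONCURRENTLY replayed
try:
    standby.backend_open(plan[0])    # fails, as in the reported ERROR
except LookupError as exc:
    print(exc)
```

On the master the same interleaving cannot happen because WaitForLockers() makes the drop wait for every transaction that still sees the old index; the toy model deliberately has no counterpart to that wait, mirroring the standby.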
[ { "msg_contents": "Hello everybody,\n\nOur company was in desperate need of portals in async interface of libpq,\nso we patched it.\n\nWe would be happy to upstream the changes.\n\nThe description of changes:\n\nTwo functions in libpq-fe.h:\nPQsendPortalBindParams for sending a command to bind a portal to a\npreviously prepared statement;\nPQsendPortalExecute for executing a previously bound portal with a given\nnumber of rows.\n\nA patch to pqParseInput3 in fe-protocol3.c to handle the `portal suspended`\nmessage tag.\n\nThe patch is ready for review, but it lacks documentation, tests and usage\nexamples.\n\nThere are no functions for sending bind without params and no functions for\nsync interface, but they can easily be added to the feature.\n\n-- \nThank you,\nSergei Fedorov", "msg_date": "Wed, 16 Oct 2019 22:11:32 +0300", "msg_from": "Sergei Fedorov <sergei.a.fedorov@gmail.com>", "msg_from_op": true, "msg_subject": "[Patch proposal] libpq portal support" }, { "msg_contents": "On Thu, 17 Oct 2019 at 03:12, Sergei Fedorov <sergei.a.fedorov@gmail.com>\nwrote:\n\n> Hello everybody,\n>\n> Our company was in desperate need of portals in async interface of libpq,\n> so we patched it.\n>\n> We would be happy to upstream the changes.\n>\n> The 
description of changes:\n>\n> Two functions in libpq-fe.h:\n> PQsendPortalBindParams for sending a command to bind a portal to a\n> previously prepared statement;\n> PQsendPortalExecute for executing a previously bound portal with a given\n> number of rows.\n>\n> A patch to pqParseInput3 in fe-protocol3.c to handle the `portal\n> suspended` message tag.\n>\n> The patch is ready for review, but it lacks documentation, tests and usage\n> examples.\n>\n> There are no functions for sending bind without params and no functions\n> for sync interface, but they can easily be added to the feature.\n>\n\nIf you are happy to put it under The PostgreSQL License, then sending it as\nan attachment here is the first step.\n\nIf possible, please rebase it on top of git master.\n\nSome explanation for why you have this need and what problems this solves\nfor you would be helpful as well.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Fri, 18 Oct 2019 20:21:20 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch proposal] libpq portal support" }, { "msg_contents": "Hello everybody,\n\nYes, we will be happy to put our patch under the PostgreSQL License.\n\nPatch is attached to this email, master was rebased to head prior to\ncreating the patch.\n\nWe are using a C++ wrapper on top of libpq for using database connections\nin multithreaded asynchronous applications. For security reasons (and\npartially because we are too lazy to escape query parameters) we use\nprepared queries and parameter binding for execution. There are situations\nwhen we need to fetch the query results not in one batch but in a `paged`\nway, the most convenient way is to use the portals feature of PostgreSQL\nprotocol.\n\nпт, 18 окт. 2019 г. 
в 15:21, Craig Ringer <craig@2ndquadrant.com>:\n\n> On Thu, 17 Oct 2019 at 03:12, Sergei Fedorov <sergei.a.fedorov@gmail.com>\n> wrote:\n>\n>> Hello everybody,\n>>\n>> Our company was in desperate need of portals in async interface of libpq,\n>> so we patched it.\n>>\n>> We would be happy to upstream the changes.\n>>\n>> The description of changes:\n>>\n>> Two functions in libpq-fe.h:\n>> PQsendPortalBindParams for sending a command to bind a portal to a\n>> previously prepared statement;\n>> PQsendPortalExecute for executing a previously bound portal with a given\n>> number of rows.\n>>\n>> A patch to pqParseInput3 in fe-protocol3.c to handle the `portal\n>> suspended` message tag.\n>>\n>> The patch is ready for review, but it lacks documentation, tests and\n>> usage examples.\n>>\n>> There are no functions for sending bind without params and no functions\n>> for sync interface, but they can easily be added to the feature.\n>>\n>\n> If you are happy to put it under The PostgreSQL License, then sending it\n> as an attachment here is the first step.\n>\n> If possible, please rebase it on top of git master.\n>\n> Some explanation for why you have this need and what problems this solves\n> for you would be helpful as well.\n>\n> --\n> Craig Ringer http://www.2ndQuadrant.com/\n> 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n>\n\n\n-- \nThank you,\nSergei Fedorov", "msg_date": "Thu, 7 Nov 2019 12:43:23 +0300", "msg_from": "Sergei Fedorov <sergei.a.fedorov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [Patch proposal] libpq portal support" }, { "msg_contents": "On Thu, 7 Nov 2019 at 17:43, Sergei Fedorov <sergei.a.fedorov@gmail.com>\nwrote:\n\n> Hello everybody,\n>\n> Yes, we will be happy to put our patch under the PostgreSQL License.\n>\n> Patch is attached to this email, master was rebased to head prior to\n> creating the patch.\n>\n> We are using a C++ wrapper on top of libpq for using database connections\n> in multithreaded asynchronous 
applications. For security reasons (and\n> partially because we are too lazy to escape query parameters) we use\n> prepared queries and parameter binding for execution. There are situations\n> when we need to fetch the query results not in one batch but in a `paged`\n> way, the most convenient way is to use the portals feature of PostgreSQL\n> protocol.\n>\n>\n>\nThanks. That's a really good reason. It'd also bring libpq closer to\nfeature-parity with PgJDBC.\n\nPlease add it to the commitfest app https://commitfest.postgresql.org/\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Fri, 8 Nov 2019 13:44:02 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch proposal] libpq portal support" }, { "msg_contents": "On Thu, 7 Nov 2019 at 17:43, Sergei Fedorov <sergei.a.fedorov@gmail.com>\nwrote:\n\n> Hello everybody,\n>\n> Yes, we will be happy to put our patch under the PostgreSQL License.\n>\n> Patch is attached to this email, master was rebased to head prior to\n> creating the patch.\n>\n> We are using a C++ wrapper on top of libpq for using database connections\n> in multithreaded asynchronous applications. For security reasons (and\n> partially because we are too lazy to escape query parameters) we use\n> prepared queries and parameter binding for execution. There are situations\n> when we need to fetch the query results not in one batch but in a `paged`\n> way, the most convenient way is to use the portals feature of PostgreSQL\n> protocol.\n>\n>\nBy way of initial patch review: there's a lot of copy/paste here that\nshould be avoided if possible. It looks like the added function\nPQsendPortalBindParams() is heavily based on PQsendQueryGuts(), which is\nthe common implementation shared by the existing PQsendQueryParams()\nand PQsendQueryPrepared() .\n\nSimilar for PQsendPortalExecute().\n\nI'd like to see the common code factored out, perhaps by adding the needed\nfunctionality into PQsendQueryGuts() etc.\n\nThe patch is also missing documentation; please add it\nto doc/src/sgml/libpq.sgml in docbook XML format. See the existing function\nexamples.\n\nI'd ask you to add test cover, but we don't really have a useful test suite\nfor libpq yet, so there's not much you can do there. 
It definitely won't\nfly without the docs and copy/paste reduction though.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Fri, 8 Nov 2019 14:14:35 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch proposal] libpq portal support" } ]
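At the wire level, the API proposed in this thread maps onto the extended-query messages of the documented PostgreSQL v3 protocol: Bind creates a named portal, Execute runs it with a row limit, and the server answers with PortalSuspended ('s') while rows remain. The patch itself is only described above, not shown, so the helper below is merely an illustrative sketch of the Execute-message framing — it is not code from the patch or from libpq.

```python
import struct

PORTAL_SUSPENDED = b"s"  # server's reply type while the portal has more rows

def execute_message(portal: str, max_rows: int) -> bytes:
    """Frontend Execute ('E') message: run an already-bound portal,
    returning at most max_rows rows (0 means 'all remaining rows')."""
    payload = portal.encode() + b"\x00" + struct.pack("!i", max_rows)
    # The Int32 length counts itself plus the payload, but not the 'E' byte.
    return b"E" + struct.pack("!i", 4 + len(payload)) + payload

# A paged fetch in the spirit of PQsendPortalExecute(conn, "p1", 100)
# would put this on the wire:
print(execute_message("p1", 100))
```

After such an Execute, the server sends DataRow messages followed by PortalSuspended if the row limit was reached, or CommandComplete once the portal is drained — which is exactly the `portal suspended` tag the patch teaches pqParseInput3 to handle.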
[ { "msg_contents": "Hi all,\n\nI have just bumped into $subject, and we now use the table_*\nequivalents in the code. Any objections to the simple patch attached\nto clean up that?\n\nThanks,\n--\nMichael", "msg_date": "Thu, 17 Oct 2019 10:47:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Remaining calls of heap_close/heap_open in the tree" }, { "msg_contents": "Hi,\n\nOn 2019-10-17 10:47:06 +0900, Michael Paquier wrote:\n> I have just bumped into $subject, and we now use the table_*\n> equivalents in the code. Any objections to the simple patch attached\n> to clean up that?\n\nThey're not really \"remaining\", as much as having been introduced after\nthe introduction of table_open()/close()...\n\nWonder if it's worth removing the backward compat ones from master? I\ndon't quite think so, but...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Oct 2019 01:04:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remaining calls of heap_close/heap_open in the tree" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-10-17 10:47:06 +0900, Michael Paquier wrote:\n>> I have just bumped into $subject, and we now use the table_*\n>> equivalents in the code. Any objections to the simple patch attached\n>> to clean up that?\n\n> They're not really \"remaining\", as much as having been introduced after\n> the introduction of table_open()/close()...\n\n> Wonder if it's worth removing the backward compat ones from master? 
I\n> don't quite think so, but...\n\nIf we don't remove 'em, we'll keep getting new calls from patches that\nhaven't been updated.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Oct 2019 10:15:39 +0200", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remaining calls of heap_close/heap_open in the tree" }, { "msg_contents": "On Thu, Oct 17, 2019 at 01:04:50AM -0700, Andres Freund wrote:\n> Wonder if it's worth removing the backward compat ones from master? I\n> don't quite think so, but...\n\nI would vote for the removal so as we'll never see that again in\ncore. Let's see what others think here.\n--\nMichael", "msg_date": "Thu, 17 Oct 2019 17:25:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Remaining calls of heap_close/heap_open in the tree" }, { "msg_contents": "On 2019-Oct-17, Michael Paquier wrote:\n\n> On Thu, Oct 17, 2019 at 01:04:50AM -0700, Andres Freund wrote:\n> > Wonder if it's worth removing the backward compat ones from master? I\n> > don't quite think so, but...\n> \n> I would vote for the removal so as we'll never see that again in\n> core. Let's see what others think here.\n\nAgreed. There are enough other API changes that if an external\nextension wants to keep using heap_* in their code, they can add their\nown defines anyway.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 17 Oct 2019 06:58:27 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Remaining calls of heap_close/heap_open in the tree" }, { "msg_contents": "Hi,\n\nOn 2019-10-17 06:58:27 -0300, Alvaro Herrera wrote:\n> On 2019-Oct-17, Michael Paquier wrote:\n> \n> > On Thu, Oct 17, 2019 at 01:04:50AM -0700, Andres Freund wrote:\n> > > Wonder if it's worth removing the backward compat ones from master? 
I\n> > > don't quite think so, but...\n> > \n> > I would vote for the removal so as we'll never see that again in\n> > core. Let's see what others think here.\n> \n> Agreed. There are enough other API changes that if an external\n> extension wants to keep using heap_* in their code, they can add their\n> own defines anyway.\n\nThere's plenty extensions that essentially only need to change\nheap_open/close to table_open/close between 11 and 12. And it's\nespecially the simpler ones where that's the case.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 17 Oct 2019 04:02:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remaining calls of heap_close/heap_open in the tree" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n\n> Hi,\n>\n> On 2019-10-17 06:58:27 -0300, Alvaro Herrera wrote:\n>> On 2019-Oct-17, Michael Paquier wrote:\n>> \n>> > On Thu, Oct 17, 2019 at 01:04:50AM -0700, Andres Freund wrote:\n>> > > Wonder if it's worth removing the backward compat ones from master? I\n>> > > don't quite think so, but...\n>> > \n>> > I would vote for the removal so as we'll never see that again in\n>> > core. Let's see what others think here.\n>> \n>> Agreed. There are enough other API changes that if an external\n>> extension wants to keep using heap_* in their code, they can add their\n>> own defines anyway.\n>\n> There's plenty extensions that essentially only need to change\n> heap_open/close to table_open/close between 11 and 12. And it's\n> especially the simpler ones where that's the case.\n\nWould it be possible to wrap them in some #if(n)def guard so that\nthey're available when building out-of-tree extensions, but not when\nbuilding postgres itself?\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. 
- Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n", "msg_date": "Thu, 17 Oct 2019 12:34:44 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker)", "msg_from_op": false, "msg_subject": "Re: Remaining calls of heap_close/heap_open in the tree" }, { "msg_contents": "On Thu, Oct 17, 2019 at 12:34:44PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Would it be possible to wrap them in some #if(n)def guard so that\n> they're available when building out-of-tree extensions, but not when\n> building postgres itself?\n\nNot sure that's worth the trouble. If there are no objections, I will\nremove the compatibility macros.\n--\nMichael", "msg_date": "Fri, 18 Oct 2019 10:03:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Remaining calls of heap_close/heap_open in the tree" }, { "msg_contents": "On Fri, Oct 18, 2019 at 10:03:11AM +0900, Michael Paquier wrote:\n> Not sure that's worth the trouble. If there are no objections, I will\n> remove the compatibility macros.\n\nOkay, cleanup done with the compatibility macros removed.\n--\nMichael", "msg_date": "Sat, 19 Oct 2019 11:26:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Remaining calls of heap_close/heap_open in the tree" } ]
[ { "msg_contents": "Hi Folks,\n\n\nI am doing some development for Enduro/X distributed transaction \nmiddleware XA driver emulation, so that it would be suitable for ecpg \napps too, and so the program opens the connection with help of \nECPGconnect() with assigned connection id, .e.g:\n\n/**\n  * Perform connect\n  * @param conndata parsed connection data\n  * @param connname connection name\n  * @return NULL (connection failed) or connection object\n  */\nexpublic PGconn * ndrx_pg_connect(ndrx_pgconnect_t *conndata, char \n*connname)\n{\n     PGconn *ret = NULL;\n\n     NDRX_LOG(log_debug, \"Establishing ECPG connection: [%s]\", conndata);\n\n     /* OK, try to open, with out autocommit please!\n      */\n     if (!ECPGconnect (__LINE__, conndata->c, conndata->url, conndata->user,\n             conndata->password, connname, EXFALSE))\n     {\n         NDRX_LOG(log_error, \"ECPGconnect failed, code %ld state: [%s]: %s\",\n                 (long)sqlca.sqlcode, sqlca.sqlstate, \nsqlca.sqlerrm.sqlerrmc);\n         ret = NULL;\n         goto out;\n     }\n\n     ret = ECPGget_PGconn(connname);\n     if (NULL==ret)\n     {\n         NDRX_LOG(log_error, \"Postgres error: failed to get PQ \nconnection!\");\n         ret = NULL;\n         goto out;\n     }\n\nout:\n   ...\n\n\nAnd connection at the end of the program are closed by:\n\n\n/**\n  * disconnect from postgres\n  * @param conn current connection object\n  * @param connname connection name\n  * @return EXSUCCEED/EXFAIL\n  */\nexpublic int ndrx_pg_disconnect(PGconn *conn, char *connname)\n{\n     int ret = EXSUCCEED;\n\n     NDRX_LOG(log_debug, \"Closing ECPG connection: [%s]\", connname);\n\n     if (!ECPGdisconnect(__LINE__, connname))\n     {\n         NDRX_LOG(log_error, \"ECPGdisconnect failed: %s\",\n                 PQerrorMessage(conn));\n         EXFAIL_OUT(ret);\n     }\nout:\n     return ret;\n}\n\n\nI logs I have:\n\nN:NDRX:5:b86e6a53: 
\n3940:7f4ccc218300:012:20191017:091547294:a_open_entry:tmi/xa.c:0365:atmi_xa_open_entry \nRMID=1\nN:NDRX:5:b86e6a53: \n3940:7f4ccc218300:012:20191017:091547294:a_open_entry:switch.c:0295:Connection \nname: [20191017-91547294-11]\nN:NDRX:5:b86e6a53: \n3940:7f4ccc218300:012:20191017:091547294:x_pg_connect:s/ecpg.c:0067:Establishing \nECPG connection: []\nN:NDRX:5:b86e6a53: 3940:7f4cdab329c0:001:20191017:091547296:_tpcontinue \n:return.c:0631:Long jumping to continue!\n\n...\n\nN:NDRX:5:b86e6a53: \n3940:7f4cdab329c0:001:20191017:091548579:_close_entry:tmi/xa.c:0404:atmi_xa_close_entry\nN:NDRX:5:b86e6a53: \n3940:7f4cdab329c0:001:20191017:091548579:g_disconnect:s/ecpg.c:0102:Closing \nECPG connection: [20191017-91546256-0]\nN:NDRX:4:b86e6a53: \n3940:7f4cdab329c0:001:20191017:091548580:_close_entry:switch.c:0341:Connection \nclosed\n\n\nBut the problem is that at times I get the following leaks:\n\n==3940==ERROR: LeakSanitizer: detected memory leaks\n\nDirect leak of 716 byte(s) in 3 object(s) allocated from:\n     #0 0x7f4cd9a42b50 in __interceptor_malloc \n(/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)\n     #1 0x7f4cd43a4e58 in CRYPTO_zalloc \n(/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1+0x17ae58)\n\nDirect leak of 584 byte(s) in 1 object(s) allocated from:\n     #0 0x7f4cd9a42b50 in __interceptor_malloc \n(/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)\n     #1 0x7f4cd43a4e58 in CRYPTO_zalloc \n(/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1+0x17ae58)\n     #2 0x7f4cd4ba1b5f in PQconnectPoll \n(/usr/lib/x86_64-linux-gnu/libpq.so.5+0xeb5f)\n     #3 0x7f4cd4ba29de (/usr/lib/x86_64-linux-gnu/libpq.so.5+0xf9de)\n     #4 0x7f4cd4ba5666 in PQconnectdbParams \n(/usr/lib/x86_64-linux-gnu/libpq.so.5+0x12666)\n     #5 0x7f4cd4de89c2 in ECPGconnect \n(/usr/lib/x86_64-linux-gnu/libecpg.so.6+0xc9c2)\n     #6 0x7f4cd50f9d73 in ndrx_pg_connect \n/home/user1/endurox/xadrv/postgres/ecpg.c:71\n     #7 0x7f4cd50f6ce7 in xa_open_entry \n/home/user1/endurox/xadrv/postgres/pgswitch.c:297\n     
#8 0x7f4cd50f6ce7 in xa_open_entry_stat \n/home/user1/endurox/xadrv/postgres/pgswitch.c:785\n     #9 0x7f4cd9403ef0 in atmi_xa_open_entry \n/home/user1/endurox/libatmi/xa.c:373\n     #10 0x7f4cd940bcce in ndrx_tpopen /home/user1/endurox/libatmi/xa.c:1335\n     #11 0x7f4cd93c27e2 in tpopen /home/user1/endurox/libatmi/atmi.c:468\n     #12 0x55ae20f4e0f9 in tm_thread_init \n/home/user1/endurox/tmsrv/tmsrv.c:115\n     #13 0x55ae20f4f3b8 in TPTMSRV_TH /home/user1/endurox/tmsrv/tmsrv.c:161\n     #14 0x55ae20f6ca63 in poolthread_do \n/home/user1/endurox/libexthpool/thpool.c:370\n     #15 0x7f4cd8bf36da in start_thread \n(/lib/x86_64-linux-gnu/libpthread.so.0+0x76da)\n\nIndirect leak of 1728 byte(s) in 8 object(s) allocated from:\n     #0 0x7f4cd9a42b50 in __interceptor_malloc \n(/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)\n     #1 0x7f4cd43a4e58 in CRYPTO_zalloc \n(/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1+0x17ae58)\n\nCan anybody give a hint where the problem could be?\n\n\nThe program links libecpg and libpq.\n\n$ psql --version\npsql (PostgreSQL) 10.10 (Ubuntu 10.10-0ubuntu0.18.04.1)\n\n\nThanks a lot in advance,\n\nMadars\n\n\n\n\n", "msg_date": "Thu, 17 Oct 2019 13:12:03 +0300", "msg_from": "Madars Vitolins <madars.vitolins@gmail.com>", "msg_from_op": true, "msg_subject": "Memory leak reported by address sanitizer in\n ECPGconnect/CRYPTO_zalloc" } ]
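[Editor's note: a leak trace like the one above that reaches `ECPGconnect` via `PQconnectPoll` usually means a connection was opened but never matched by an `ECPGdisconnect` on some exit path — the "Long jumping to continue!" line in the log is a plausible suspect, since a long jump can skip cleanup code; the bare `CRYPTO_zalloc` frames with no libpq caller are often OpenSSL's process-lifetime allocations, which LeakSanitizer cannot attribute. The fix is to release the connection on every path. A stand-alone sketch of that pairing discipline — `fake_connect`, `fake_disconnect`, and `with_connection` are illustrative stand-ins, not libecpg API — mirroring the goto-cleanup shape of `ndrx_pg_connect()` above:]

```c
/* open_conns counts unreleased connections -- effectively what
 * LeakSanitizer would report if a path skipped the disconnect. */
static int open_conns = 0;

static int  fake_connect(const char *name)    { (void) name; open_conns++; return 1; }
static void fake_disconnect(const char *name) { (void) name; open_conns--; }

/* Run some work under a named connection, releasing it on every exit path. */
static int with_connection(const char *name, int (*work)(void))
{
    int ret = -1;

    if (!fake_connect(name))
        goto out;              /* nothing acquired, nothing to release */

    ret = work() ? 0 : -1;     /* a failing work() must still reach cleanup */
    fake_disconnect(name);     /* matched on success and failure alike */
out:
    return ret;
}

static int ok_work(void)   { return 1; }
static int fail_work(void) { return 0; }
```

[The counter stays at zero after both the success and the failure path; any code path that returns, throws, or long-jumps past the disconnect would leave it nonzero, which is exactly the shape of leak the sanitizer flags.]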