[
{
"msg_contents": "Hi,\n\nValgrind on our internal buildfarm complained about use-after-free\nduring currentEventTriggerState->commandList manipulations, e.g. lappend\nin EventTriggerCollectSimpleCommand. I've discovered that the source of\nthe problem is EventTriggerAlterTableEnd not bothering to switch into its\nown context before appending to the list. ced138e8cba fixed this in\nmaster and 13 but wasn't backpatched further, so I see the problem in\n12 and below.\n\nThe particular reproducing scenario is somewhat involved; it combines the\npg_pathman [1] extension and an SQL interface to it created in our fork of\nPostgres. Namely, we allow creating partitions in a CREATE TABLE ...\nPARTITION BY statement (similar to what [2] proposes). Each partition\nis created via a separate SPI call, which in turn ends up running an\nAlterTableStmt through ProcessUtility as PROCESS_UTILITY_SUBCOMMAND; so\nEventTriggerAlterTableStart/End is done, but an additional\nEventTriggerQueryState is not allocated and commands are collected in the\ntoplevel EventTriggerQueryState. Of course SPI frees its memory between\nthe calls, which makes valgrind scream.\n\nAdmittedly our case is tricky and I'm not sure it is possible to make up\nsomething like that in the pure core code, but I do believe other\nextension writers might run into this, so I propose to backpatch the\nattached three-line fix to all supported branches.\n\nTechnically, all you (an extension author) have to do to encounter this\nis\n 1) Have a toplevel EventTriggerQueryState, i.e. catch a utility statement.\n 2) Inside it, run AlterTable as PROCESS_UTILITY_SUBCOMMAND in some\n short-lived context.\n\n[1] https://github.com/postgrespro/pg_pathman\n[2] https://www.postgresql.org/message-id/flat/CALT9ZEFBv05OhLMKO1Lbo_Zg9a0v%2BU9q9twe%3Dt-dixfR45RmVQ%40mail.gmail.com#f86f0fcfa62d5108fb81556a43f42457\n\n\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 27 Oct 2020 12:36:24 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Use-after-free in 12- EventTriggerAlterTableEnd"
},
{
"msg_contents": "Arseny Sher <a.sher@postgrespro.ru> writes:\n> Valgrind on our internal buildfarm complained about use-after-free\n> during currentEventTriggerState->commandList manipulations, e.g. lappend\n> in EventTriggerCollectSimpleCommand. I've discovered that the source of\n> problem is EventTriggerAlterTableEnd not bothering to switch into its\n> own context before appending to the list. ced138e8cba fixed this in\n> master and 13 but wasn't backpatched further, so I see the problem in\n> 12-.\n\nYeah, that clearly should have been back-patched --- the fact that it\naccidentally didn't fail in the most common case wasn't a good reason\nfor leaving the bug in place. I'm not excited about the test case\nced138e8cba added though, so I think your proposed patch is fine.\nWill push shortly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Oct 2020 15:21:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use-after-free in 12- EventTriggerAlterTableEnd"
}
]
[
{
"msg_contents": "Hi,\n\nI noticed that the table AM abstraction introduced the symbol\nHEAP_TABLE_AM_HANDLER_OID, although we already have a convention for\ndefining symbols automatically for builtin functions, which in this case is\n(currently unused) F_HEAP_TABLEAM_HANDLER.\n\nIt seems a wart to have two symbols for the same function (seemingly\naccidentally). I suppose it's unacceptable to remove the non-standard\nsymbol since it's been referred to in code for a while now. We could remove\nthe unused (in core anyway) standard one by arranging fmgroids.h to use\nexplicit symbols from pg_proc.dat where they exist, as well as prevent such\nsymbols from being emitted into pg_proc_d.h. But then again there is\nno guarantee the standard symbol is not being used elsewhere. Thoughts?\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 27 Oct 2020 08:43:28 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "duplicate function oid symbols"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> I noticed that the table AM abstraction introduced the symbol\n> HEAP_TABLE_AM_HANDLER_OID, although we already have a convention for\n> defining symbols automatically for builtin functions, which in this case is\n> (currently unused) F_HEAP_TABLEAM_HANDLER.\n\nYeah, that seems wrong. I'd just remove HEAP_TABLE_AM_HANDLER_OID.\nAs long as we're not back-patching the change, it seems like a very\nminor thing to fix, if anyone outside core is referencing the old name.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Oct 2020 09:51:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > I noticed that the table AM abstraction introduced the symbol\n> > HEAP_TABLE_AM_HANDLER_OID, although we already have a convention for\n> > defining symbols automatically for builtin functions, which in this case\n> is\n> > (currently unused) F_HEAP_TABLEAM_HANDLER.\n>\n> Yeah, that seems wrong. I'd just remove HEAP_TABLE_AM_HANDLER_OID.\n> As long as we're not back-patching the change, it seems like a very\n> minor thing to fix, if anyone outside core is referencing the old name.\n>\n\nOk, here is a patch to fix that, and also throw an error if pg_proc.dat has\nan explicitly defined symbol.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 27 Oct 2020 17:40:09 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 05:40:09PM -0400, John Naylor wrote:\n> Ok, here is a patch to fix that, and also throw an error if pg_proc.dat has\n> an explicitly defined symbol.\n\nI am not seeing any uses of HEAP_TABLE_AM_HANDLER_OID in the wild, so\n+1 for just removing it, and not back-patching.\n--\nMichael",
"msg_date": "Wed, 28 Oct 2020 14:26:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "I wrote:\n\n\n> Ok, here is a patch to fix that, and also throw an error if pg_proc.dat\n> has an explicitly defined symbol.\n>\n\nIt occurred to me I neglected to explain the error with a comment, which\nI've added in v2.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 28 Oct 2020 11:44:26 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n>> Ok, here is a patch to fix that, and also throw an error if pg_proc.dat\n>> has an explicitly defined symbol.\n\n> It occurred to me I neglected to explain the error with a comment, which\n> I've added in v2.\n\nPushed with a bit of tweaking of the error message.\n\nI wondered about introducing a similar prohibition for pg_type.\nThe only existing oid_symbol in pg_type that I think has enough\ngrandfather status to be tough to change is CASHOID for \"money\".\nBut we could imagine special-casing that with a handmade macro\n\n#define CASHOID MONEYOID\n\nand then getting rid of the oid_symbol entries. (Or perhaps we\ncould just up and nuke CASHOID too? It's somewhat dubious that\nany outside code is really using that macro.)\n\nMay not be worth the trouble, but if we're anal enough to do this\npatch maybe we should do that too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Oct 2020 12:25:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 12:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wondered about introducing a similar prohibition for pg_type.\n>\n\nThat might be worth doing, since some of the grandfathered macros are\nclustered together, which could lead to more cases creeping in as people\nmatch new types to examples nearby.\n\n\n> The only existing oid_symbol in pg_type that I think has enough\n> grandfather status to be tough to change is CASHOID for \"money\".\n> But we could imagine special-casing that with a handmade macro\n>\n> #define CASHOID MONEYOID\n>\n> and then getting rid of the oid_symbol entries. (Or perhaps we\n> could just up and nuke CASHOID too? It's somewhat dubious that\n> any outside code is really using that macro.)\n>\n\nYeah, grepping shows that some of those aren't even used in core code. On\nthe other hand, the difference from the heap_am_handler case is the\nstandard macros don't already exist for these pg_type entries. The handmade\nmacro idea could be used for all eight just as easily as for one.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 28 Oct 2020 14:08:28 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "Hi,\n\nThanks for fixing the HEAP_TABLE_AM_HANDLER_OID one.\n\nOn 2020-10-28 14:08:28 -0400, John Naylor wrote:\n> > The only existing oid_symbol in pg_type that I think has enough\n> > grandfather status to be tough to change is CASHOID for \"money\".\n> > But we could imagine special-casing that with a handmade macro\n> >\n> > #define CASHOID MONEYOID\n> >\n> > and then getting rid of the oid_symbol entries. (Or perhaps we\n> > could just up and nuke CASHOID too? It's somewhat dubious that\n> > any outside code is really using that macro.)\n> >\n> \n> Yeah, grepping shows that some of those aren't even used in core code. On\n> the other hand, the difference from the heap_am_handler case is the\n> standard macros don't already exist for these pg_type entries. The handmade\n> macro idea could be used for all eight just as easily as for one.\n\nI think changing type oid macro names is somewhat problematic - in\ncontrast to function oid macros the type macros are much more likely to\nbe used by client applications, e.g. for deciding whether to use binary\nor text format for a type.\n\nA quick code search shows a few references, even just within debian\npackages (some are incorrect hits, others aren't):\nhttps://codesearch.debian.net/search?q=CASHOID&literal=1&perpkg=1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Oct 2020 11:22:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think changing type oid macro names is somewhat problematic - in\n> contrast to function oid macros the type macros are much more likely to\n> be used by client applications, e.g. for deciding whether to use binary\n> or text format for a type.\n\n> A quick code search shows a few references, even just within debian\n> packages (some are incorrect hits, others aren't):\n> https://codesearch.debian.net/search?q=CASHOID&literal=1&perpkg=1\n\nYeah, I can easily believe that for CASHOID in particular. So I'm\nokay with keeping that available as a handmade alias. The other\nextant oid_symbol entries are\n\nPGNODETREEOID\nPGNDISTINCTOID\nPGDEPENDENCIESOID\nPGMCVLISTOID\nPGDDLCOMMANDOID\nLSNOID\nEVTTRIGGEROID\n\nThe only one of these that client code would plausibly be using is LSNOID,\nand even that is a bit of a stretch. Moreover, this clearly shows the\neffect John mentioned that people have been copying the style of adjacent\nentries rather than making use of the standard oid_symbol convention like\nthey should --- some of these don't exist in the initial v11 version of\npg_type.dat.\n\nI'd suggest keeping CASHOID and LSNOID available as aliases, and renaming\nthe rest.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Oct 2020 14:49:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "Hi,\n\nOn 2020-10-28 14:49:06 -0400, Tom Lane wrote:\n> The other extant oid_symbol entries are\n> \n> PGNODETREEOID\n> PGNDISTINCTOID\n> PGDEPENDENCIESOID\n> PGMCVLISTOID\n> PGDDLCOMMANDOID\n> EVTTRIGGEROID\n\n> The only one of these that client code would plausibly be using is LSNOID,\n> and even that is a bit of a stretch.\n\nThere are quite a few references to LSNOID in github code:\nhttps://github.com/search?o=desc&q=LSNOID&s=indexed&type=Code\n\nThere also are a few references to the more marginal symbols above. But\nthey look more like somebody trying to be complete. E.g.\nhttps://github.com/yugabyte/yugabyte-db/blob/8d0ef3f7f8c49a8d9bec302cdcc0c40f5d9e785b/src/postgres/src/backend/utils/misc/pg_yb_utils.c#L500\nalthough there are also slightly more intentional-looking references like\nhttps://github.com/tada/pljava/blob/63d8a5e467a9c0f626c48e9ee134a58ac308fd8e/pljava/src/main/java/org/postgresql/pljava/jdbc/SQLXMLImpl.java#L177\n\n\n> Moreover, this clearly shows the\n> effect John mentioned that people have been copying the style of adjacent\n> entries rather than making use of the standard oid_symbol convention like\n> they should --- some of these don't exist in the initial v11 version of\n> pg_type.dat.\n\nWonder if it's worth using something like 'backward_compat_oid_symbol'\nand rejecting plain oid_symbol references for pg_type? That'd perhaps be\nless likely to be copied?\n\n\n> I'd suggest keeping CASHOID and LSNOID available as aliases, and renaming\n> the rest.\n\nI don't really have an opinion on whether it's worth keeping the other\naliases or not...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Oct 2020 12:17:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-10-28 14:49:06 -0400, Tom Lane wrote:\n>> Moreover, this clearly shows the\n>> effect John mentioned that people have been copying the style of adjacent\n>> entries rather than making use of the standard oid_symbol convention like\n>> they should --- some of these don't exist in the initial v11 version of\n>> pg_type.dat.\n\n> Wonder if it's worth using something like 'backward_compat_oid_symbol'\n> and rejecting plain oid_symbol references for pg_type? That'd perhaps be\n> less likely to be copied?\n\nNah. What I'm imagining is just that pg_type.h contains\n\n#ifdef EXPOSE_TO_CLIENT_CODE\n\n/*\n * Backwards compatibility for ancient random spellings of OID macros.\n * Don't use these macros in new code.\n */\n#define CASHOID MONEYOID\n#define LSNOID PG_LSNOID\n\n#endif\n\nand then the negotiation here is only about whether to make this list\nlonger. We don't need to complicate genbki.pl with a new facility.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Oct 2020 15:24:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "On 2020-10-28 15:24:20 -0400, Tom Lane wrote:\n> Nah. What I'm imagining is just that pg_type.h contains\n> \n> #ifdef EXPOSE_TO_CLIENT_CODE\n> \n> /*\n> * Backwards compatibility for ancient random spellings of OID macros.\n> * Don't use these macros in new code.\n> */\n> #define CASHOID MONEYOID\n> #define LSNOID PG_LSNOID\n> \n> #endif\n\nAh, good idea. +1\n\n\n> We don't need to complicate genbki.pl with a new facility.\n\nI assume you plan to error out if oid_symbol is defined for pg_type\ngoing forward?\n\n\n",
"msg_date": "Wed, 28 Oct 2020 12:28:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 3:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> and then the negotiation here is only about whether to make this list\n> longer. We don't need to complicate genbki.pl with a new facility.\n>\n\nAgreed, and reformat_dat_files.pl must also know about these special\nattributes.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 28 Oct 2020 15:28:49 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I assume you plan to error out if oid_symbol is defined for pg_type\n> going forward?\n\nRight, just like we just did for pg_proc.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Oct 2020 15:36:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 3:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Nah. What I'm imagining is just that pg_type.h contains\n>\n> #ifdef EXPOSE_TO_CLIENT_CODE\n>\n> /*\n> * Backwards compatibility for ancient random spellings of OID macros.\n> * Don't use these macros in new code.\n> */\n> #define CASHOID MONEYOID\n> #define LSNOID PG_LSNOID\n>\n> #endif\n>\n\nHere is a quick patch implementing this much.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 28 Oct 2020 15:58:33 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: duplicate function oid symbols"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> Here is a quick patch implementing this much.\n\nPushed with a couple cosmetic tweaks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Oct 2020 13:35:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function oid symbols"
}
]
[
{
"msg_contents": "Hi Stephen, hackers,\n\n> The analyze is doing more-or-less random i/o since it's skipping through\n> the table picking out select blocks, not doing regular sequential i/o.\nVS\n>> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890910, bstrategy=0x1102278) at heapam_handler.c:984\n>> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890912, bstrategy=0x1102278) at heapam_handler.c:984\n>> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890922, bstrategy=0x1102278) at heapam_handler.c:984\n>Not really sure what's interesting here, but it does look like we're\n>skipping through the table as expected.\n\nYes, but not randomly in this case. I wanted to point out that this is an incrementing block number, so I've included this debug output, which might trigger readahead heuristics.\nPerhaps this depends on how the table was built / vacuumed? (in this case, pure INSERT-only; I would expect the same in time series DBs and DWHs)\n\n> With all those 'readahead' calls it certainly makes one wonder if the\n> Linux kernel is reading more than just the block we're looking for\n> because it thinks we're doing a sequential read and will therefore want\n> the next few blocks when, in reality, we're going to skip past them,\n> meaning that any readahead the kernel is doing is likely just wasted\n> I/O.\n\nI've done some quick&dirty tests with blockdev --setra/setfra 0 after spending time looking at the smgr/md/fd API changes required to find a shortcut, but I'm actually getting slightly worse timings, at least on \"laptop DB tests\". One thing that I've noticed is that this would be needed only for automatic analyze, but not for automatic vacuum, where apparently there is some boost due to readahead.\n\n> That definitely seems like a useful thing to include and thanks for the\n> patch! Please be sure to register it in the commitfest app:\n> https://commitfest.postgresql.org\n\nThank you! Thread is now registered.\n\n> I would think that, ideally, we'd teach analyze.c to work in the same\n> way that bitmap heap scans do- that is, use posix_fadvise to let the\n> kernel know what pages we're going to want next instead of the kernel\n> guessing (incorrectly) or not doing any pre-fetching. I didn't spend a\n> lot of time poking, but it doesn't look like analyze.c tries to do any\n> prefetching today. In a similar vein, I wonder if VACUUM should be\n> doing prefetching too today, at least when it's skipping through the\n> heap based on the visibility map and jumping over all-frozen pages.\n\nMy only idea would be that a lot of those blocks could be read asynchronously in batches (AIO) with POSIX_FADV_RANDOM issued on the block range beforehand, so maybe the optimization is possible, but not until we'll have AIO ;)\n\n> Haven't looked too closely at this but in general +1 on the idea and\n> this approach looks pretty reasonable to me. Only thing I can think of\n> off-hand is to check how it compares to other places where we report IO\n> read time and make sure that it looks similar.\n\nOk, I've changed the output in the 002 version to include \"avg read rate\" just like in the autovacuum case, while still maintaining single-line output, e.g.:\nautomatic analyze of table \"test.public.t1_default\" avg read rate: 96.917 MB/s (read time: 2.52 s), system usage: CPU: user: 0.28 s, system: 0.26 s, elapsed: 2.94 s\n\n-J.",
"msg_date": "Tue, 27 Oct 2020 15:06:05 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": true,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings Jakub,\n\n* Jakub Wartak (Jakub.Wartak@tomtom.com) wrote:\n> > The analyze is doing more-or-less random i/o since it's skipping through\n> > the table picking out select blocks, not doing regular sequential i/o.\n> VS\n> >> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890910, bstrategy=0x1102278) at heapam_handler.c:984\n> >> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890912, bstrategy=0x1102278) at heapam_handler.c:984\n> >> Breakpoint 1, heapam_scan_analyze_next_block (scan=0x10c8098, blockno=19890922, bstrategy=0x1102278) at heapam_handler.c:984\n> >Not really sure what's interesting here, but it does look like we're\n> >skipping through the table as expected.\n> \n> Yes, but not randomly in this case. I wanted to point out that this is incrementing block number, therefore I've included this debug output which might trigger readahead heuristics.\n\nSure, it's incrementing, but it's skipping- this is very similar to what\nwe do with Bitmap Heap Scans, and that makes it different from a typical\nsequential scan.\n\n> Perhaps this depends on how the table was built / vacuumed ? (in this case, pure-INSERT-only; I would expect the same in time series DBs and DWHs). \n\nNo, this is how ANALYZE behaves and hasn't got much to do with how the\ntable was built or vacuumed.\n\n> > With all those 'readahead' calls it certainly makes one wonder if the\n> > Linux kernel is reading more than just the block we're looking for\n> > because it thinks we're doing a sequential read and will therefore want\n> > the next few blocks when, in reality, we're going to skip past them,\n> > meaning that any readahead the kernel is doing is likely just wasted\n> > I/O.\n> \n> I've done some quick&dirty tests with blockdev --setra/setfra 0 after spending time looking at the smgr/md/fd API changes required to find shortcut, but I'm getting actually a little bit worse timings at least on \"laptop DB tests\". 
One thing that I've noticed is that needs to be only for automatic-analyze, but not for automatic-vacuum where apparently there is some boost due to readahead.\n\nInteresting that you weren't seeing any benefit to disabling readahead.\nWere you able to see where the time in the kernel was going when\nreadahead was turned off for the ANALYZE?\n\nNote that you shouldn't need to make changes to smgr/md/fd to leverage\nposix_fadvise- what you would do is use PrefetchBuffer(), see\nBitmapPrefetch().\n\nThe VACUUM case is going to be complicated by what's in the visibility\nmap. A VACUUM that isn't able to skip any pages yet is certainly going\nto benefit from the kernel's readahead, but a VACUUM that's able to skip\nover pages likely wouldn't benefit as much.\n\nThe way to test this would look something like:\n\n- COPY a bunch of data into a table\n- make sure to commit that and any other ongoing transactions\n- VACUUM FREEZE the table\n- check the visibility map to make sure most of the pages are marked as\n all-frozen after the VACUUM FREEZE\n- randomly UPDATE the table, to really get the effect, maybe update 20%\n of the pages while leaving the rest alone (and therefore 80% of the\n table should still have the all-frozen bit set in the visibility map)\n- *then* do a VACUUM on the table and see what happens with different\n amounts of read-ahead (or, ideally, with posix_fadvise() being used to\n let the kernel know what pages we're going to actually want).\n\n> > That definitely seems like a useful thing to include and thanks for the\n> > patch! Please be sure to register it in the commitfest app:\n> > https://commitfest.postgresql.org\n> \n> Thank you! Thread is now registered.\n\nGreat!\n\n> > I would think that, ideally, we'd teach analyze.c to work in the same\n> > way that bitmap heap scans do- that is, use posix_fadvise to let the\n> > kernel know what pages we're going to want next instead of the kernel\n> > guessing (incorrectly) or not doing any pre-fetching. 
I didn't spend a\n> > lot of time poking, but it doesn't look like analyze.c tries to do any\n> > prefetching today. In a similar vein, I wonder if VACUUM should be\n> > doing prefetching too today, at least when it's skipping through the\n> > heap based on the visibility map and jumping over all-frozen pages.\n> \n> My only idea would be that a lot of those blocks could be read asynchronously in batches (AIO) with POSIX_FADV_RANDOM issued on block-range before, so maybe the the optimization is possible, but not until we'll have AIO ;)\n\nWell, the idea is that posix_fadvise() usage through PrefetchBuffer()\ngets us some of that by letting the kernel know what we're going to ask\nfor next. AIO is a whole other animal that's been discussed off and on\naround here but it's a much larger and more invasive change than just\ncalling posix_fadvise().\n\n> > Haven't looked too closely at this but in general +1 on the idea and\n> > this approach looks pretty reasonable to me. Only thing I can think of\n> > off-hand is to check how it compares to other places where we report IO\n> > read time and make sure that it looks similar.\n> \n> Ok, I've changed the output in 002 version to include \"avg read rate\" just like in the autovacuum case but still maintaining single line output, e.g: \n> automatic analyze of table \"test.public.t1_default\" avg read rate: 96.917 MB/s (read time: 2.52 s), system usage: CPU: user: 0.28 s, system: 0.26 s, elapsed: 2.94 s\n\nAh, in this case I had been thinking actually about the other things\nthat track_io_timing controls, as in:\n\n ExplainPropertyFloat(\"I/O Read Time\", \"ms\",\n INSTR_TIME_GET_MILLISEC(usage->blk_read_time),\n 3, es);\n ExplainPropertyFloat(\"I/O Write Time\", \"ms\",\n INSTR_TIME_GET_MILLISEC(usage->blk_write_time),\n 3, es);\n\nSeems we already aren't exactly consistent in how we report these\nthings, unfortunately.\n\nThat said, since this is ANALYZE, yeah, it probably makes more sense to\nhave it look more like what 
VACUUM reports than what you'd get from\nEXPLAIN. As an interesting point also- what VACUUM reports isn't\nthrough using track_io_timing, which makes one wonder if we should be\ndoing that here or not..\n\nThanks!\n\nStephen",
"msg_date": "Tue, 27 Oct 2020 11:35:26 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
}
]
[
{
"msg_contents": "As I mentioned in [1], I've volunteered to clear out the TODO list of items\nthat appear to be too difficult, controversial, or otherwise not worth\ndoing to warrant being listed there. I'll be working a few sections at a\ntime, and every so often I'll have a list of proposed items for removal. If\nI don't hear objections, I'll remove the items after a few days while going\nthrough the next set.\n\nWhere there's an email thread, I've skimmed a few messages to get a sense\nof the community's thoughts on it. Where easily determined, I've taken age\ninto account, insofar as something from 2017 is going to get much more\nbenefit of doubt than something from 2008. I've added after each item a\nphrase that sums up the reason I believe it doesn't belong anymore.\nFeedback welcome, of course, although I suspect there won't be much.\n\n**Administration\n\n- Have custom variables be transaction-safe\n Old and found to be difficult after attempting\n\n- Allow custom variables to appear in pg_settings()\n Old and controversial\n\n- Implement the SQL-standard mechanism whereby REVOKE ROLE revokes only the\nprivilege granted by the invoking role, and not those granted by other roles\n Old and difficult\n\n- Prevent query cancel packets from being replayed by an attacker,\nespecially when using SSL\n Old and difficult\n\n*Configuration files\n\n- Consider normalizing fractions in postgresql.conf, perhaps using '%'\n At the time (2007), some gucs used an actual percentage.\n\n- Add external tool to auto-tune some postgresql.conf parameters\n There are already out-of-core tools that try to do this.\n\n- Create utility to compute accurate random_page_cost value\n Seems outdated: In the current age of SSDs and cloud environments, it's\noften just set to 1.1, and there hasn't been a demand to be more accurate\nthan that.\n\n- Allow synchronous_standby_names to be disabled after communication\nfailure with all synchronous standby servers exceeds some timeout\n 
Controversial\n\n- Adjust rounding behavior for numeric GUC values\n Controversial\n\n*Tablespaces\n\n- Allow WAL replay of CREATE TABLESPACE to work when the directory\nstructure on the recovery computer is different from the original\n Thread quote: \"part of the difficult, perhaps-not-worth doing impossible\nproblems\"\n\n- Allow per-tablespace quotas\n This seems to point to the larger problem space of disk space monitoring,\nand should probably be phrased thusly, and is a much bigger project or set\nof projects.\n\n- Allow tablespaces on RAM-based partitions for temporary objects\n In the thread, what's desired is the ability to have some amount of\ndurability on a RAM-disk without WAL logging.\n\n- Close race in DROP TABLESPACE on Windows\n This refers to buildfarm failures from 2014.\n\n*Statistics Collector\n\n- Track number of WAL files ready to be archived in pg_stat_archiver\n Thread quote: \"pg_stat_archiver already has a column for\nlast_archived_wal and last_failed_wal, so you can already work out how many\nfiles there must be between then and now\"\n\n*Point-In-Time Recovery\n\n- Allow archive_mode to be changed without server restart\n Controversial and old\n\n*Standby server mode\n\n- Allow pg_xlogfile_name() to be used in recovery mode\n Controversial and old\n\n- Change walsender so that it applies per-role settings\n Old and possibly obsolete\n\n--\n[1]\nhttps://www.postgresql.org/message-id/CAFBsxsHbqMzDoGB3eAGmpcpB%2B7uae%2BLLi_G%2Bo8HMEECM9CbQcQ%40mail.gmail.com\n\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 27 Oct 2020 15:24:35 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "cutting down the TODO list thread"
},
{
"msg_contents": "Hi,\n\nOn 2020-10-27 15:24:35 -0400, John Naylor wrote:\n> As I mentioned in [1], I've volunteered to clear out the TODO list of items\n> that appear to be too difficult, controversial, or otherwise not worth\n> doing to warrant being listed there. I'll be working a few sections at a\n> time, and every so often I'll have a list of proposed items for removal. If\n> I don't hear objections, I'll remove the items after a few days while going\n> through the next set.\n> \n> Where there's an email thread, I've skimmed a few messages to get a sense\n> of the community's thoughts on it. Where easily determined, I've taken age\n> into account, insofar as something from 2017 is going to get much more\n> benefit of doubt than something from 2008. I've added after each item a\n> phrase that sums up the reason I believe it doesn't belong anymore.\n> Feedback welcome, of course, although I suspect there won't be much.\n>\n\n> - Prevent query cancel packets from being replayed by an attacker,\n> especially when using SSL\n> Old and difficult\n\nFWIW, I don't think we should remove this. Our current solution has some\nserious issues that we should address at some point.\n\n\n\n> - Allow WAL replay of CREATE TABLESPACE to work when the directory\n> structure on the recovery computer is different from the original\n> Thread quote: \"part of the difficult, perhaps-not-worth doing impossible\n> problems\"\n\nI think we ought to do something here. Mostly because the current\nsituation makes it impossible to test many things on a single\nsystem. And we have a partial solution with the tablespace mapping\nfiles.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Oct 2020 12:36:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 03:24:35PM -0400, John Naylor wrote:\n> As I mentioned in [1], I've volunteered to clear out the TODO list of items\n> that appear to be too difficult, controversial, or otherwise not worth doing to\n> warrant being listed there. I'll be working a few sections at a time, and every\n> so often I'll have a list of proposed items for removal. If I don't hear\n> objections, I'll remove the items after a few days while going through the next\n> set.\n> \n> Where there's an email thread, I've skimmed a few messages to get a sense of\n> the community's thoughts on it. Where easily determined, I've taken age into\n> account, insofar as something from 2017 is going to get much more benefit of\n> doubt than something from 2008. I've added after each item a phrase that sums\n> up the reason I believe it doesn't belong anymore. Feedback welcome, of course,\n> although I suspect there won't be much.\n\nThanks for working on this. It certainly needs new eyes (not mine). ;-)\n\nI am fine reomving all the items below. 
I am kind of disappointed we\nhave these _stuck_ items, but I don't see a clear way forward, so let's\njust remove them and see what requests we get for them.\n\n---------------------------------------------------------------------------\n\n> **Administration\n> \n> - Have custom variables be transaction-safe\n> Old and found to be difficult after attempting\n> \n> - Allow custom variables to appear in pg_settings()\n> Old and controversial\n> \n> - Implement the SQL-standard mechanism whereby REVOKE ROLE revokes only the\n> privilege granted by the invoking role, and not those granted by other roles\n> Old and difficult\n> \n> - Prevent query cancel packets from being replayed by an attacker, especially\n> when using SSL\n> Old and difficult\n> \n> *Configuration files\n> \n> - Consider normalizing fractions in postgresql.conf, perhaps using '%'\n> At the time (2007), some gucs used an actual percentage.\n> \n> - Add external tool to auto-tune some postgresql.conf parameters\n> There are already out-of-core tools that try to do this.\n> \n> - Create utility to compute accurate random_page_cost value\n> Seems outdated: In the current age of SSDs and cloud environments, it's often\n> just set to 1.1, and there hasn't been a demand to be more accurate than that.\n> \n> - Allow synchronous_standby_names to be disabled after communication failure\n> with all synchronous standby servers exceeds some timeout\n> Controversial\n> \n> - Adjust rounding behavior for numeric GUC values\n> Controversial\n> \n> *Tablespaces\n> \n> - Allow WAL replay of CREATE TABLESPACE to work when the directory structure on\n> the recovery computer is different from the original\n> Thread quote: \"part of the difficult, perhaps-not-worth doing impossible\n> problems\"\n> \n> - Allow per-tablespace quotas\n> This seems to point to the larger problem space of disk space monitoring, and\n> should probably be phrased thusly, and is a much bigger project or set of\n> projects.\n> \n> - Allow tablespaces on RAM-based partitions for temporary objects\n> In the thread, what's desired is the ability to have some amount of\n> durability on a RAM-disk without WAL logging.\n> \n> - Close race in DROP TABLESPACE on Windows\n> This refers to buildfarm failures from 2014.\n> \n> *Statistics Collector\n> \n> - Track number of WAL files ready to be archived in pg_stat_archiver\n> Thread quote: \"pg_stat_archiver already has a column for last_archived_wal\n> and last_failed_wal, so you can already work out how many files there must be\n> between then and now\"\n> \n> *Point-In-Time Recovery\n> \n> - Allow archive_mode to be changed without server restart\n> Controversial and old\n> \n> *Standby server mode\n> \n> - Allow pg_xlogfile_name() to be used in recovery mode\n> Controversial and old\n> \n> - Change walsender so that it applies per-role settings\n> Old and possibly obsolete\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 27 Oct 2020 15:46:14 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 03:46:14PM -0400, Bruce Momjian wrote:\n> On Tue, Oct 27, 2020 at 03:24:35PM -0400, John Naylor wrote:\n> > As I mentioned in [1], I've volunteered to clear out the TODO list of items\n> > that appear to be too difficult, controversial, or otherwise not worth doing to\n> > warrant being listed there. I'll be working a few sections at a time, and every\n> > so often I'll have a list of proposed items for removal. If I don't hear\n> > objections, I'll remove the items after a few days while going through the next\n> > set.\n> > \n> > Where there's an email thread, I've skimmed a few messages to get a sense of\n> > the community's thoughts on it. Where easily determined, I've taken age into\n> > account, insofar as something from 2017 is going to get much more benefit of\n> > doubt than something from 2008. I've added after each item a phrase that sums\n> > up the reason I believe it doesn't belong anymore. Feedback welcome, of course,\n> > although I suspect there won't be much.\n> \n> Thanks for working on this. It certainly needs new eyes (not mine). ;-)\n> \n> I am fine reomving all the items below. I am kind of disappointed we\n> have these _stuck_ items, but I don't see a clear way forward, so let's\n> just remove them and see what requests we get for them.\n\nDo any of these limitations need to be documented before removing them\nfrom the TODO list?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 27 Oct 2020 15:52:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 8:36 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-10-27 15:24:35 -0400, John Naylor wrote:\n> > - Allow WAL replay of CREATE TABLESPACE to work when the directory\n> > structure on the recovery computer is different from the original\n> > Thread quote: \"part of the difficult, perhaps-not-worth doing impossible\n> > problems\"\n>\n> I think we ought to do something here. Mostly because the current\n> situation makes it impossible to test many things on a single\n> system. And we have a partial solution with the tablespace mapping\n> files.\n\n+1, we need to get something like this working so that we can write\ndecent replication tests. FWIW there was another little thread on the\ntopic, not listed there:\n\nhttps://www.postgresql.org/message-id/flat/CALfoeisEF92F5nJ-aAcuWTvF_Aogxq_1bHLem_kVfM_tHc2mfg%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 28 Oct 2020 09:00:17 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 4:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Wed, Oct 28, 2020 at 8:36 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2020-10-27 15:24:35 -0400, John Naylor wrote:\n> > > - Allow WAL replay of CREATE TABLESPACE to work when the directory\n> > > structure on the recovery computer is different from the original\n> > > Thread quote: \"part of the difficult, perhaps-not-worth doing\n> impossible\n> > > problems\"\n> >\n> > I think we ought to do something here. Mostly because the current\n> > situation makes it impossible to test many things on a single\n> > system. And we have a partial solution with the tablespace mapping\n> > files.\n>\n> +1, we need to get something like this working so that we can write\n> decent replication tests. FWIW there was another little thread on the\n> topic, not listed there:\n>\n>\n> https://www.postgresql.org/message-id/flat/CALfoeisEF92F5nJ-aAcuWTvF_Aogxq_1bHLem_kVfM_tHc2mfg%40mail.gmail.com\n>\n\nThanks, I've added this thread to the entry.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Tue, Oct 27, 2020 at 4:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:On Wed, Oct 28, 2020 at 8:36 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-10-27 15:24:35 -0400, John Naylor wrote:\n> > - Allow WAL replay of CREATE TABLESPACE to work when the directory\n> > structure on the recovery computer is different from the original\n> > Thread quote: \"part of the difficult, perhaps-not-worth doing impossible\n> > problems\"\n>\n> I think we ought to do something here. Mostly because the current\n> situation makes it impossible to test many things on a single\n> system. And we have a partial solution with the tablespace mapping\n> files.\n\n+1, we need to get something like this working so that we can write\ndecent replication tests. 
FWIW there was another little thread on the\ntopic, not listed there:\n\nhttps://www.postgresql.org/message-id/flat/CALfoeisEF92F5nJ-aAcuWTvF_Aogxq_1bHLem_kVfM_tHc2mfg%40mail.gmail.com\nThanks, I've added this thread to the entry.-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 27 Oct 2020 16:19:47 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 3:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> Do any of these limitations need to be documented before removing them\n> from the TODO list?\n>\n\nI see two areas that might use a mention:\n\n- pg_settings not displaying custom variables\n- SQL standard difference with REVOKE ROLE (I haven't looked further into\nthis)\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Tue, Oct 27, 2020 at 3:52 PM Bruce Momjian <bruce@momjian.us> wrote:\nDo any of these limitations need to be documented before removing them\nfrom the TODO list?\nI see two areas that might use a mention:- pg_settings not displaying custom variables- SQL standard difference with REVOKE ROLE (I haven't looked further into this)-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 27 Oct 2020 16:54:24 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 04:54:24PM -0400, John Naylor wrote:\n> \n> \n> On Tue, Oct 27, 2020 at 3:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> \n> Do any of these limitations need to be documented before removing them\n> from the TODO list?\n> \n> \n> I see two areas that might use a mention:\n> \n> - pg_settings not displaying custom variables\n> - SQL standard difference�with REVOKE ROLE (I haven't looked further into this)\n\nOK, thanks. Do you want to work on a doc patch or should I? Having it\nthe docs at least warns our users.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 27 Oct 2020 18:05:55 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 8:25 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> As I mentioned in [1], I've volunteered to clear out the TODO list of\n> items that appear to be too difficult, controversial, or otherwise not\n> worth doing to warrant being listed there. I'll be working a few sections\n> at a time, and every so often I'll have a list of proposed items for\n> removal. If I don't hear objections, I'll remove the items after a few days\n> while going through the next set.\n>\n\nI'm totally on board with cleaning the list up, but how about marking as\n\"won't fix\" (or similar) instead of actually removing the items? That\nshould help to prevent the same exact items from appearing on the list\nagain, which they eventually would, I believe.\n\n--\nAlex\n\nOn Tue, Oct 27, 2020 at 8:25 PM John Naylor <john.naylor@enterprisedb.com> wrote:As I mentioned in [1], I've volunteered to clear out the TODO list of items that appear to be too difficult, controversial, or otherwise not worth doing to warrant being listed there. I'll be working a few sections at a time, and every so often I'll have a list of proposed items for removal. If I don't hear objections, I'll remove the items after a few days while going through the next set.I'm totally on board with cleaning the list up, but how about marking as \"won't fix\" (or similar) instead of actually removing the items? That should help to prevent the same exact items from appearing on the list again, which they eventually would, I believe.--Alex",
"msg_date": "Wed, 28 Oct 2020 10:55:18 +0100",
"msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, 28 Oct 2020, 17:55 Oleksandr Shulgin <oleksandr.shulgin@zalando.de\nwrote:\n\n> On Tue, Oct 27, 2020 at 8:25 PM John Naylor <john.naylor@enterprisedb.com>\n> wrote:\n>\n>> As I mentioned in [1], I've volunteered to clear out the TODO list of\n>> items that appear to be too difficult, controversial, or otherwise not\n>> worth doing to warrant being listed there. I'll be working a few sections\n>> at a time, and every so often I'll have a list of proposed items for\n>> removal. If I don't hear objections, I'll remove the items after a few days\n>> while going through the next set.\n>>\n>\n> I'm totally on board with cleaning the list up, but how about marking as\n> \"won't fix\" (or similar) instead of actually removing the items? That\n> should help to prevent the same exact items from appearing on the list\n> again, which they eventually would, I believe.\n>\n\n+1\n\n>\n\nOn Wed, 28 Oct 2020, 17:55 Oleksandr Shulgin <oleksandr.shulgin@zalando.de wrote:On Tue, Oct 27, 2020 at 8:25 PM John Naylor <john.naylor@enterprisedb.com> wrote:As I mentioned in [1], I've volunteered to clear out the TODO list of items that appear to be too difficult, controversial, or otherwise not worth doing to warrant being listed there. I'll be working a few sections at a time, and every so often I'll have a list of proposed items for removal. If I don't hear objections, I'll remove the items after a few days while going through the next set.I'm totally on board with cleaning the list up, but how about marking as \"won't fix\" (or similar) instead of actually removing the items? That should help to prevent the same exact items from appearing on the list again, which they eventually would, I believe.+1",
"msg_date": "Wed, 28 Oct 2020 18:15:15 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 11:15 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, 28 Oct 2020, 17:55 Oleksandr Shulgin <oleksandr.shulgin@zalando.de wrote:\n>>\n>> On Tue, Oct 27, 2020 at 8:25 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n>>>\n>>> As I mentioned in [1], I've volunteered to clear out the TODO list of items that appear to be too difficult, controversial, or otherwise not worth doing to warrant being listed there. I'll be working a few sections at a time, and every so often I'll have a list of proposed items for removal. If I don't hear objections, I'll remove the items after a few days while going through the next set.\n>>\n>>\n>> I'm totally on board with cleaning the list up, but how about marking as \"won't fix\" (or similar) instead of actually removing the items? That should help to prevent the same exact items from appearing on the list again, which they eventually would, I believe.\n>\n>\n> +1\n\nA small technical detail on the topic but if doing that, let's not\nmark them as that inline -- create a separate page with those items on\nit.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 28 Oct 2020 11:52:30 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 6:05 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Oct 27, 2020 at 04:54:24PM -0400, John Naylor wrote:\n> >\n> >\n> > On Tue, Oct 27, 2020 at 3:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> >\n> > Do any of these limitations need to be documented before removing\n> them\n> > from the TODO list?\n> >\n> >\n> > I see two areas that might use a mention:\n> >\n> > - pg_settings not displaying custom variables\n> > - SQL standard difference with REVOKE ROLE (I haven't looked further\n> into this)\n>\n> OK, thanks. Do you want to work on a doc patch or should I? Having it\n> the docs at least warns our users.\n>\n\nI'll work on that.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Tue, Oct 27, 2020 at 6:05 PM Bruce Momjian <bruce@momjian.us> wrote:On Tue, Oct 27, 2020 at 04:54:24PM -0400, John Naylor wrote:\n> \n> \n> On Tue, Oct 27, 2020 at 3:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> \n> Do any of these limitations need to be documented before removing them\n> from the TODO list?\n> \n> \n> I see two areas that might use a mention:\n> \n> - pg_settings not displaying custom variables\n> - SQL standard difference with REVOKE ROLE (I haven't looked further into this)\n\nOK, thanks. Do you want to work on a doc patch or should I? Having it\nthe docs at least warns our users.I'll work on that.-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 28 Oct 2020 09:26:09 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 6:52 AM Magnus Hagander <magnus@hagander.net> wrote:\n\n> On Wed, Oct 28, 2020 at 11:15 AM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> >\n> > On Wed, 28 Oct 2020, 17:55 Oleksandr Shulgin <\n> oleksandr.shulgin@zalando.de wrote:\n> >> I'm totally on board with cleaning the list up, but how about marking\n> as \"won't fix\" (or similar) instead of actually removing the items? That\n> should help to prevent the same exact items from appearing on the list\n> again, which they eventually would, I believe.\n> >\n> >\n> > +1\n>\n> A small technical detail on the topic but if doing that, let's not\n> mark them as that inline -- create a separate page with those items on\n> it.\n>\n\nHow about a section on the same page at the bottom, near \"features we don't\nwant\"?\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Oct 28, 2020 at 6:52 AM Magnus Hagander <magnus@hagander.net> wrote:On Wed, Oct 28, 2020 at 11:15 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, 28 Oct 2020, 17:55 Oleksandr Shulgin <oleksandr.shulgin@zalando.de wrote:\n>> I'm totally on board with cleaning the list up, but how about marking as \"won't fix\" (or similar) instead of actually removing the items? That should help to prevent the same exact items from appearing on the list again, which they eventually would, I believe.\n>\n>\n> +1\n\nA small technical detail on the topic but if doing that, let's not\nmark them as that inline -- create a separate page with those items on\nit.How about a section on the same page at the bottom, near \"features we don't want\"?-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 28 Oct 2020 09:27:43 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 9:27 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Wed, Oct 28, 2020 at 6:52 AM Magnus Hagander <magnus@hagander.net> wrote:\n>>\n>> On Wed, Oct 28, 2020 at 11:15 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> >\n>> > On Wed, 28 Oct 2020, 17:55 Oleksandr Shulgin <oleksandr.shulgin@zalando.de wrote:\n>> >> I'm totally on board with cleaning the list up, but how about marking as \"won't fix\" (or similar) instead of actually removing the items? That should help to prevent the same exact items from appearing on the list again, which they eventually would, I believe.\n>> >\n>> >\n>> > +1\n>>\n>> A small technical detail on the topic but if doing that, let's not\n>> mark them as that inline -- create a separate page with those items on\n>> it.\n>\n>\n> How about a section on the same page at the bottom, near \"features we don't want\"?\n\nYes, that's what I was thinking, otherwise it'll still be too easy to miss.\n\n\n",
"msg_date": "Wed, 28 Oct 2020 22:35:15 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 3:35 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Oct 28, 2020 at 9:27 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> >\n> > On Wed, Oct 28, 2020 at 6:52 AM Magnus Hagander <magnus@hagander.net> wrote:\n> >>\n> >> On Wed, Oct 28, 2020 at 11:15 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >> >\n> >> > On Wed, 28 Oct 2020, 17:55 Oleksandr Shulgin <oleksandr.shulgin@zalando.de wrote:\n> >> >> I'm totally on board with cleaning the list up, but how about marking as \"won't fix\" (or similar) instead of actually removing the items? That should help to prevent the same exact items from appearing on the list again, which they eventually would, I believe.\n> >> >\n> >> >\n> >> > +1\n> >>\n> >> A small technical detail on the topic but if doing that, let's not\n> >> mark them as that inline -- create a separate page with those items on\n> >> it.\n> >\n> >\n> > How about a section on the same page at the bottom, near \"features we don't want\"?\n>\n> Yes, that's what I was thinking, otherwise it'll still be too easy to miss.\n\nI would personally prefer a completely seprate page, but I don't feel\nstrongly enough about it to push for it :) As long as it's very\nclearly marked as such (and maybe the title of the page also changed\nto that it's a combined todo and todon't list? :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 28 Oct 2020 16:20:03 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On 2020-10-28 16:20:03 +0100, Magnus Hagander wrote:\n> I would personally prefer a completely seprate page\n\nSame.\n\n\n",
"msg_date": "Wed, 28 Oct 2020 10:57:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 1:57 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2020-10-28 16:20:03 +0100, Magnus Hagander wrote:\n> > I would personally prefer a completely seprate page\n>\n> Same.\n>\n\nOk, that's two votes for a separate page, and one for a new section on the\nsame page, so it looks like it's a new page. That being the case, I would\nthink it logical to move \"features we don't want\" there. As for the name,\nwe should probably encompass both \"won't fix\" bugs and features not wanted.\nMaybe \"past development ideas\" or \"not worth doing\", but I'm open to better\nideas. Once that's agreed upon, I'll make a new page and migrate the items\nover, minus the two that were mentioned upthread.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Oct 28, 2020 at 1:57 PM Andres Freund <andres@anarazel.de> wrote:On 2020-10-28 16:20:03 +0100, Magnus Hagander wrote:\n> I would personally prefer a completely seprate page\n\nSame.\nOk, that's two votes for a separate page, and one for a new section on the same page, so it looks like it's a new page. That being the case, I would think it logical to move \"features we don't want\" there. As for the name, we should probably encompass both \"won't fix\" bugs and features not wanted. Maybe \"past development ideas\" or \"not worth doing\", but I'm open to better ideas. Once that's agreed upon, I'll make a new page and migrate the items over, minus the two that were mentioned upthread.-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 30 Oct 2020 11:37:59 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "I wrote:\n\n>\n> Ok, that's two votes for a separate page, and one for a new section on the\n> same page, so it looks like it's a new page. That being the case, I would\n> think it logical to move \"features we don't want\" there. As for the name,\n> we should probably encompass both \"won't fix\" bugs and features not wanted.\n> Maybe \"past development ideas\" or \"not worth doing\", but I'm open to better\n> ideas. Once that's agreed upon, I'll make a new page and migrate the items\n> over, minus the two that were mentioned upthread.\n>\n\nHearing no preference, I've created\n\nhttps://wiki.postgresql.org/wiki/Not_Worth_Doing\n\n...with links between the two. I've moved over the items I suggested\nupthread, minus the two where I heard feedback otherwise (prevent replay of\nquery cancel packets and improve WAL replay of CREATE TABLESPACE)\n\nI have patches for documenting some behavior we won't fix in [1][2].\n\nI was thinking of not having the next updates during commitfest, but it\ncould also be argued this is a type of review, and the things here will be\nreturned with feedback or rejected, in a way. Ultimately, it comes down to\n\"when time permits\".\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAFBsxsGsBZsG%3DcLM0Op5HFb2Ks6SzJrOc_eRO_jcKSNuqFRKnQ%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAFBsxsEmg=KQrEKxRLYGy0UjcFyck4vgxZKaLRwH_oLfj8OJnA@mail.gmail.com\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nI wrote:Ok, that's two votes for a separate page, and one for a new section on the same page, so it looks like it's a new page. That being the case, I would think it logical to move \"features we don't want\" there. As for the name, we should probably encompass both \"won't fix\" bugs and features not wanted. Maybe \"past development ideas\" or \"not worth doing\", but I'm open to better ideas. 
Once that's agreed upon, I'll make a new page and migrate the items over, minus the two that were mentioned upthread.Hearing no preference, I've createdhttps://wiki.postgresql.org/wiki/Not_Worth_Doing...with links between the two. I've moved over the items I suggested upthread, minus the two where I heard feedback otherwise (prevent replay of query cancel packets and improve WAL replay of CREATE TABLESPACE)I have patches for documenting some behavior we won't fix in [1][2].I was thinking of not having the next updates during commitfest, but it could also be argued this is a type of review, and the things here will be returned with feedback or rejected, in a way. Ultimately, it comes down to \"when time permits\".[1] https://www.postgresql.org/message-id/flat/CAFBsxsGsBZsG%3DcLM0Op5HFb2Ks6SzJrOc_eRO_jcKSNuqFRKnQ%40mail.gmail.com[2] https://www.postgresql.org/message-id/CAFBsxsEmg=KQrEKxRLYGy0UjcFyck4vgxZKaLRwH_oLfj8OJnA@mail.gmail.com-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 3 Nov 2020 14:06:13 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 02:06:13PM -0400, John Naylor wrote:\n> I was thinking of not having the next�updates�during commitfest, but it could\n> also be argued this is a type of review, and the things here will be returned\n> with feedback or rejected, in a way. Ultimately, it comes down to \"when time\n> permits\".\n\nI don't understand what this is referring to. Thanks for the rest of\nthe work.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 10 Nov 2020 18:08:48 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 7:08 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Nov 3, 2020 at 02:06:13PM -0400, John Naylor wrote:\n> > I was thinking of not having the next updates during commitfest, but it\n> could\n> > also be argued this is a type of review, and the things here will be\n> returned\n> > with feedback or rejected, in a way. Ultimately, it comes down to \"when\n> time\n> > permits\".\n>\n> I don't understand what this is referring to. Thanks for the rest of\n> the work.\n>\n\nThis was awkwardly phrased, but I was concerned future proposals for\nremoval would be easy to miss during commitfest. At this point, I'm\nthinking it isn't an issue.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Tue, Nov 10, 2020 at 7:08 PM Bruce Momjian <bruce@momjian.us> wrote:On Tue, Nov 3, 2020 at 02:06:13PM -0400, John Naylor wrote:\n> I was thinking of not having the next updates during commitfest, but it could\n> also be argued this is a type of review, and the things here will be returned\n> with feedback or rejected, in a way. Ultimately, it comes down to \"when time\n> permits\".\n\nI don't understand what this is referring to. Thanks for the rest of\nthe work.\nThis was awkwardly phrased, but I was concerned future proposals for removal would be easy to miss during commitfest. At this point, I'm thinking it isn't an issue.-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 11 Nov 2020 09:29:00 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "Here is the next section on data types, proposed to be moved to the \"not\nworth doing\" page. As before, if there are any objections, do speak up.\nI'll make the move in a few days.\n\n**Datatypes\n\n- Fix data types where equality comparison is not intuitive, e.g. box\n There is likely no way to do this without breaking applications. We\nalready have a big warning about this in the docs.\n\n- Add IMMUTABLE column attribute\n Controversial\n\n*Domains (this entire section would go)\n\n- Allow functions defined as casts to domains to be called during casting\n- Allow values to be cast to domain types\n- Make domains work better with polymorphic functions\n Old and difficult\n\n*Date/Time\n\n- Allow TIMESTAMP WITH TIME ZONE to store the original timezone information\n- Have timestamp subtraction not call justify_hours()\n Very old\n\n- Allow a comma to denote fractional seconds in ISO-8601-compliant times\n(and timestamps)\n Apparent lack of interest\n\n*Arrays\n\n- Add function to detect if an array is empty\n- Improve handling of NULLs in arrays\n Lack of interest\n\n*Money (this entire section would go)\n\n- Add locale-aware MONEY type, and support multiple currencies\n- MONEY dumps in a locale-specific format making it difficult to restore to\na system with a different locale\n The money datatype seems kind of obsolete anyway, and there doesn't seem\nto be demand to improve it.\n\n*Text Search\n\n- Allow dictionaries to change the token that is passed on to later\ndictionaries\n- Consider a function-based API for '@@' searches\n Very old\n\n- Improve text search error messages\n One of the gripes has been fixed already, in any case it's very old\n\n- tsearch and tsdicts regression tests fail in Turkish locale on glibc\n Bug report that refers to locale behavior from 2009\n\n- Improve handling of dash and plus signs in email address user names, and\nperhaps improve URL parsing\n Difficult\n\n*XML (difficult section -- plenty of bugs which should 
be fixed, but also\nold and low interest)\n\n- Allow XML arrays to be cast to other data types\n Very old\n\n- Allow reliable XML operation non-UTF8 server encodings\n Difficult\n\n- Move XSLT from contrib/xml2 to a more reasonable location\n Lack of consensus\n\n- Improve the XSLT parameter passing API\n Lack of consensus\n\n- xpath_table needs to be implemented/implementable to get rid of\ncontrib/xml2\n- xpath_table is pretty broken anyway\n Unclear path forward\n\n- better handling of XPath data types\n- Improve handling of PIs and DTDs in xmlconcat()\n Zero interest\n\n- Restructure XML and /contrib/xml2 functionality\n As discussed in the thread, it's an unrealistically large project\n\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nHere is the next section on data types, proposed to be moved to the \"not worth doing\" page. As before, if there are any objections, do speak up. I'll make the move in a few days.**Datatypes- Fix data types where equality comparison is not intuitive, e.g. box There is likely no way to do this without breaking applications. 
We already have a big warning about this in the docs.- Add IMMUTABLE column attribute Controversial*Domains (this entire section would go)- Allow functions defined as casts to domains to be called during casting- Allow values to be cast to domain types- Make domains work better with polymorphic functions Old and difficult*Date/Time- Allow TIMESTAMP WITH TIME ZONE to store the original timezone information- Have timestamp subtraction not call justify_hours() Very old- Allow a comma to denote fractional seconds in ISO-8601-compliant times (and timestamps) Apparent lack of interest*Arrays- Add function to detect if an array is empty- Improve handling of NULLs in arrays Lack of interest*Money (this entire section would go)- Add locale-aware MONEY type, and support multiple currencies- MONEY dumps in a locale-specific format making it difficult to restore to a system with a different locale The money datatype seems kind of obsolete anyway, and there doesn't seem to be demand to improve it.*Text Search- Allow dictionaries to change the token that is passed on to later dictionaries- Consider a function-based API for '@@' searches Very old- Improve text search error messages One of the gripes has been fixed already, in any case it's very old- tsearch and tsdicts regression tests fail in Turkish locale on glibc Bug report that refers to locale behavior from 2009- Improve handling of dash and plus signs in email address user names, and perhaps improve URL parsing Difficult*XML (difficult section -- plenty of bugs which should be fixed, but also old and low interest)- Allow XML arrays to be cast to other data types Very old- Allow reliable XML operation non-UTF8 server encodings Difficult- Move XSLT from contrib/xml2 to a more reasonable location Lack of consensus- Improve the XSLT parameter passing API Lack of consensus- xpath_table needs to be implemented/implementable to get rid of contrib/xml2- xpath_table is pretty broken anyway Unclear path forward- better handling of 
XPath data types- Improve handling of PIs and DTDs in xmlconcat() Zero interest- Restructure XML and /contrib/xml2 functionality As discussed in the thread, it's an unrealistically large project-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 11 Nov 2020 16:45:06 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 4:45 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> Here is the next section on data types, proposed to be moved to the \"not\n> worth doing\" page. As before, if there are any objections, do speak up.\n> I'll make the move in a few days.\n>\n\nHearing no objection, these have been moved over.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Nov 11, 2020 at 4:45 PM John Naylor <john.naylor@enterprisedb.com> wrote:Here is the next section on data types, proposed to be moved to the \"not worth doing\" page. As before, if there are any objections, do speak up. I'll make the move in a few days.Hearing no objection, these have been moved over.-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 16 Nov 2020 10:15:03 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "Here are the next couple of sections with items proposed to be moved to the\n\"not worth doing\" page. As before, if there are any objections, let me\nknow. I'll make the move in a few days.\n\nAlso, since 13 has been released, I'll change the explanation of Done items\nto \"will appear in the PostgreSQL 14 release.\" Also, since that wasn't\nupdated, it's not immediately clear to me which release the [D] marking for\n\"improve setting of visibility map bits for read-only and insert-only\nworkloads\" refers to. Does anyone know which commit that is?\n\n\n*Functions\n\n- Enforce typmod for function inputs, function results and parameters for\nspi_prepare'd statements called from PLs\n Lack of consensus\n\n- Reduce memory usage of aggregates in set returning functions\n The issue and proposed patch is likely no longer an important thing to\nimprove in this area nowadays.\n\n- Fix /contrib/ltree operator\n Bug from 2007 with zero followup\n\n- Fix /contrib/btree_gist's implementation of inet indexing\n Bug from 2010 and apparent lack of interest\n\n\n*Character Formatting (this entire section would be removed)\n\n- Allow to_date() and to_timestamp() to accept localized month names\n The following attempts to pick this from the TODO list in 2008 didn't go\nanywhere:\n\nhttps://www.postgresql.org/message-id/flat/010401c86788%246f1ddb60%240a01a8c0%40gevmus\n\nhttps://www.postgresql.org/message-id/flat/CA%2BheTbrDQ6b0Am_mk0dJEcwNxwQz%2Br%3Daz_%3DzHTva%2B5BDnfOKjA%40mail.gmail.com\n\n- Add missing parameter handling in to_char()\n Very old\n\n- Throw an error from to_char() instead of printing a string of \"#\" when a\nnumber doesn't fit in the desired output format.\n Lack of consensus\n\n- Fix to_number() handling for values not matching the format string\n Large amount of work for questionable benefit\n\n\n*Multi-Language Support\n\n- Add a cares-about-collation column to pg_proc, so that\nunresolved-collation errors can be thrown at parse time\n 
Proposed while listing open items during 9.1. Doesn't seem to justify the\namount of work it would take.\n\n- Add octet_length_server() and octet_length_client()\n- Make octet_length_client() the same as octet_length()?\n Very old\n\n- Improve UTF8 combined character handling?\n Too vague\n\n- Fix problems with wrong runtime encoding conversion for NLS message files\n What problems? There is no discussion thread.\n\n- More sensible support for Unicode combining characters, normal forms\n We have normalization as of PG13, so I propose to mark this Done rather\nthan move it.\n\n--\nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nHere are the next couple of sections with items proposed to be moved to the \"not worth doing\" page. As before, if there are any objections, let me know. I'll make the move in a few days. Also, since 13 has been released, I'll change the explanation of Done items to \"will appear in the PostgreSQL 14 release.\" Also, since that wasn't updated, it's not immediately clear to me which release the [D] marking for \"improve setting of visibility map bits for read-only and insert-only workloads\" refers to. 
Does anyone know which commit that is?*Functions- Enforce typmod for function inputs, function results and parameters for spi_prepare'd statements called from PLs Lack of consensus- Reduce memory usage of aggregates in set returning functions The issue and proposed patch is likely no longer an important thing to improve in this area nowadays.- Fix /contrib/ltree operator Bug from 2007 with zero followup- Fix /contrib/btree_gist's implementation of inet indexing Bug from 2010 and apparent lack of interest*Character Formatting (this entire section would be removed)- Allow to_date() and to_timestamp() to accept localized month names The following attempts to pick this from the TODO list in 2008 didn't go anywhere: https://www.postgresql.org/message-id/flat/010401c86788%246f1ddb60%240a01a8c0%40gevmus https://www.postgresql.org/message-id/flat/CA%2BheTbrDQ6b0Am_mk0dJEcwNxwQz%2Br%3Daz_%3DzHTva%2B5BDnfOKjA%40mail.gmail.com- Add missing parameter handling in to_char() Very old- Throw an error from to_char() instead of printing a string of \"#\" when a number doesn't fit in the desired output format. Lack of consensus- Fix to_number() handling for values not matching the format string Large amount of work for questionable benefit*Multi-Language Support- Add a cares-about-collation column to pg_proc, so that unresolved-collation errors can be thrown at parse time Proposed while listing open items during 9.1. Doesn't seem to justify the amount of work it would take.- Add octet_length_server() and octet_length_client()- Make octet_length_client() the same as octet_length()? Very old- Improve UTF8 combined character handling? Too vague- Fix problems with wrong runtime encoding conversion for NLS message files What problems? 
There is no discussion thread.- More sensible support for Unicode combining characters, normal forms We have normalization as of PG13, so I propose to mark this Done rather than move it.--John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 18 Nov 2020 14:26:46 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 02:26:46PM -0400, John Naylor wrote:\n> Here are the next couple of sections with items proposed to be moved to the\n> \"not worth doing\" page. As before, if there are any objections, let me know.\n> I'll make the move in a few days.�\n> \n> Also, since 13 has been released, I'll change the explanation of Done items to\n> \"will appear in the PostgreSQL 14 release.\" Also, since that wasn't updated,\n\nYes, please do that. I didn't go through this for PG 13 since you were\nalready working in this area.\n\n> it's not immediately clear to me which release the [D] marking for \"improve\n> setting of visibility map bits for read-only and insert-only workloads\" refers\n> to. Does anyone know which commit that is?\n\nYes, this was fixed in PG 13 with this commit:\n\n\tcommit b07642dbcd\n\tAuthor: David Rowley <drowley@postgresql.org>\n\tDate: Sat Mar 28 19:20:12 2020 +1300\n\t\n\t Trigger autovacuum based on number of INSERTs\n> - Fix problems with wrong runtime encoding conversion for NLS message files\n> � What problems? There is no discussion thread.\n\nI have no idea. I don't think I added this one since there is no URL,\nor it is very old.\n\n> - More sensible support for Unicode combining characters, normal forms\n> � We have normalization as of PG13, so I propose to mark this Done rather than\n> move it.\n\nAgreed. Also done items should be remove anyway since they were done in\nPG 13.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 18 Nov 2020 13:42:47 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> Here are the next couple of sections with items proposed to be moved to the\n> \"not worth doing\" page. As before, if there are any objections, let me\n> know. I'll make the move in a few days.\n\n> - Fix /contrib/ltree operator\n> Bug from 2007 with zero followup\n\nActually, I believe this was fixed by 70dc4c509; at least, the\ncase shown in the bug report now yields \"false\" as expected.\n\n> - Fix /contrib/btree_gist's implementation of inet indexing\n> Bug from 2010 and apparent lack of interest\n\nThis one's pretty clearly a bug. Lack of interest or no, we\nshould keep it around.\n\n> - Allow to_date() and to_timestamp() to accept localized month names\n\nThis is done too, see d67755049.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Nov 2020 14:05:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 2:42 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Nov 18, 2020 at 02:26:46PM -0400, John Naylor wrote:\n> > Here are the next couple of sections with items proposed to be moved to\n> the\n> > \"not worth doing\" page. As before, if there are any objections, let me\n> know.\n> > I'll make the move in a few days.\n> >\n> > Also, since 13 has been released, I'll change the explanation of Done\n> items to\n> > \"will appear in the PostgreSQL 14 release.\" Also, since that wasn't\n> updated,\n>\n> Yes, please do that. I didn't go through this for PG 13 since you were\n> already working in this area.\n>\n\nOK, I'll do that and remove items done during or before the PG13 cycle.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Nov 18, 2020 at 2:42 PM Bruce Momjian <bruce@momjian.us> wrote:On Wed, Nov 18, 2020 at 02:26:46PM -0400, John Naylor wrote:\n> Here are the next couple of sections with items proposed to be moved to the\n> \"not worth doing\" page. As before, if there are any objections, let me know.\n> I'll make the move in a few days. \n> \n> Also, since 13 has been released, I'll change the explanation of Done items to\n> \"will appear in the PostgreSQL 14 release.\" Also, since that wasn't updated,\n\nYes, please do that. I didn't go through this for PG 13 since you were\nalready working in this area.OK, I'll do that and remove items done during or before the PG13 cycle.-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 19 Nov 2020 10:53:23 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > Here are the next couple of sections with items proposed to be moved to\n> the\n> > \"not worth doing\" page. As before, if there are any objections, let me\n> > know. I'll make the move in a few days.\n>\n> > - Fix /contrib/ltree operator\n> > Bug from 2007 with zero followup\n>\n> Actually, I believe this was fixed by 70dc4c509; at least, the\n> case shown in the bug report now yields \"false\" as expected.\n>\n\nI'll remove this item.\n\n\n> > - Fix /contrib/btree_gist's implementation of inet indexing\n> > Bug from 2010 and apparent lack of interest\n>\n> This one's pretty clearly a bug. Lack of interest or no, we\n> should keep it around.\n>\n\nOkay.\n\n\n> > - Allow to_date() and to_timestamp() to accept localized month names\n>\n> This is done too, see d67755049.\n>\n\nI'll remove this too.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Nov 18, 2020 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:John Naylor <john.naylor@enterprisedb.com> writes:\n> Here are the next couple of sections with items proposed to be moved to the\n> \"not worth doing\" page. As before, if there are any objections, let me\n> know. I'll make the move in a few days.\n\n> - Fix /contrib/ltree operator\n> Bug from 2007 with zero followup\n\nActually, I believe this was fixed by 70dc4c509; at least, the\ncase shown in the bug report now yields \"false\" as expected.\nI'll remove this item. > - Fix /contrib/btree_gist's implementation of inet indexing\n> Bug from 2010 and apparent lack of interest\n\nThis one's pretty clearly a bug. Lack of interest or no, we\nshould keep it around.\nOkay. 
> - Allow to_date() and to_timestamp() to accept localized month names\n\nThis is done too, see d67755049.\nI'll remove this too.-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 19 Nov 2020 10:55:11 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "With the exception of \"Fix /contrib/btree_gist's implementation of inet\nindexing\", all items above have been either moved over, or removed if they\nwere done already by PG13.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nWith the exception of \"Fix /contrib/btree_gist's implementation of inet indexing\", all items above have been either moved over, or removed if they were done already by PG13.-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 23 Nov 2020 10:41:25 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 10:41:25AM -0400, John Naylor wrote:\n> With�the exception of \"Fix /contrib/btree_gist's implementation of inet\n> indexing\", all items above have been either moved over, or removed if they were\n> done already by PG13.\n\nThanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 23 Nov 2020 10:39:36 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "Hi,\n\nContinuing with TODO list maintenance, first a couple things to clean up:\n\n- Allow ALTER INDEX ... RENAME concurrently\n\nThis was in the wrong section, but it's irrelevant: The lock level was\nlowered in commit 1b5d797cd4f, so I went ahead and removed this already.\n\n- Add CORRESPONDING BY to UNION/INTERSECT/EXCEPT\n\nThe link titled \"how not to write this patch\" points to a web archive of\nthe author's description of how he implemented the rejected patch. That\ndoesn't seem useful, since it was...rejected. I propose to replace that\nwith the -hackers thread, where there is discussion of the design problem:\n\nhttps://www.postgresql.org/message-id/flat/CAJZSWkWN3YwQ01C3%2Bcq0%2BeyZ1DmK%3D69_6vryrsVGMC%3D%2BfWrSZA%40mail.gmail.com\n\nNow, for the proposed items to move to \"Not Worth Doing\". As before, let me\nknow of any objections. I plan to move these early next week:\n\n*Views and Rules\n\n- Allow VIEW/RULE recompilation when the underlying tables change\n\nThe entry itself says \"This is both difficult and controversial.\" and the\nlinked threads confirm that.\n\n- Make it possible to use RETURNING together with conditional DO INSTEAD\nrules, such as for partitioning setups\n\nThis was from before we got native partitioning, so the stated rationale is\noutdated.\n\n\n*SQL Commands (this is a huge section, for now just doing the miscellany at\nthe top before the various subsections)\n\n- Add a GUC variable to warn about non-standard SQL usage in queries\n\nI don't see the reason for this, and sounds difficult anyway.\n\n- Add NOVICE output level for helpful messages\n\nThis would only be useful if turned on, so is going to be least used where\nit might help the most. It also sounds like a lot of slow menial work to\nimplement.\n\n- Allow DISTINCT to work in multiple-argument aggregate calls\n\nTom suggested this in 2006 for the sake of orthogonality. 
Given the amount\nof time passed, it seems not very important.\n\n- Allow DELETE and UPDATE to be used with LIMIT and ORDER BY\n\nSome use cases mentioned, but nearly all have some kind of workaround\nalready.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nHi,Continuing with TODO list maintenance, first a couple things to clean up:- Allow ALTER INDEX ... RENAME concurrentlyThis was in the wrong section, but it's irrelevant: The lock level was lowered in commit 1b5d797cd4f, so I went ahead and removed this already.- Add CORRESPONDING BY to UNION/INTERSECT/EXCEPTThe link titled \"how not to write this patch\" points to a web archive of the author's description of how he implemented the rejected patch. That doesn't seem useful, since it was...rejected. I propose to replace that with the -hackers thread, where there is discussion of the design problem: https://www.postgresql.org/message-id/flat/CAJZSWkWN3YwQ01C3%2Bcq0%2BeyZ1DmK%3D69_6vryrsVGMC%3D%2BfWrSZA%40mail.gmail.comNow, for the proposed items to move to \"Not Worth Doing\". As before, let me know of any objections. I plan to move these early next week:*Views and Rules- Allow VIEW/RULE recompilation when the underlying tables changeThe entry itself says \"This is both difficult and controversial.\" and the linked threads confirm that.- Make it possible to use RETURNING together with conditional DO INSTEAD rules, such as for partitioning setupsThis was from before we got native partitioning, so the stated rationale is outdated.*SQL Commands (this is a huge section, for now just doing the miscellany at the top before the various subsections)- Add a GUC variable to warn about non-standard SQL usage in queriesI don't see the reason for this, and sounds difficult anyway.- Add NOVICE output level for helpful messagesThis would only be useful if turned on, so is going to be least used where it might help the most. 
It also sounds like a lot of slow menial work to implement.- Allow DISTINCT to work in multiple-argument aggregate callsTom suggested this in 2006 for the sake of orthogonality. Given the amount of time passed, it seems not very important.- Allow DELETE and UPDATE to be used with LIMIT and ORDER BYSome use cases mentioned, but nearly all have some kind of workaround already.-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 10 Dec 2020 15:29:07 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 3:29 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> *Views and Rules\n\n> *SQL Commands\n\nHearing no objections, the items mentioned have been moved over.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Dec 10, 2020 at 3:29 PM John Naylor <john.naylor@enterprisedb.com> wrote:>> *Views and Rules> *SQL CommandsHearing no objections, the items mentioned have been moved over.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 14 Dec 2020 14:12:34 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 03:29:07PM -0400, John Naylor wrote:\n> Hi,\n\nI agree with all of your analysis, but have some feedback;\n\n> Continuing with TODO list maintenance, first a couple things to clean up:\n> \n> - Allow ALTER INDEX ... RENAME concurrently\n> \n> This was in the wrong section, but it's irrelevant: The lock level was lowered\n> in commit 1b5d797cd4f, so I went ahead and removed this already.\n\nGood.\n> \n> - Add CORRESPONDING BY to UNION/INTERSECT/EXCEPT\n> \n> The link titled \"how not to write this patch\" points to a web archive of the\n> author's description of how he implemented the rejected patch. That doesn't\n> seem useful, since it was...rejected. I propose to replace that with the\n> -hackers thread, where there is discussion of the design problem:\n> � https://www.postgresql.org/message-id/flat/\n> CAJZSWkWN3YwQ01C3%2Bcq0%2BeyZ1DmK%3D69_6vryrsVGMC%3D%2BfWrSZA%40mail.gmail.com\n> \n> Now, for the proposed items to move to \"Not Worth Doing\". As before, let me\n> know of any objections. 
I plan to move these early next week:\n\nAgreed.\n\n> *Views and Rules\n> \n> - Allow VIEW/RULE recompilation when the underlying tables change\n> \n> The entry itself says \"This is both difficult and controversial.\" and the\n> linked threads confirm that.\n\nYes, probably shouldn't be an item.\n> \n> - Make it possible to use RETURNING together with conditional DO INSTEAD rules,\n> such as for partitioning setups\n> \n> This was from before we got native partitioning, so the stated rationale is\n> outdated.\n\nI don't think we need that anymore.\n\n> *SQL Commands (this is a huge section, for now just doing the miscellany at the\n> top before the various subsections)\n> \n> - Add a GUC variable to warn about non-standard SQL usage in queries\n> \n> I don't see the reason for this, and sounds difficult anyway.\n\nIt is hard.\n\n> - Add NOVICE output level for helpful messages\n> \n> This would only be useful if turned on, so is going to be least used where it\n> might help the most. It also sounds like a lot of slow menial work to\n> implement.\n\nIt is menial work, but I thought it might inspire someone to do it. \nRemoval at this point seems fine.\n\n> - Allow DISTINCT to work in multiple-argument aggregate calls\n> \n> Tom suggested this in 2006 for the sake of orthogonality. Given the amount of\n> time passed, it seems not very important.\n\nYes.\n\n> - Allow DELETE and UPDATE to be used with LIMIT and ORDER BY\n> \n> Some use cases mentioned, but nearly all have some kind of workaround already.\n\nAgreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 17 Dec 2020 09:58:58 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "Hi,\n\nI let this drop off my radar a few months ago, but I'm going to try to get\nback into the habit of looking at a few items a week. As before, let me\nknow in the next few days if anyone has thoughts or objections.\n\n(Optimizer / Executor)\n\n- Consider increasing the default values of from_collapse_limit,\njoin_collapse_limit, and/or geqo_threshold\n\nhttp://archives.postgresql.org/message-id/4136ffa0905210551u22eeb31bn5655dbe7c9a3aed5@mail.gmail.com\n\nThis seems to have been rejected.\n\n- Improve use of expression indexes for ORDER BY\n\nhttp://archives.postgresql.org/pgsql-hackers/2009-08/msg01553.php\n\nSkimming the thread, I'm not quite sure if index-only scans (not available\nat the time) solves the problem, or is orthogonal to it?\n\n- Modify the planner to better estimate caching effects\n\nhttp://archives.postgresql.org/pgsql-performance/2010-11/msg00117.php\n\nHuge discussion. This sounds like a research project, and maybe a risky one.\n\n- Allow shared buffer cache contents to affect index cost computations\n\nhttp://archives.postgresql.org/pgsql-hackers/2011-06/msg01140.php\n\nRelated to the above, but has a more specific approach in mind. The\ndiscussion thread is not useful for getting one's head around how to think\nabout the problem, much less to decide if it's worth working on -- it's\nmostly complaining about the review process. Independent of that, the idea\nof inspecting the buffer cache seems impractical.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 19 May 2021 13:52:03 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, May 19, 2021 at 01:52:03PM -0400, John Naylor wrote:\n> Related to the above, but has a more specific approach in mind. The discussion\n> thread is not useful for getting one's head around how to think about the\n> problem, much less to decide if it's worth working on -- it's mostly\n> complaining about the review process. Independent of that, the idea of\n> inspecting the buffer cache seems impractical.\n\nYes, I think you are right about all of these.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 20 May 2021 14:38:55 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "Here I'm just reviewing a couple items in the Sorting section:\n\nhttps://wiki.postgresql.org/wiki/Todo#Sorting\n\n- Consider whether duplicate keys should be sorted by block/offset\n\nhttps://www.postgresql.org/message-id/flat/23321.1205726381%40sss.pgh.pa.us\n\nIt's moot since we started requiring tid as a tiebreaker for btree indexes,\nso I'll just remove this.\n\n- Consider being smarter about memory and external files used during sorts\n\nhttps://www.postgresql.org/message-id/1196186626.4246.1078.camel@ebony.site\n\nWe've had a large number of improvements in this area since 2007, and it\nseems the ideas in this thread have mostly been done or superseded already.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 28 Jun 2021 17:41:50 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Mon, Jun 28, 2021 at 05:41:50PM -0400, John Naylor wrote:\n> Here I'm just reviewing a couple items in the Sorting section:\n> \n> https://wiki.postgresql.org/wiki/Todo#Sorting\n> \n> - Consider whether duplicate keys should be sorted by block/offset\n> \n> https://www.postgresql.org/message-id/flat/23321.1205726381%40sss.pgh.pa.us\n> \n> It's moot since we started requiring tid as a tiebreaker for btree indexes, so\n> I'll just remove this.\n> \n> - Consider being smarter about memory and external files used during sorts\n> \n> https://www.postgresql.org/message-id/1196186626.4246.1078.camel@ebony.site\n> \n> We've had a large number of improvements in this area since 2007, and it seems\n> the ideas in this thread have mostly been done or superseded already. \n\nAgreed. Please remove them or I can do it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 1 Jul 2021 21:23:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "Spotted in the \"Hashing\" section:\n\n\"Use \"lazy\" hash tables to look up only the tuples that are actually requested\"\n\nDavid Rowley has knocked that one off. He called it Result Cache.\n\n\n",
"msg_date": "Fri, 2 Jul 2021 14:18:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 10:19 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Spotted in the \"Hashing\" section:\n>\n> \"Use \"lazy\" hash tables to look up only the tuples that are actually\nrequested\"\n>\n> David Rowley has knocked that one off. He called it Result Cache.\n\nThanks, I'll take care of that one also.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 1 Jul 2021 22:33:46 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 9:23 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> Agreed. Please remove them or I can do it.\n\nDone, and also changed next release to \"15\".\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 3 Jul 2021 13:06:59 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "It's been a while, but here are a few more suggested\nremovals/edits/additions to the TODO list. Any objections or new\ninformation, let me know:\n\n- Auto-fill the free space map by scanning the buffer cache or by\nchecking pages written by the background writer\nhttp://archives.postgresql.org/pgsql-hackers/2006-02/msg01125.php\nhttps://www.postgresql.org/message-id/200603011716.16984.peter_e@gmx.net\n\nBoth these threads are from 2006, so have nothing to do with the current FSM.\n\n- Allow concurrent inserts to use recently created pages rather than\ncreating new ones\nhttp://archives.postgresql.org/pgsql-hackers/2010-05/msg00853.php\n\nSkimming the first few messages, I believe this has been covered by\ncommit 719c84c1b? (Extend relations multiple blocks at a time to\nimprove scalability.)\n\n- Allow VACUUM FULL and CLUSTER to update the visibility map\n\nThis topic has a current CF entry which seems to have stalled, so that\nnewer thread would be better to list here than the one from 2013.\n\n- Bias FSM towards returning free space near the beginning of the heap\nfile, in hopes that empty pages at the end can be truncated by VACUUM\nhttp://archives.postgresql.org/pgsql-hackers/2009-09/msg01124.php\nhttps://www.postgresql.org/message-id/20150424190403.GP4369@alvh.no-ip.org\n\nI'm not sure what to think of this, but independently of that, the\nsecond thread is actually talking about bringing back something like\nthe pre-9.0 vacuum full, so maybe it should be its own entry?\n\n- Consider a more compact data representation for dead tuple locations\nwithin VACUUM\nhttp://archives.postgresql.org/pgsql-patches/2007-05/msg00143.php\n\nGreat, but let's link to this more recent thread instead:\nhttps://www.postgresql.org/message-id/flat/CAD21AoAfOZvmfR0j8VmZorZjL7RhTiQdVttNuC4W-Shdc2a-AA%40mail.gmail.com\n\n- Improve autovacuum 
tuning\nhttp://www.postgresql.org/message-id/5078AD6B.8060802@agliodbs.com\nhttp://www.postgresql.org/message-id/20130124215715.GE4528@alvh.no-ip.org\n\nI'm kind of on the fence about these. The title is way too broad, and\nI doubt we are going to forget to keep improving this area.\n\nIt seems the first thread is really about auto-analyze thresholds, so\nmaybe it should be in a separate entry if we want to do anything\nmentioned in the thread?\n\nThe second thread is really about autovacuum launcher scheduling.\nProbably still relevant, but the thread is very long and doesn't seem\nterribly helpful to someone trying to get up to speed on the issues\nthat are still relevant. I don't see any more recent discussion,\neither. Thoughts?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Dec 2021 13:40:31 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Thu, Dec 9, 2021 at 2:40 AM John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> - Improve autovacuum tuning\n> http://www.postgresql.org/message-id/5078AD6B.8060802@agliodbs.com\n> http://www.postgresql.org/message-id/20130124215715.GE4528@alvh.no-ip.org\n>\n> I'm kind of on the fence about these. The title is way too broad, and\n> I doubt we are going to forget to keep improving this area.\n>\n> It seems the first thread is really about auto-analyze thresholds, so\n> maybe it should be in a separate entry if we want to do anything\n> mentioned in the thread?\n>\n> The second thread is really about autovacuum launcher scheduling.\n> Probably still relevant, but the thread is very long and doesn't seem\n> terribly helpful to someone trying to get up to speed on the issues\n> that are still relevant. I don't see any more recent discussion,\n> either. Thoughts?\n\nThere is another discussion on autovacuum scheduling in 2018 here:\n\nhttps://www.postgresql.org/message-id/0A3221C70F24FB45833433255569204D1F8A4DC6%40G01JPEXMBYT05\n\nSome algorithms were proposed there and I implemented a PoC patch:\n\nhttps://www.postgresql.org/message-id/CAD21AoBUaSRBypA6pd9ZD%3DU-2TJCHtbyZRmrS91Nq0eVQ0B3BA%40mail.gmail.com\n\nBut not completed.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 9 Dec 2021 09:11:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Dec 8, 2021 at 1:40 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> It's been a while, but here are a few more suggested\n> removals/edits/additions to the TODO list. Any objections or new\n> information, let me know:\n>\n> - Auto-fill the free space map by scanning the buffer cache or by\n> checking pages written by the background writer\n> http://archives.postgresql.org/pgsql-hackers/2006-02/msg01125.php\n> https://www.postgresql.org/message-id/200603011716.16984.peter_e@gmx.net\n>\n> Both these threads are from 2006, so have nothing to do with the current FSM.\n\nMoved to the Not Worth Doing list.\n\n> - Allow concurrent inserts to use recently created pages rather than\n> creating new ones\n> http://archives.postgresql.org/pgsql-hackers/2010-05/msg00853.php\n>\n> Skimming the first few messages, I believe this has been covered by\n> commit 719c84c1b? (Extend relations multiple blocks at a time to\n> improve scalability.)\n\nRemoved.\n\n> - Allow VACUUM FULL and CLUSTER to update the visibility map\n>\n> This topic has a current CF entry which seems to have stalled, so that\n> newer thread would be better to list here than the one from 2013.\n\nAdded.\n\n> - Bias FSM towards returning free space near the beginning of the heap\n> file, in hopes that empty pages at the end can be truncated by VACUUM\n> http://archives.postgresql.org/pgsql-hackers/2009-09/msg01124.php\n> https://www.postgresql.org/message-id/20150424190403.GP4369@alvh.no-ip.org\n>\n> I'm not sure what to think of this, but independently of that, the\n> second thread is actually talking about bringing back something like\n> the pre-9.0 vacuum full, so maybe it should be its own entry?\n\nDone.\n\n> - Consider a more compact data representation for dead tuple locations\n> within VACUUM\n> http://archives.postgresql.org/pgsql-patches/2007-05/msg00143.php\n>\n> Great, but let's link to this more recent thread instead:\n> 
https://www.postgresql.org/message-id/flat/CAD21AoAfOZvmfR0j8VmZorZjL7RhTiQdVttNuC4W-Shdc2a-AA%40mail.gmail.com\n\nDone.\n\n> > The second thread is really about autovacuum launcher scheduling.\n> > Probably still relevant, but the thread is very long and doesn't seem\n> > terribly helpful to someone trying to get up to speed on the issues\n> > that are still relevant. I don't see any more recent discussion,\n> > either. Thoughts?\n\nSplit into two entries.\n\nOn Wed, Dec 8, 2021 at 8:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> There is another discussion on autovacuum scheduling in 2018 here:\n>\n> https://www.postgresql.org/message-id/0A3221C70F24FB45833433255569204D1F8A4DC6%40G01JPEXMBYT05\n>\n> Some algorithms were proposed there and I implemented a PoC patch:\n>\n> https://www.postgresql.org/message-id/CAD21AoBUaSRBypA6pd9ZD%3DU-2TJCHtbyZRmrS91Nq0eVQ0B3BA%40mail.gmail.com\n\nAdded, thanks!\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Dec 2021 10:58:56 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "So, I had intended to spend some time on this at least three times a year.\nI've clearly failed at that, but now is as good a time as any to pick it\nback up again.\n\nOver in [1], Tom opined:\n\n> John Naylor <john.naylor@enterprisedb.com> writes:\n>\n> > \"WARNING for Developers: Unfortunately this list does not contain all\nthe\n> > information necessary for someone to start coding a feature. Some of\nthese\n> > items might have become unnecessary since they were added --- others\nmight\n> > be desirable but the implementation might be unclear. When selecting\nitems\n> > listed below, be prepared to first discuss the value of the feature. Do\nnot\n> > assume that you can select one, code it and then expect it to be\ncommitted.\n> > \"\n>\n> I think we could make that even stronger: there's basically nothing on\n> the TODO list that isn't problematic in some way. Otherwise it would\n> have been done already. The entries involve large amounts of work,\n> or things that are subtler than they might appear, or cases where the\n> desirable semantics aren't clear, or tradeoffs that there's not\n> consensus about, or combinations of those.\n>\n> IME it's typically a lot more productive to approach things via\n> \"scratch your own itch\". If a problem is biting you directly, then\n> at least you have some clear idea of what it is that needs to be fixed.\n> You might have to work up to an understanding of how to fix it, but\n> you have a clear goal.\n\nI've come up with some revised language, including s/15/16/ and removing\nthe category of \"[E]\" (easier to implement), since it wouldn't be here if\nit were actually easy:\n\n--\nWARNING for Developers: This list contains some known PostgreSQL bugs, some\nfeature requests, and some things we are not even sure we want. This is not\nmeant to be a resource for beginning developers to get ideas for things to\nwork on. 
<WIP: maybe direct them to commitfest?>\n\nAll of these items are hard, and some are perhaps impossible. Some of these\nitems might have become unnecessary since they were added. Others might be\ndesirable but:\n\n- a large amount of work is required\n- the problems are subtler than they might appear\n- the desirable semantics aren't clear\n- there are tradeoffs that there's not consensus about\n- some combinations of the above\n\nIf you really need a feature that is listed below, it will be worth reading\nthe linked email thread if there is one, since it will often show the\ndifficulties, or perhaps contain previous failed attempts to get a patch\ncommitted. If after that you still want to work on it, be prepared to first\ndiscuss the value of the feature. Do not assume that you can start coding\nand expect it to be committed. Always discuss design on the Hackers list\nbefore starting to code.\n\nOver time, it may become clear that a TODO item has become outdated or\notherwise determined to be either too controversial or not worth the\ndevelopment effort. Such items should be retired to the Not Worth Doing\npage.\n\n[D] marks changes that are done, and will appear in the PostgreSQL 16\nrelease.\n--\n\nWe could also revise the developer FAQ:\n- remove phrase \"Outstanding features are detailed in Todo.\"\n- add suggestion to check the Todo or Not_worth_doing pages to see if\nthe desired feature is undesirable or problematic\n- rephrase \"Working in isolation is not advisable because others might be\nworking on the same TODO item, or you might have misunderstood the TODO\nitem.\" so it doesn't mention 'TODO' at all.\n\n[1] https://www.postgresql.org/message-id/415636.1673411259%40sss.pgh.pa.us\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Jan 2023 14:09:56 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "I wrote:\n\n> We could also revise the developer FAQ:\n> - remove phrase \"Outstanding features are detailed in Todo.\"\n> - add suggestion to check the Todo or Not_worth_doing pages to see if\nthe desired feature is undesirable or problematic\n> - rephrase \"Working in isolation is not advisable because others might be\nworking on the same TODO item, or you might have misunderstood the TODO\nitem.\" so it doesn't mention 'TODO' at all.\n\nThere is also\n\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F#TODOs\n\nChanging the description of what links to the todo list will probably do\nmore to reduce confusion than language in the todo list itself.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 16 Jan 2023 17:17:23 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 02:09:56PM +0700, John Naylor wrote:\n> I've come up with some revised language, including s/15/16/ and removing the\n> category of \"[E]\" (easier to implement), since it wouldn't be here if it were\n> actually easy:\n\nI think it is still possible for a simple item to be identified as\nwanted and easy, but not completed and put on the TODO list.\n\n> WARNING for Developers: This list contains some known PostgreSQL bugs, some\n> feature requests, and some things we are not even sure we want. This is not\n> meant to be a resource for beginning developers to get ideas for things to work\n> on. <WIP: maybe direct them to commitfest?>\n> \n> All of these items are hard, and some are perhaps impossible. Some of these\n> items might have become unnecessary since they were added. Others might be\n> desirable but:\n> \n> - a large amount work is required\n> - the problems are subtler than they might appear\n> - the desirable semantics aren't clear\n> - there are tradeoffs that there's not consensus about\n> - some combinations of the above\n> \n> If you really need a feature that is listed below, it will be worth reading the\n> linked email thread if there is one, since it will often show the difficulties,\n> or perhaps contain previous failed attempts to get a patch committed. If after\n> that you still want to work on it, be prepared to first discuss the value of\n> the feature. Do not assume that you can start coding and expect it to be\n> committed. Always discuss design on the Hackers list before starting to code.\n> \n> Over time, it may become clear that a TODO item has become outdated or\n> otherwise determined to be either too controversial or not worth the\n> development effort. Such items should be retired to the Not Worth Doing page.\n> \n> [D] marks changes that are done, and will appear in the PostgreSQL 16 release.\n\nI think we risk overloading people with too many words above, and they\nwill not read it fully. 
Can it be simplified? I wonder if some of this\nbelongs in the developer's FAQ and linked to that from the TODO list.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Tue, 17 Jan 2023 15:13:04 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 05:17:23PM +0700, John Naylor wrote:\n> \n> I wrote:\n> \n> > We could also revise the developer FAQ:\n> > - remove phrase \"Outstanding features are detailed in Todo.\"\n> > - add suggestion to to check the Todo or Not_worth_doing pages to see if the\n> desired feature is undesirable or problematic\n> > - rephrase \"Working in isolation is not advisable because others might be\n> working on the same TODO item, or you might have misunderstood the TODO item.\"\n> so it doesn't mention 'TODO' at all.\n> \n> There is also\n> \n> https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F#TODOs\n> \n> Changing the description of what links to the todo list will probably do more\n> to reduce confusion than language in the todo list itself.\n\nAgreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Tue, 17 Jan 2023 15:13:16 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 3:13 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> I think we risk overloading people with too many words above, and they\n> will not read it fully. Can it be simplified? I wonder if some of this\n> belows in the developer's FAQ and linked to that from the TODO list.\n\nI think you're right. That further drives home what I mentioned a few days\nago: Maybe we don't need to change much in the actual list itself, but\nrephrase references to it from elsewhere. Here's a first draft to see what\nthat would look like:\n--\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F#TODOs\n\nfrom:\n\"PostgreSQL maintains a TODO list on our wiki:\nhttp://wiki.postgresql.org/wiki/TODO\n\nWe have attempted to organize issues and link in relevant discussions from\nour mailing list archives. Please read background information if it is\navailable before attempting to resolve a TODO. If no background information\nis available, it is appropriate to post a question to\npgsql-hackers@postgresql.org and request additional information and inquire\nabout the status of any ongoing work on the problem.\"\n\nto:\n\"It's worth checking if the feature of interest is found in the TODO list\non our wiki: http://wiki.postgresql.org/wiki/TODO. That list contains some\nknown PostgreSQL bugs, some feature requests, and some things we are not\neven sure we want. Many entries have a link to an email thread containing\nprior discussion, or perhaps attempts that for whatever reason didn't make\nit as far as getting committed.\"\n\n...which might make more sense if moved below the \"brand new features\"\nsection.\n\n--\nhttps://wiki.postgresql.org/wiki/Developer_FAQ\n\n1)\nfrom:\n\"What areas need work?\nOutstanding features are detailed in Todo.\n\nYou can learn more about these features by consulting the archives, the SQL\nstandards and the recommended texts (see books for developers).\"\n\nto:\n??? 
-> For \"what areas need work?\", we need to have a different answer, but\nI'm not sure what it is.\n\n2)\nfrom:\n\"What do I do after choosing an item to work on?\n\nSend an email to pgsql-hackers with a proposal for what you want to do\n(assuming your contribution is not trivial). Working in isolation is not\nadvisable because others might be working on the same TODO item, or you\nmight have misunderstood the TODO item. In the email, discuss both the\ninternal implementation method you plan to use, and any user-visible\nchanges (new syntax, etc).\"\n\nto:\n\"What do I do after choosing an area to work on?\n\nSend an email to pgsql-hackers with a proposal for what you want to do\n(assuming your contribution is not trivial). Working in isolation is not\nadvisable because experience has shown that there are often requirements\nthat are not obvious, and if those are not agreed on beforehand it leads to\nwasted effort. In the email, discuss both the internal implementation\nmethod you plan to use, and any user-visible changes (new syntax, etc).\"\n\n> On Wed, Jan 11, 2023 at 02:09:56PM +0700, John Naylor wrote:\n> > I've come up with some revised language, including s/15/16/ and\nremoving the\n> > category of \"[E]\" (easier to implement), since it wouldn't be here if\nit were\n> > actually easy:\n>\n> I think it is still possible for a simple item to be identified as\n> wanted and easy, but not completed and put on the TODO list.\n\nTheoretically it's possible, but in practice no one puts any easy items\nhere. Currently, there are three marked as easy:\n\npg_dump:\n- Dump security labels and comments on databases in a way that allows to\nload a dump into a differently named database\n- Add full object name to the tag field. eg. 
for operators we need\n'=(integer, integer)', instead of just '='.\n\nDump and restore is critical to get right, and the code base is pretty\nlarge and hairy, so I don't actually believe for a second that these items\nare easy.\n\nECPG:\n- sqlwarn[6] should be 'W' if the PRECISION or SCALE value specified\n\nIn one sentence there are four uses of jargon that only someone with\nexperience would understand. I have hacked around ECPG multiple times and\nhave no idea what this means.\n\nThe last two also don't have any motivation spelled out, much less an email\nthread.\n\nSo my inclination is, the [E] marker here is unjustified.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 24 Jan 2023 10:46:34 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 10:46:34AM +0700, John Naylor wrote:\n> \n> On Wed, Jan 18, 2023 at 3:13 AM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> > I think we risk overloading people with too many words above, and they\n> > will not read it fully. Can it be simplified? I wonder if some of this\n> > belows in the developer's FAQ and linked to that from the TODO list.\n> \n> I think you're right. That further drives home what I mentioned a few days ago:\n> Maybe we don't need to change much in the actual list itself, but rephrase\n> references to it from elsewhere. Here's a first draft to see what that would\n\nThe TODO list started as a way to record items that were either bugs or\nfeatures we couldn't yet implement. At this point, it is more features\nwe are either not sure we want or don't know how to implement. This\nshould be clearly specified wherever the TODO list is referenced. I\nthink once this is done it should be clear how users and developers\nshould view that list. I think the top of the TODO list makes that\nclear:\n\n\tThis list contains some known PostgreSQL bugs, some feature\n\trequests, and some things we are not even sure we want. Many of\n\tthese items are hard, and some are perhaps impossible. If you\n\twould like to work on an item, please read the Developer FAQ\n\tfirst. There is also a development information page.\n\nThe rest of the text appears clear to me too.\n\n> look like:\n> --\n> https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F#TODOs\n> \n> from:\n> \"PostgreSQL maintains a TODO list on our wiki: http://wiki.postgresql.org/wiki/\n> TODO\n> \n> We have attempted to organize issues and link in relevant discussions from our\n> mailing list archives. Please read background information if it is available\n> before attempting to resolve a TODO. 
If no background information is available,\n> it is appropriate to post a question to pgsql-hackers@postgresql.org and\n> request additional information and inquire about the status of any ongoing work\n> on the problem.\"\n> \n> to:\n> \"It's worth checking if the feature of interest is found in the TODO list on\n> our wiki: http://wiki.postgresql.org/wiki/TODO. That list contains some known\n> PostgreSQL bugs, some feature requests, and some things we are not even sure we\n> want. Many entries have a link to an email thread containing prior discussion,\n> or perhaps attempts that for whatever reason didn't make it as far as getting\n> committed.\"\n> \n> ...which might make more sense if moved below the \"brand new features\" section.\n\nI think we just point them at the TODO list and they will read the top\nof the list first, no? I think you are right that we updated the top of\nthe TODO but didn't update the places that link to it. I am thinking we\nshould just trim down the text linking to it and let the top of the TODO\nlist do its job.\n\n> --\n> https://wiki.postgresql.org/wiki/Developer_FAQ\n> \n> 1)\n> from:\n> \"What areas need work?\n> Outstanding features are detailed in Todo.\n> \n> You can learn more about these features by consulting the archives, the SQL\n> standards and the recommended texts (see books for developers).\"\n> \n> to:\n> ??? -> For \"what areas need work?\", we need to have a different answer, but I'm\n> not sure what it is.\n\nWow, I would not send a new person to the SQL standard docs. ;-) I am\nthinking we just don't have a good answer to this so let's say less.\n\n> 2)\n> from:\n> \"What do I do after choosing an item to work on?\n> \n> Send an email to pgsql-hackers with a proposal for what you want to do\n> (assuming your contribution is not trivial). Working in isolation is not\n> advisable because others might be working on the same TODO item, or you might\n> have misunderstood the TODO item. 
In the email, discuss both the internal\n> implementation method you plan to use, and any user-visible changes (new\n> syntax, etc).\"\n> \n> to:\n> \"What do I do after choosing an area to work on?\n> \n> Send an email to pgsql-hackers with a proposal for what you want to do\n> (assuming your contribution is not trivial). Working in isolation is not\n\nCan new people identify trivial?\n\n> advisable because experience has shown that there are often requirements that\n> are not obvious, and if those are not agreed on beforehand it leads to wasted\n> effort. In the email, discuss both the internal implementation method you plan\n> to use, and any user-visible changes (new syntax, etc).\"\n> \n> > On Wed, Jan 11, 2023 at 02:09:56PM +0700, John Naylor wrote:\n> > > I've come up with some revised language, including s/15/16/ and removing\n> the\n> > > category of \"[E]\" (easier to implement), since it wouldn't be here if it\n> were\n> > > actually easy:\n> >\n> > I think it is still possible for a simple item to be identified as\n> > wanted and easy, but not completed and put on the TODO list.\n> \n> Theoretically it's possible, but in practice no one puts any easy items here.\n> Currently, there are three marked as easy:\n> \n> pg_dump:\n> - Dump security labels and comments on databases in a way that allows to load a\n> dump into a differently named database\n> - Add full object name to the tag field. eg. for operators we need '=(integer,\n> integer)', instead of just '='.\n> \n> Dump and restore is critical to get right, and the code base is pretty large\n> and hairy, so I don't actually believe for a second that these items are easy.\n> \n> ECPG:\n> - sqlwarn[6] should be 'W' if the PRECISION or SCALE value specified\n> \n> In one sentence there are four uses of jargon that only someone with experience\n> would understand. 
I have hacked around ECPG multiple times and have no idea\n> what this means.\n> \n> The last two also don't have any motivation spelled out, much less an email\n> thread.\n> \n> So my inclination is, the [E] marker here is unjustified.\n\nI can now see that just removing the [E] label totally is the right\nanswer. Yes, there might be an easy item on there, but the fact we have\nthree labeled and they are not easy makes me think [E] is causing more\nproblems than it solves.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 11:57:26 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 11:57 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Jan 24, 2023 at 10:46:34AM +0700, John Naylor wrote:\n> >\n\n> > https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F#TODOs\n\n> > to:\n> > \"It's worth checking if the feature of interest is found in the TODO\nlist on\n> > our wiki: http://wiki.postgresql.org/wiki/TODO. That list contains some\nknown\n> > PostgreSQL bugs, some feature requests, and some things we are not even\nsure we\n> > want. Many entries have a link to an email thread containing prior\ndiscussion,\n> > or perhaps attempts that for whatever reason didn't make it as far as\ngetting\n> > committed.\"\n> >\n> > ...which might make more sense if moved below the \"brand new features\"\nsection.\n>\n> I think we just point them at the TODO list and they will read the top\n> of the list first, no? I think you are right that we updated the top of\n> the TODO but didn't update the places that link to it. I am thinking we\n> should just trim down the text linking to it and let the top of the TODO\n> list do its job.\n\nOkay. How about:\n\n\"It's worth checking if the feature of interest is found in the TODO list\non our wiki: http://wiki.postgresql.org/wiki/TODO. The entries there often\nhave additional information about the feature and may point to reasons why\nit hasn't been implemented yet.\"\n\n> > --\n> > https://wiki.postgresql.org/wiki/Developer_FAQ\n> >\n> > 1)\n> > from:\n> > \"What areas need work?\n> > Outstanding features are detailed in Todo.\n> >\n> > You can learn more about these features by consulting the archives, the\nSQL\n> > standards and the recommended texts (see books for developers).\"\n> >\n> > to:\n> > ??? -> For \"what areas need work?\", we need to have a different answer,\nbut I'm\n> > not sure what it is.\n>\n> Wow, I would not send a new person to the SQL standard docs. 
;-) I am\n> thinking we just don't have a good answer to this so let's say less.\n\nDo I understand right that we could just remove this entire section \"What\nareas need work?\"?\n\n> > 2)\n> > from:\n> > \"What do I do after choosing an item to work on?\n> >\n> > Send an email to pgsql-hackers with a proposal for what you want to do\n> > (assuming your contribution is not trivial). Working in isolation is not\n> > advisable because others might be working on the same TODO item, or you\nmight\n> > have misunderstood the TODO item. In the email, discuss both the\ninternal\n> > implementation method you plan to use, and any user-visible changes (new\n> > syntax, etc).\"\n> >\n> > to:\n> > \"What do I do after choosing an area to work on?\n> >\n> > Send an email to pgsql-hackers with a proposal for what you want to do\n> > (assuming your contribution is not trivial). Working in isolation is not\n>\n> Can new people identify trivial?\n\nI'd say they have some idea about that, since we do regularly get typo\nfixes and doc clarifications. Sure there is some grey area, but I don't\nthink the dividing point is important. The important thing is, we also\nsometimes get large and invasive patches without design discussion, which\nwe want to discourage.\n\n> I can now see that just removing the [E] label totally is the right\n> answer. Yes, there might be an easy item on there, but the fact we have\n> three labeled and they are not easy makes me thing [E] is causing more\n> problems than it solves.\n\nOkay, having heard no objections I'll remove it.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 30 Jan 2023 13:13:45 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 01:13:45PM +0700, John Naylor wrote:\n> On Tue, Jan 24, 2023 at 11:57 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I think we just point them at the TODO list and they will read the top\n> > of the list first, no? I think you are right that we updated the top of\n> > the TODO but didn't update the places that link to it. I am thinking we\n> > should just trim down the text linking to it and let the top of the TODO\n> > list do its job.\n> \n> Okay. How about:\n> \n> \"It's worth checking if the feature of interest is found in the TODO list on\n> our wiki: http://wiki.postgresql.org/wiki/TODO. The entries there often have\n> additional information about the feature and may point to reasons why it hasn't\n> been implemented yet.\"\n\nGood.\n\n> > Wow, I would not send a new person to the SQL standard docs. ;-) I am\n> > thinking we just don't have a good answer to this so let's say less.\n> \n> Do I understand right that we could just remove this entire section \"What areas\n> need work?\"?\n\nYes, I think so.\n\n> > > 2)\n> > > from:\n> > > \"What do I do after choosing an item to work on?\n> > >\n> > > Send an email to pgsql-hackers with a proposal for what you want to do\n> > > (assuming your contribution is not trivial). Working in isolation is not\n> > > advisable because others might be working on the same TODO item, or you\n> might\n> > > have misunderstood the TODO item. In the email, discuss both the internal\n> > > implementation method you plan to use, and any user-visible changes (new\n> > > syntax, etc).\"\n> > >\n> > > to:\n> > > \"What do I do after choosing an area to work on?\n> > >\n> > > Send an email to pgsql-hackers with a proposal for what you want to do\n> > > (assuming your contribution is not trivial). Working in isolation is not\n> >\n> > Can new people identify trivial?\n> \n> I'd say they have some idea about that, since we do regularly get typo fixes\n> and doc clarifications. 
Sure there is some grey area, but I don't think the\n> dividing point is important. The important thing is, we also sometimes get\n> large and invasive patches without design discussion, which we want to\n> discourage.\n\nAgreed.\n\n> > I can now see that just removing the [E] label totally is the right\n> > answer. Yes, there might be an easy item on there, but the fact we have\n> > three labeled and they are not easy makes me thing [E] is causing more\n> > problems than it solves.\n> \n> Okay, having heard no objections I'll remove it.\n\nThanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Mon, 30 Jan 2023 10:07:07 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 10:07 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Jan 30, 2023 at 01:13:45PM +0700, John Naylor wrote:\n\n> > \"It's worth checking if the feature of interest is found in the TODO\nlist on\n> > our wiki: http://wiki.postgresql.org/wiki/TODO. The entries there often\nhave\n> > additional information about the feature and may point to reasons why\nit hasn't\n> > been implemented yet.\"\n>\n> Good.\n\n> > Do I understand right that we could just remove this entire section\n\"What areas\n> > need work?\"?\n>\n> Yes, I think so.\n\n> > > I can now see that just removing the [E] label totally is the right\n> > > answer. Yes, there might be an easy item on there, but the fact we\nhave\n> > > three labeled and they are not easy makes me thing [E] is causing more\n> > > problems than it solves.\n> >\n> > Okay, having heard no objections I'll remove it.\n\nThese are all done now.\n\nI'll try to get back to culling the list items at the end of April.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 6 Feb 2023 11:04:57 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Mon, Feb 6, 2023 at 11:04 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n> I'll try to get back to culling the list items at the end of April.\n\nI've made another pass at this. Previously, I went one or two sections at a\ntime, but this time I tried just skimming the whole thing and noting what\njumps out at me. Also, I've separated things into three categories: Remove,\nmove to \"not wanted list\", and revise. Comments and objections welcome, as\nalways.\n\n---------------------------------\n1. Remove\n\nThese are either ill-specified, outdated, possibly done already, or not\nenough interest. If someone proposed them again, we could consider it, so I\npropose to just remove these, but not move them to the Not Wanted list.\nAlso some questions:\n\nImprove UTF8 combined character handling?\n-> If this is referring to normalization, we have it now. If not, what\nneeds improving?\n\nImprove COPY performance\n-> What's the criterion for declaring this done? There are many areas that\nget performance improvements -- why does this need to be called out\nseparately? (There's been some work in the past couple years, so maybe at\nleast we need to find out the current bottlenecks.)\n\nImprove line drawing characters\n-> This links to a proposal with no responses titled \"Add a setting in psql\nthat set the linestyle to unicode only if the client encoding was actually\nUTF8\". 
If we don't drop this, the entry text should at least give an idea\nwhat the proposal is.\n\nConsider improving the continuation prompt\n-> A cosmetic proposal that stalled -- time to retire it?\n\nSMP scalability improvements\n-> Both threads are from 2007\n\nConsider GnuTLS if OpenSSL license becomes a problem\n-> We now have the ability to swap in another implementation during the\nbuild process\n\nAllow creation of universal binaries for Darwin\n-> From 2008: Is this still a thing?\n\nAllow plug-in modules to emulate features from other databases\n-> This sounds like a hook or extension.\n\nRethink our type system\n-> Way out of scope, and short on details.\n\nAdd support for polymorphic arguments and return types to languages other\nthan PL/PgSQL\n-> Seems if someone needed this, they would say so (no thread).\n\nAdd support for OUT and INOUT parameters to languages other than PL/PgSQL\n-> Ditto\n\n--------------------\n2. Propose to move to the \"Not Wanted list\":\n\n(Anything already at the bottom under the heading \"Features We Do Not\nWant\", with the exception of \"threads in a single process\". I'll just\nremove that -- if we ever decide that's worth pursuing, it'll be because we\ndecided we can't really avoid it anymore, and in that case we surely don't\nneed to put it here.)\n\nAdd SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM ANALYZE,\nand CLUSTER\n-> There are external tools that help with this kind of analysis\n\nAllow regex operations in PL/Perl using UTF8 characters in non-UTF8 encoded\ndatabases\n-> Seems pie-in-the-sky as well as a niche problem?\n\nAdd option to print advice for people familiar with other databases\n-> Doesn't seem relevant anymore?\n\nConsider having single-page pruning update the visibility map\n-> Comment from Heikki in the thread:\n\"I think I was worried about the possible performance impact of having to\nclear the bit in visibility map again. 
If you're frequently updating a\ntuple so that HOT and page pruning is helping you, setting the bit in\nvisibility map seems counter-productive; it's going to be cleared soon\nagain by another UPDATE. That's just a hunch, though. Maybe the overhead\nis negligible.\"\n\nConsider mmap()'ing entire files into a backend?\n-> Isn't this a can of worms?\n\nConsider allowing control of upper/lower case folding of unquoted\nidentifiers\n-> Would we ever consider this?\n\n---------------------------------\nOther -- need adjustment or update?\n\nDo async I/O for faster random read-ahead of data\n-> This section needs to be revised, since there is on-going work on AIO.\nThere are a couple other entries that should maybe be put under a different\nheading?\n\n*** The whole section on Windows has lots of old stuff -- which are still\nrelevant?\n\n(ECPG) Fix nested C comments\n-> What needs fixing? It should work fine.\n\nImprove speed of tab completion\n-> Is this still a problem?\n\nTesting pgstat via pg_regress is tricky and inefficient. Consider making a\ndedicated pgstat test-suite.\n-> This has significantly changed recently -- how are things now?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 13 May 2023 11:41:45 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> I've made another pass at this. Previously, I went one or two sections at a\n> time, but this time I tried just skimming the whole thing and noting what\n> jumps out at me. Also, I've separated things into three categories: Remove,\n> move to \"not wanted list\", and revise. Comments and objections welcome, as\n> always.\n\nGenerally agree that the items you've listed are obsolete. Some comments:\n\n> Allow creation of universal binaries for Darwin\n> -> From 2008: Is this still a thing?\n\nThis entry might stem from 4b362c662. It's still something that nobody\nhas bothered to make happen, not even after another architecture\ntransition on Apple's part. And there are things partly outside our\ncontrol in the area, see d69a419e6. I doubt it will ever happen.\n\n> Add support for polymorphic arguments and return types to languages other\n> than PL/PgSQL\n> -> Seems if someone needed this, they would say so (no thread).\n\nI think this is still an issue. Surprised nobody has yet gotten annoyed\nenough to do something about it.\n\n> Add support for OUT and INOUT parameters to languages other than PL/PgSQL\n> -> Ditto\n\nAnd ditto.\n\n> Consider allowing control of upper/lower case folding of unquoted\n> identifiers\n> -> Would we ever consider this?\n\nI think that one's dead as a doornail.\n\n> (ECPG) Fix nested C comments\n> -> What needs fixing? It should work fine.\n\nI might be mistaken, but I think 8ac5e88f9 may have fixed this.\n\n> Improve speed of tab completion\n> -> Is this still a problem?\n\nI keep worrying that tab-complete.c will become so ungainly as to\npresent a human-scale performance problem. But there's been pretty\nmuch zero complaints so far. Let's drop this one until some actual\nissue emerges.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 May 2023 01:31:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Sat, May 13, 2023 at 01:31:21AM -0400, Tom Lane wrote:\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > I've made another pass at this. Previously, I went one or two sections at a\n> > time, but this time I tried just skimming the whole thing and noting what\n> > jumps out at me. Also, I've separated things into three categories: Remove,\n> > move to \"not wanted list\", and revise. Comments and objections welcome, as\n> > always.\n> \n> Generally agree that the items you've listed are obsolete. Some comments:\n\nI agree with this email and John's.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Mon, 15 May 2023 12:22:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Sat, May 13, 2023 at 12:42 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Add support for polymorphic arguments and return types to languages other than PL/PgSQL\n> -> Seems if someone needed this, they would say so (no thread).\n>\n> Add support for OUT and INOUT parameters to languages other than PL/PgSQL\n> -> Ditto\n\nThese actually seem like pretty interesting projects.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 May 2023 13:59:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Sat, 13 May 2023 at 06:42, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n>\n> On Mon, Feb 6, 2023 at 11:04 AM John Naylor <john.naylor@enterprisedb.com> wrote:\n> > I'll try to get back to culling the list items at the end of April.\n>\n> I've made another pass at this. Previously, I went one or two sections at a time, but this time I tried just skimming the whole thing and noting what jumps out at me. Also, I've separated things into three categories: Remove, move to \"not wanted list\", and revise. Comments and objections welcome, as always.\n>\n> [...]\n> --------------------\n> 2. Propose to move to the \"Not Wanted list\":\n>\n> (Anything already at the bottom under the heading \"Features We Do Not Want\", with the exception of \"threads in a single process\". I'll just remove that -- if we ever decide that's worth pursuing, it'll be because we decided we can't really avoid it anymore, and in that case we surely don't need to put it here.)\n>\n> Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM ANALYZE, and CLUSTER\n> -> There are external tools that help with this kind of analysis\n\nAlthough there are external tools which help with the analysis, the\nsilent complaint of this item seems to be \"PostgreSQL doesn't provide\nthe user with actionable maintenance tasks/DDL info\", and I\nwholeheartedly agree with that. 
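(For illustration, the kind of actionable, tool-generated index advice meant here can be sketched with a hypothetical-index check. This is only a sketch assuming the external hypopg extension is installed and an existing table; "orders" and "customer_id" are invented names:)

```sql
-- Sketch only: assumes the hypopg extension and a pre-existing
-- orders(customer_id) table; names are illustrative.
CREATE EXTENSION IF NOT EXISTS hypopg;

-- Register a hypothetical index: visible to the planner,
-- but never built on disk.
SELECT * FROM hypopg_create_index(
    'CREATE INDEX ON orders (customer_id)');

-- Check whether EXPLAIN would choose the hypothetical index.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Discard all hypothetical indexes when done.
SELECT hypopg_reset();
```
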
Finding out which plans are bad, why\nthey're bad, and how to fix them currently has quite a steep learning\ncurve; and while external tools do help, they are not at all commonly\navailable.\n\nThe result I got when searching for \"automatic postgresql index\nsuggestions\" was a combination of hypopg, pg_qualstats and some manual\nglue to get suggested indexes in the current database - but none of\nthese are available in the main distribution.\n\nI'm not asking this to be part of the main PostgreSQL binary, but I\ndon't think that the idea of 'automated suggestions' should be moved\nto the \"not wanted\" list - I'd suggest adding it to a list for Contrib\ninstead, if we're insisting on removing it from the main TODO list.\n\nKind regards,\n\nMatthias van de Meent\n\nPS. note how we already have _some_ suggestions about vacuum and\nreindex in PostgreSQL, but that is only when things are obviously\nwrong, and we don't make what I would call intelligent suggestions -\nin one place we still suggest to shut down the postmaster and then\nvacuum specific databases in single-user mode.\n\n\n",
"msg_date": "Mon, 15 May 2023 20:04:47 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, May 13, 2023 at 12:42 AM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n>> Add support for polymorphic arguments and return types to languages other than PL/PgSQL\n>> -> Seems if someone needed this, they would say so (no thread).\n>> \n>> Add support for OUT and INOUT parameters to languages other than PL/PgSQL\n>> -> Ditto\n\n> These actually seem like pretty interesting projects.\n\nYeah. I'm surprised that nobody has gotten around to scratching\nthis itch yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 May 2023 14:16:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Mon, May 15, 2023 at 2:05 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM ANALYZE, and CLUSTER\n> > -> There are external tools that help with this kind of analysis\n>\n> Althrough there are external tools which help with the analysis, the\n> silent complaint of this item seems to be \"PostgreSQL doesn't provide\n> the user with actionable maintenance tasks/DDL info\", and I\n> wholeheartedly agree with that.\n\nWell, if SET PERFORMANCE_TIPS is adding a new GUC, that's inherently\nsomething that could not happen in a contrib module, only in core. If\nit's dedicated syntax of some kind, same thing. I am not at all\nconvinced that adding something spelled SET PERFORMANCE_TIPS in any\nform is a good idea, but at the very least someone would have to\npropose what the possible values of that option would be and what they\nwould do in a pretty detailed way for us to decide whether we liked it\nor not. It seems to me that it would be quite difficult to get any\nkind of consensus. If you came up with several different kinds of\nperformance tips and made the GUC a comma-separated list of tip types,\nI suspect that you'd get a good bit of opposition: maybe Tom would\ndislike one of the tip types for some reason, me a second, and Andres\na third. If you remove all three of those you've gutted your\nimplementation. Whee.\n\nBut even if we leave the syntax aside, it is very difficult, IMHO, to\ncome up with something in this area that makes sense to put in core.\nThere are so many things you could warn about, and so many possible\nalgorithms that you could use to emit warnings. We do have a few\nthings somewhat like this in core, like the warning that checkpoints\nare happening too close together, or the hint that when you said DROP\nTABLE olders maybe you really meant DROP TABLE orders. 
But in those\ncases, the situation is fairly unambiguous: if your checkpoints are\nhappening too close together, you should probably raise max_wal_size,\nas long as it's not going to run you out of disk space. If you\nspecified a non-existent object name, you should probably correct the\nobject name to something that does exist.\n\nBut things like CREATE INDEX or CLUSTER are a lot trickier. I struggle\nto think of what individual PostgreSQL command would have enough\ncontext to know that those things are a good idea. For example,\nclustering a table on an index makes sense if (1) there are queries\nthat would run faster with the clustering and (2) they are run\nfrequently enough and are expensive enough that the savings would be\nmaterial and (3) the clustering wouldn't degrade so quickly as to be\npointless. But I don't see how it would be possible to discover this\nsituation without instrumenting the whole workload, or at least having\nsome trace of the workload. Even if you have the data, you probably\nneed to do a bunch of number-crunching to come up with good\nrecommendations, and that's expensive, and you probably have to be OK\nwith a significantly higher risk of wrong answers, too, because the\npast may be different from the future, and the planner's estimates of\nwhat the clustering would save might be wrong.\n\nI wouldn't go so far as to say that doing anything of this sort is\nabsolutely and categorically hopeless, but suggesting to an aspiring\nhacker (or even an established one) that they go try to implement SET\nPERFORMANCE_TIPS isn't helpful at all. At least in my opinion, it's\nnot clear what that means, or that we want it, or what we might want\ninstead, or even that we want anything at all. We should aim to have a\nTODO list filled with things that are actionable and likely to be\nworth the effort someone might choose to invest in them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 May 2023 14:51:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On 2023-May-13, John Naylor wrote:\n\n> --------------------\n> 2. Propose to move to the \"Not Wanted list\":\n\n> Consider having single-page pruning update the visibility map\n> -> Comment from Heikki in the thread:\n> \"I think I was worried about the possible performance impact of having to\n> clear the bit in visibility map again. If you're frequently updating a\n> tuple so that HOT and page pruning is helping you, setting the bit in\n> visibility map seems counter-productive; it's going to be cleared soon\n> again by another UPDATE. That's just a hunch, though. Maybe the overhead\n> is negligible.\"\n\nHmm, I agree with removing the entry from the TODO list, but why is this\nsomething we Do Not Want? If somebody shows up and does some analysis\nshowing that in a certain workload it is beneficial to do this, then I don't\nthink we should turn them down.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 16 May 2023 10:50:45 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, May 16, 2023 at 4:50 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Hmm, I agree with removing the entry from the TODO list, but why is this\n> something we Do Not Want? If somebody shows up and do some analysis\n> that in a certain workload it is beneficial to do this, then I don't\n> think we should turn them down.\n\nYeah, I think Do Not Want should only be to discourage people from\nsubmitting patches for things we know were not going to do, and that\nunless we think that's going to be a problem, we should just remove\nitems completely.\n\nThanks, John, for working on this, BTW.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 May 2023 08:16:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Mon, 15 May 2023 at 20:51, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, May 15, 2023 at 2:05 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > > Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM ANALYZE, and CLUSTER\n> > > -> There are external tools that help with this kind of analysis\n> >\n> > Althrough there are external tools which help with the analysis, the\n> > silent complaint of this item seems to be \"PostgreSQL doesn't provide\n> > the user with actionable maintenance tasks/DDL info\", and I\n> > wholeheartedly agree with that.\n>\n> Well, if SET PERFORMANCE_TIPS is adding a new GUC, that's inherently\n> something that could not happen in a contrib module, only in core. If\n\nWhy wouldn't it? pgss has (amongst others) pgss.save, other extensions\nhave their own GUCs, why can't this be one of them? Sure, it'd need to\nbe namespaced to the extension, but that's never held anyone back from\nadding their own GUCs.\n\n> it's dedicated syntax of some kind, same thing. I am not at all\n> convinced that adding something spelled SET PERFORMANCE_TIPS in any\n> form is a good idea,\n\nWell, that part I agree with; the design of this\nmaintenance/performance helper is very different from how I would\nimagine such feature to exist in or alongside PostgreSQL. I did not\nmake that very clear in my initial mail, sorry for that, but I\nwouldn't suggest that \"SET PERFORMANCE_TIPS\"\n\n> but at the very least someone would have to\n> propose what the possible values of that option would be and what they\n> would do in a pretty detailed way for us to decide whether we liked it\n> or not. It seems to me that it would be quite difficult to get any\n> kind of consensus. 
If you came up with several different kinds of\n> performance tips and made the GUC a comma-separated list of tip types,\n> I suspect that you'd get a good bit of opposition: maybe Tom would\n> dislike one of the tip types for some reason, me a second, and Andres\n> a third. If you remove all three of those you've gutted your\n> implementation. Whee.\n\nYes, I know that bikeshedding and strong personal opinions are common\non this list - I know I am guilty of that as well. Yet the expectation\nthat people will tend to bikeshed shouldn't be a reason to move a\nfeature to the \"Not Wanted\" list. If that were the qualifier, then we\nwouldn't add any new major features, because there's always going to\nbe some bikeshedding for major new features.\n\n> But even if we leave the syntax aside, it is very difficult, IMHO, to\n> come up with something in this area that makes sense to put in core.\n> There are so many things you could warn about, and so many possible\n> algorithms that you could use to emit warnings. We do have a few\n> things somewhat like this in core, like the warning that checkpoints\n> are happening too close together, or the hint that when you said DROP\n> TABLE olders maybe you really meant DROP TABLE orders. But in those\n> cases, the situation is fairly unambiguous: if your checkpoints are\n> happening too close together, you should probably raise max_wal_size,\n> as long as it's not going to run you out of disk space. If you\n> specified a non-existent object name, you should probably correct the\n> object name to something that does exist.\n>\n> But things like CREATE INDEX or CLUSTER are a lot trickier. I struggle\n> to think of what individual PostgreSQL command would have enough\n> context to know that those things are a good idea. 
For example,\n> clustering a table on an index makes sense if (1) there are queries\n> that would run faster with the clustering and (2) they are run\n> frequently enough and are expensive enough that the savings would be\n> material and (3) the clustering wouldn't degrade so quickly as to be\n> pointless. But I don't see how it would be possible to discover this\n> situation without instrumenting the whole workload, or at least having\n> some trace of the workload. Even if you have the data, you probably\n> need to do a bunch of number-crunching to come up with good\n> recommendations, and that's expensive, and you probably have to be OK\n> with a significantly higher risk of wrong answers, too, because the\n> past may be different from the future, and the planner's estimates of\n> what the clustering would save might be wrong.\n\nAgreed on most parts, including the \"one statement is unlikely to have\nenough context\". But, not all of those issues need to be tackled to\nhave actionable suggestions. Just a \"look at <X = shared_buffers>\nbecause it is likely you left performance on the table\" would be a\ngood start. Other examples of such suggestions would be detecting a\nsmall fraction of hot updates vs missed-hot updates which could\nproduce a hint to decrease the fillfactor of a table; a high number of\ntable scans vs tuples returned could produce an indication about\nlikely missing indexes; etc. These hints wouldn't necessarily have to\nbe produced with per-statement hints like the TODO suggests.\n\n> I wouldn't go so far as to say that doing anything of this sort is\n> absolutely and categorically hopeless, but suggesting to an aspiring\n> hacker (or even an established one) that they go try to implement SET\n> PERFORMANCE_TIPS isn't helpful at all. At least in my opinion, it's\n> not clear what that means, or that we want it, or what we might want\n> instead, or even that we want anything at all. 
We should aim to have a\nTODO list filled with things that are actionable and likely to be\nworth the effort someone might choose to invest in them.\n\nAgreed; and that's why I'm not against removing the specific wording\nof the item. This may not have been clearly described in my previous\nmail, but I would instead like to see a TODO list item which covers\nthe need to improve the number of cases where we provide actionable\nadvice, and specifically those cases where there is not One Obvious\nIssue (OOI; like when getting close to wraparound, or close\ncheckpoints, or ...).\nFor example, we could have an extension, similar to pageinspect, that\nscans the catalogs and detects actionable problems like tables with\n50%+ unused space, or tables that are clustered on an index but have\nlost all semblance of ordering (according to statistics), or ...\n\nI guess my main point is that we gather a lot of statistics, but do\nnot provide tooling which provides actionable reports. And I think\nthat is the core of such a todo item - we could use a feature like\nthat either in core, or more likely in contrib as an extension (like\npgss and pageinspect).\n\n\nKind regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Tue, 16 May 2023 14:18:07 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, May 16, 2023 at 8:18 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Agreed; and that's why I'm not against removing the specific wording\n> of the item. This may not have been clearly described in my previous\n> mail, but I would instead like to see a TODO list item which covers\n> the need to improve the number of cases where we provide actionable\n> advice, and specifically those cases where there is not One Obvious\n> Issue (OOI;s like when getting close to wraparound; or close\n> checkpoints, or ...).\n\nMy vote is for just removing the item, rather than putting it on the\nnot wanted list. I don't think it's useful to put things as general as\nwhat you say here on the list. But putting this item in the not wanted\nsection might imply that it's not an area we're looking to improve,\nwhich as you say, is false.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 May 2023 08:27:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, 16 May 2023 at 14:27, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 16, 2023 at 8:18 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Agreed; and that's why I'm not against removing the specific wording\n> > of the item. This may not have been clearly described in my previous\n> > mail, but I would instead like to see a TODO list item which covers\n> > the need to improve the number of cases where we provide actionable\n> > advice, and specifically those cases where there is not One Obvious\n> > Issue (OOI;s like when getting close to wraparound; or close\n> > checkpoints, or ...).\n>\n> My vote is for just removing the item, rather than putting it on the\n> not wanted list. I don't think it's useful to put things as general as\n> what you say here on the list. But putting this item in the not wanted\n> section might imply that it's not an area we're looking to improve,\n> which as you say, is false.\n\nThat makes sense. Agreed.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 16 May 2023 15:52:02 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, 16 May 2023, 02:05 Matthias van de Meent,\n\n>\n> The result I got when searching for \"automatic postgresql index\n> suggestions\" was a combination of hypopg, pg_qualstats and some manual\n> glue to get suggested indexes in the current database - but none of\n> these are available in the main distribution.\n>\n\nFTR pg_qualstats has had an integrated \"automatic index suggestion\" feature\nfor many years, so no glue needed.",
"msg_date": "Wed, 17 May 2023 10:50:29 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cutting down the TODO list thread"
},
{
"msg_contents": "On Tue, May 16, 2023 at 1:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n\n> >> Add support for polymorphic arguments and return types to languages\nother than PL/PgSQL\n> >> Add support for OUT and INOUT parameters to languages other than\nPL/PgSQL\n\n> > These actually seem like pretty interesting projects.\n>\n> Yeah. I'm surprised that nobody has gotten around to scratching\n> this itch yet.\n\nOkay, keeping these.\n\nOn Tue, May 16, 2023 at 3:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n>\n> On 2023-May-13, John Naylor wrote:\n\n> > Consider having single-page pruning update the visibility map\n\n> Hmm, I agree with removing the entry from the TODO list, but why is this\n> something we Do Not Want? If somebody shows up and do some analysis\n> that in a certain workload it is beneficial to do this, then I don't\n> think we should turn them down.\n\nOkay, removing but not adding to Do Not Want.\n\nOn Tue, May 16, 2023 at 8:52 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n>\n> On Tue, 16 May 2023 at 14:27, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, May 16, 2023 at 8:18 AM Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > > Agreed; and that's why I'm not against removing the specific wording\n> > > of the item. This may not have been clearly described in my previous\n> > > mail, but I would instead like to see a TODO list item which covers\n> > > the need to improve the number of cases where we provide actionable\n> > > advice, and specifically those cases where there is not One Obvious\n> > > Issue (OOI;s like when getting close to wraparound; or close\n> > > checkpoints, or ...).\n> >\n> > My vote is for just removing the item, rather than putting it on the\n> > not wanted list. I don't think it's useful to put things as general as\n> > what you say here on the list. 
But putting this item in the not wanted\n> section might imply that it's not an area we're looking to improve,\n> which as you say, is false.\n>\n> That makes sense. Agreed.\n\n(This was for SET PERFORMANCE_TIPS) -- removing but not adding to Do Not\nWant.\n\nI've removed all else proposed to simply remove.\n\nAlso removing \"ECPG - Fix nested C comments\" as done.\n\nAs for this:\n\nOn Sat, May 13, 2023 at 12:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > Improve speed of tab completion\n> > -> Is this still a problem?\n>\n> I keep worrying that tab-complete.c will become so ungainly as to\n> present a human-scale performance problem. But there's been pretty\n> much zero complaints so far. Let's drop this one until some actual\n> issue emerges.\n\nLooking in the thread, the issue has to do with catalog queries, and in\nfact I must have fat-fingered copying the entry -- it should be \"Improve\nspeed of tab completion by using LIKE\":\n\nhttp://www.postgresql.org/message-id/20120821174847.GL1267@tamriel.snowman.net\n\nI've left it alone for now just in case.\n\n(I have yet to think about concrete revisions that seem needed, but I'll do\nthat separately.)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 17 May 2023 09:54:55 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: cutting down the TODO list thread"
}
] |
[
{
"msg_contents": "Hi,\n\nThe patch in [1] makes the horizon logic in procarray.c aware of temp\ntables not needing to care about other session's snapshots (also\ndiscussed in[2]). Extending a7212be8b9e, which did that for VACUUM, but\nnot HOT pruning etc.\n\nWhile polishing that patch I was wondering whether there are other\nclasses of relations that we might want to treat differently. And\nthere's one more that we don't special case right now: unlogged tables.\n\nAs unlogged tables aren't replicated via physical rep, we don't need to\napply vacuum_defer_cleanup_age, hot standby feedback and slot based\nhorizons.\n\nThe obvious question is, is that worth doing? My intuition is that yes,\nit probably is: Unlogged tables are often used for hotly updated\ntransient state, allowing that to be cleaned up more aggressively will\nreduce bloat.\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20201015083735.derdzysdtqdvxshp%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/20201014203103.72oke6hqywcyhx7s%40alap3.anarazel.de\n\n\n",
"msg_date": "Tue, 27 Oct 2020 12:58:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "More aggressive vacuuming of unlogged relations?"
}
] |
[
{
"msg_contents": "Forking this thread, since the existing CFs have been closed.\nhttps://www.postgresql.org/message-id/flat/20200914143102.GX18552%40telsasoft.com#58b1056488451f8594b0f0ba40996afd\n\nOn Tue, Oct 06, 2020 at 01:38:23PM +0900, Michael Paquier wrote:\n> On Mon, Oct 05, 2020 at 10:07:33PM -0500, Justin Pryzby wrote:\n> > Honestly, I think you're over-thinking and over-engineering indisclustered.\n> > \n> > If \"clusteredness\" was something we offered to maintain across DML, I think\n> > that might be important to provide stronger guarantees. As it is now, I don't\n> > think this patch is worth changing the catalog definition.\n> \n> Well, this use case is new because we are discussing the relationship\n> of indisclustered across multiple transactions for multiple indexes,\n> so I'd rather have this discussion than not, and I have learnt\n> the hard way with REINDEX that we should care a lot about the\n> consistency of partition trees at any step of the operation.\n\nindisclustered is only used as a default for \"CLUSTER\" (without USING). The\nworst thing that can happen if it's \"inconsistent\" is that \"CLUSTER;\" clusters\na table on the \"old\" clustered index (that it was already clustered on), which\nis what would've happened before running some command which was interrupted.\n\n> Let's\n> imagine a simple example here, take this partition tree: p (parent),\n> and two partitions p1 and p2. p has two partitioned indexes i and j,\n> indexes also present in p1 and p2 as i1, i2, j1 and j2. Let's assume\n> that the user has done a CLUSTER on p USING i that completes, meaning\n> that i, i1 and i2 have indisclustered set. 
Now let's assume that the\n> user does a CLUSTER on p USING j this time, and that this command\n> fails while processing p2, meaning that indisclustered is set for j1,\n> i2, and perhaps i or j depending on what the patch does.\n\nI think the state of \"indisclustered\" at that point is not critical.\nThe command failed, and the user can re-run it, or ALTER..SET CLUSTER.\nActually, I think the only inconsistent state is if two indexes are both marked\nindisclustered.\n\nI'm attaching a counter-proposal to your catalog change, which preserves\nindisclustered on children of clustered, partitioned indexes, and invalidates\nindisclustered when attaching unclustered indexes.\n\nAlso, I noticed that CREATE TABLE (LIKE.. INCLUDING INDEXES) doesn't preserve\nindisclustered, but I can't say that's an issue.\n\n-- \nJustin",
"msg_date": "Tue, 27 Oct 2020 19:33:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "CLUSTER on partitioned index"
},
{
"msg_contents": "@cfbot: rebased\n\nOn Tue, Oct 27, 2020 at 07:33:12PM -0500, Justin Pryzby wrote:\n> I'm attaching a counter-proposal to your catalog change, which preserves\n> indisclustered on children of clustered, partitioned indexes, and invalidates\n> indisclustered when attaching unclustered indexes.\n\n..and now propagates CLUSTER ON to child indexes.\n\nI left this as separate patches to show what I mean and what's new while we\ndiscuss it.\n\n-- \nJustin",
"msg_date": "Wed, 4 Nov 2020 20:23:56 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Wed, Nov 04, 2020 at 08:23:56PM -0600, Justin Pryzby wrote:\n> On Tue, Oct 27, 2020 at 07:33:12PM -0500, Justin Pryzby wrote:\n> > I'm attaching a counter-proposal to your catalog change, which preserves\n> > indisclustered on children of clustered, partitioned indexes, and invalidates\n> > indisclustered when attaching unclustered indexes.\n> \n> ..and now propagates CLUSTER ON to child indexes.\n> \n> I left this as separate patches to show what I mean and what's new while we\n> discuss it.\n\nThis fixes some omissions in the previous patch and error in its test cases.\n\nCLUSTER ON recurses to children, since I think a clustered parent index means\nthat all its child indexes are clustered. \"SET WITHOUT CLUSTER\" doesn't have\nto recurse to children, but I did it like that for consistency and it avoids\nthe need to special case InvalidOid.\n\n-- \nJustin",
"msg_date": "Sun, 15 Nov 2020 19:53:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Sun, Nov 15, 2020 at 07:53:35PM -0600, Justin Pryzby wrote:\n> On Wed, Nov 04, 2020 at 08:23:56PM -0600, Justin Pryzby wrote:\n> > On Tue, Oct 27, 2020 at 07:33:12PM -0500, Justin Pryzby wrote:\n> > > I'm attaching a counter-proposal to your catalog change, which preserves\n> > > indisclustered on children of clustered, partitioned indexes, and invalidates\n> > > indisclustered when attaching unclustered indexes.\n> > \n> > ..and now propagates CLUSTER ON to child indexes.\n> > \n> > I left this as separate patches to show what I mean and what's new while we\n> > discuss it.\n> \n> This fixes some omissions in the previous patch and error in its test cases.\n> \n> CLUSTER ON recurses to children, since I think a clustered parent index means\n> that all its child indexes are clustered. \"SET WITHOUT CLUSTER\" doesn't have\n> to recurse to children, but I did it like that for consistency and it avoids\n> the need to special case InvalidOid.\n\nThe previous patch failed pg_upgrade when restoring a clustered, parent index,\nsince it's marked INVALID until indexes have been built on all child tables, so\nCLUSTER ON was rejected on invalid index.\n\nSo I think CLUSTER ON needs to be a separate pg_dump object, to allow attaching\nthe child index (thereby making the parent \"valid\") to happen before SET\nCLUSTER on the parent index.\n\n-- \nJustin",
"msg_date": "Sat, 28 Nov 2020 20:03:02 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Sat, Nov 28, 2020 at 08:03:02PM -0600, Justin Pryzby wrote:\n> On Sun, Nov 15, 2020 at 07:53:35PM -0600, Justin Pryzby wrote:\n> > On Wed, Nov 04, 2020 at 08:23:56PM -0600, Justin Pryzby wrote:\n> > > On Tue, Oct 27, 2020 at 07:33:12PM -0500, Justin Pryzby wrote:\n> > > > I'm attaching a counter-proposal to your catalog change, which preserves\n> > > > indisclustered on children of clustered, partitioned indexes, and invalidates\n> > > > indisclustered when attaching unclustered indexes.\n> > > \n> > > ..and now propagates CLUSTER ON to child indexes.\n> > > \n> > > I left this as separate patches to show what I mean and what's new while we\n> > > discuss it.\n> > \n> > This fixes some omissions in the previous patch and error in its test cases.\n> > \n> > CLUSTER ON recurses to children, since I think a clustered parent index means\n> > that all its child indexes are clustered. \"SET WITHOUT CLUSTER\" doesn't have\n> > to recurse to children, but I did it like that for consistency and it avoids\n> > the need to special case InvalidOid.\n> \n> The previous patch failed pg_upgrade when restoring a clustered, parent index,\n> since it's marked INVALID until indexes have been built on all child tables, so\n> CLUSTER ON was rejected on invalid index.\n> \n> So I think CLUSTER ON needs to be a separate pg_dump object, to allow attaching\n> the child index (thereby making the parent \"valid\") to happen before SET\n> CLUSTER on the parent index.\n\nRebased on b5913f612 and now a3dc92600.\n\nThis patch is intertwined with the tablespace patch: not only will it get\nrebase conflict, but will also need to test the functionality of\nCLUSTER (TABLESPACE a) partitioned_table;\n\n-- \nJustin",
"msg_date": "Mon, 18 Jan 2021 12:34:59 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 12:34:59PM -0600, Justin Pryzby wrote:\n> On Sat, Nov 28, 2020 at 08:03:02PM -0600, Justin Pryzby wrote:\n> > On Sun, Nov 15, 2020 at 07:53:35PM -0600, Justin Pryzby wrote:\n> > > On Wed, Nov 04, 2020 at 08:23:56PM -0600, Justin Pryzby wrote:\n> > > > On Tue, Oct 27, 2020 at 07:33:12PM -0500, Justin Pryzby wrote:\n> > > > > I'm attaching a counter-proposal to your catalog change, which preserves\n> > > > > indisclustered on children of clustered, partitioned indexes, and invalidates\n> > > > > indisclustered when attaching unclustered indexes.\n> > > > \n> > > > ..and now propagates CLUSTER ON to child indexes.\n> > > > \n> > > > I left this as separate patches to show what I mean and what's new while we\n> > > > discuss it.\n> > > \n> > > This fixes some omissions in the previous patch and error in its test cases.\n> > > \n> > > CLUSTER ON recurses to children, since I think a clustered parent index means\n> > > that all its child indexes are clustered. \"SET WITHOUT CLUSTER\" doesn't have\n> > > to recurse to children, but I did it like that for consistency and it avoids\n> > > the need to special case InvalidOid.\n> > \n> > The previous patch failed pg_upgrade when restoring a clustered, parent index,\n> > since it's marked INVALID until indexes have been built on all child tables, so\n> > CLUSTER ON was rejected on invalid index.\n> > \n> > So I think CLUSTER ON needs to be a separate pg_dump object, to allow attaching\n> > the child index (thereby making the parent \"valid\") to happen before SET\n> > CLUSTER on the parent index.\n> \n> Rebased on b5913f612 and now a3dc92600.\n\nThis resolves ORDER BY test failure with COLLATE \"C\".\n\n-- \nJustin",
"msg_date": "Sat, 6 Feb 2021 08:45:49 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "Hi,\nFor v7-0002-Implement-CLUSTER-of-partitioned-table.patch:\n\n+ * We have to build the list in a different memory context so it\nwill\n+ * survive the cross-transaction processing\n+ */\n+ old_context = MemoryContextSwitchTo(cluster_context);\n\ncluster_context is not modified within the loop. Can the memory context\nswitching code be moved outside the loop ?\n\nCheers\n\nOn Sat, Feb 6, 2021 at 6:46 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Jan 18, 2021 at 12:34:59PM -0600, Justin Pryzby wrote:\n> > On Sat, Nov 28, 2020 at 08:03:02PM -0600, Justin Pryzby wrote:\n> > > On Sun, Nov 15, 2020 at 07:53:35PM -0600, Justin Pryzby wrote:\n> > > > On Wed, Nov 04, 2020 at 08:23:56PM -0600, Justin Pryzby wrote:\n> > > > > On Tue, Oct 27, 2020 at 07:33:12PM -0500, Justin Pryzby wrote:\n> > > > > > I'm attaching a counter-proposal to your catalog change, which\n> preserves\n> > > > > > indisclustered on children of clustered, partitioned indexes,\n> and invalidates\n> > > > > > indisclustered when attaching unclustered indexes.\n> > > > >\n> > > > > ..and now propagates CLUSTER ON to child indexes.\n> > > > >\n> > > > > I left this as separate patches to show what I mean and what's new\n> while we\n> > > > > discuss it.\n> > > >\n> > > > This fixes some omissions in the previous patch and error in its\n> test cases.\n> > > >\n> > > > CLUSTER ON recurses to children, since I think a clustered parent\n> index means\n> > > > that all its child indexes are clustered. 
\"SET WITHOUT CLUSTER\"\n> doesn't have\n> > > > to recurse to children, but I did it like that for consistency and\n> it avoids\n> > > > the need to special case InvalidOid.\n> > >\n> > > The previous patch failed pg_upgrade when restoring a clustered,\n> parent index,\n> > > since it's marked INVALID until indexes have been built on all child\n> tables, so\n> > > CLUSTER ON was rejected on invalid index.\n> > >\n> > > So I think CLUSTER ON needs to be a separate pg_dump object, to allow\n> attaching\n> > > the child index (thereby making the parent \"valid\") to happen before\n> SET\n> > > CLUSTER on the parent index.\n> >\n> > Rebased on b5913f612 and now a3dc92600.\n>\n> This resolves ORDER BY test failure with COLLATE \"C\".\n>\n> --\n> Justin\n>",
"msg_date": "Sat, 6 Feb 2021 09:21:08 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Sat, Feb 06, 2021 at 08:45:49AM -0600, Justin Pryzby wrote:\n> On Mon, Jan 18, 2021 at 12:34:59PM -0600, Justin Pryzby wrote:\n> > On Sat, Nov 28, 2020 at 08:03:02PM -0600, Justin Pryzby wrote:\n> > > On Sun, Nov 15, 2020 at 07:53:35PM -0600, Justin Pryzby wrote:\n> > > > On Wed, Nov 04, 2020 at 08:23:56PM -0600, Justin Pryzby wrote:\n> > > > > On Tue, Oct 27, 2020 at 07:33:12PM -0500, Justin Pryzby wrote:\n> > > > > > I'm attaching a counter-proposal to your catalog change, which preserves\n> > > > > > indisclustered on children of clustered, partitioned indexes, and invalidates\n> > > > > > indisclustered when attaching unclustered indexes.\n> > > > > \n> > > > > ..and now propagates CLUSTER ON to child indexes.\n> > > > > \n> > > > > I left this as separate patches to show what I mean and what's new while we\n> > > > > discuss it.\n> > > > \n> > > > This fixes some omissions in the previous patch and error in its test cases.\n> > > > \n> > > > CLUSTER ON recurses to children, since I think a clustered parent index means\n> > > > that all its child indexes are clustered. 
\"SET WITHOUT CLUSTER\" doesn't have\n> > > > to recurse to children, but I did it like that for consistency and it avoids\n> > > > the need to special case InvalidOid.\n> > > \n> > > The previous patch failed pg_upgrade when restoring a clustered, parent index,\n> > > since it's marked INVALID until indexes have been built on all child tables, so\n> > > CLUSTER ON was rejected on invalid index.\n> > > \n> > > So I think CLUSTER ON needs to be a separate pg_dump object, to allow attaching\n> > > the child index (thereby making the parent \"valid\") to happen before SET\n> > > CLUSTER on the parent index.\n> > \n> > Rebased on b5913f612 and now a3dc92600.\n> \n> This resolves ORDER BY test failure with COLLATE \"C\".\n\nIt occured to me that progress reporting should expose this.\n\nI did this in the style of pg_stat_progress_create_index, adding columns\npartitions_total and partitions_done showing the overall progress. The progress\nof individual partitions is also visible in {blocks,tuples}_{done,total}.\nThis seems odd, but that's how the index view behaves.\n\n-- \nJustin",
"msg_date": "Wed, 10 Feb 2021 14:04:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Wed, Feb 10, 2021 at 02:04:58PM -0600, Justin Pryzby wrote:\n> On Sat, Feb 06, 2021 at 08:45:49AM -0600, Justin Pryzby wrote:\n> > On Mon, Jan 18, 2021 at 12:34:59PM -0600, Justin Pryzby wrote:\n> > > On Sat, Nov 28, 2020 at 08:03:02PM -0600, Justin Pryzby wrote:\n> > > > On Sun, Nov 15, 2020 at 07:53:35PM -0600, Justin Pryzby wrote:\n> > > > > On Wed, Nov 04, 2020 at 08:23:56PM -0600, Justin Pryzby wrote:\n> > > > > > On Tue, Oct 27, 2020 at 07:33:12PM -0500, Justin Pryzby wrote:\n> > > > > > > I'm attaching a counter-proposal to your catalog change, which preserves\n> > > > > > > indisclustered on children of clustered, partitioned indexes, and invalidates\n> > > > > > > indisclustered when attaching unclustered indexes.\n> > > > > > \n> > > > > > ..and now propagates CLUSTER ON to child indexes.\n> > > > > > \n> > > > > > I left this as separate patches to show what I mean and what's new while we\n> > > > > > discuss it.\n> > > > > \n> > > > > This fixes some omissions in the previous patch and error in its test cases.\n> > > > > \n> > > > > CLUSTER ON recurses to children, since I think a clustered parent index means\n> > > > > that all its child indexes are clustered. 
\"SET WITHOUT CLUSTER\" doesn't have\n> > > > > to recurse to children, but I did it like that for consistency and it avoids\n> > > > > the need to special case InvalidOid.\n> > > > \n> > > > The previous patch failed pg_upgrade when restoring a clustered, parent index,\n> > > > since it's marked INVALID until indexes have been built on all child tables, so\n> > > > CLUSTER ON was rejected on invalid index.\n> > > > \n> > > > So I think CLUSTER ON needs to be a separate pg_dump object, to allow attaching\n> > > > the child index (thereby making the parent \"valid\") to happen before SET\n> > > > CLUSTER on the parent index.\n> > > \n> > > Rebased on b5913f612 and now a3dc92600.\n> > \n> > This resolves ORDER BY test failure with COLLATE \"C\".\n> \n> It occured to me that progress reporting should expose this.\n> \n> I did this in the style of pg_stat_progress_create_index, adding columns\n> partitions_total and partitions_done showing the overall progress. The progress\n> of individual partitions is also visible in {blocks,tuples}_{done,total}.\n> This seems odd, but that's how the index view behaves.\n\nRebased on 8a8f4d8ede288c2a29105f4708e22ce7f3526149.\n\nThis also resolves an issue in the last patch which would've broken progress\nreporting of vacuum full.\n\nAnd take the suggestion to move memory context switching outside the loop.\n\n-- \nJustin",
"msg_date": "Sat, 6 Mar 2021 22:13:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "@cfbot: rebased",
"msg_date": "Fri, 2 Apr 2021 15:03:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "Hi,\nFor v10-0002-Implement-CLUSTER-of-partitioned-table.patch :\n\nor that an partitioned index was previously set clustered.\n\n'an partitioned index' -> a partitioned index\n\n+ * Return a List of tables and associated index, where each index is a\n\nassociated index -> associated indices\n\nFor cluster():\n- rel = table_open(tableOid, NoLock);\n+ rel = table_open(tableOid, ShareUpdateExclusiveLock);\n\nConsidering the comment preceding cluster() (forced to acquire exclusive\nlocks on all the tables), maybe add a comment explaining why it is safe to\ntake ShareUpdateExclusiveLock.\n\n+cluster_multiple_rels(List *rvs, int options)\n\nI think the multiple in the method name is not needed since the relation is\nin plural.\n\nCheers\n\nOn Fri, Apr 2, 2021 at 1:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> @cfbot: rebased\n>",
"msg_date": "Sat, 3 Apr 2021 09:14:33 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "I have to wonder if there really *is* a use case for CLUSTER in the\nfirst place on regular tables, let alone on partitioned tables, which\nare likely to be large and thus take a lot of time. What justifies\nspending so much time on this implementation? My impression is that\nCLUSTER is pretty much a fringe command nowadays, because of the access\nexclusive lock required.\n\nDoes anybody actually use it?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n\n\n",
"msg_date": "Tue, 20 Jul 2021 20:27:02 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 08:27:02PM -0400, Alvaro Herrera wrote:\n> I have to wonder if there really *is* a use case for CLUSTER in the\n> first place on regular tables, let alone on partitioned tables, which\n> are likely to be large and thus take a lot of time. What justifies\n> spending so much time on this implementation? My impression is that\n> CLUSTER is pretty much a fringe command nowadays, because of the access\n> exclusive lock required.\n> \n> Does anybody actually use it?\n\nYeah, I am not getting really excited about doing anything here\neither. I thought for some time about the interactions with\nindisclustered and partitioned tables, but anything I could come up\nwith felt clunky.\n--\nMichael",
"msg_date": "Wed, 21 Jul 2021 09:34:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Tue, 2021-07-20 at 20:27 -0400, Alvaro Herrera wrote:\n> I have to wonder if there really *is* a use case for CLUSTER in the\n> first place on regular tables, let alone on partitioned tables, which\n> are likely to be large and thus take a lot of time. What justifies\n> spending so much time on this implementation? My impression is that\n> CLUSTER is pretty much a fringe command nowadays, because of the access\n> exclusive lock required.\n> \n> Does anybody actually use it?\n\nI see it used in the field occasionally, as it can really drastically\nimprove performance. But I admit it is not frequently used.\n\nIn a data warehouse, which is updated only occasionally, running\nCLUSTER after an update can make a lot of sense.\n\nI personally think that it is enough to be able to cluster the table\npartition by partition.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 21 Jul 2021 13:01:11 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 08:27:02PM -0400, Alvaro Herrera wrote:\n> I have to wonder if there really *is* a use case for CLUSTER in the\n> first place on regular tables, let alone on partitioned tables, which\n> are likely to be large and thus take a lot of time.\n\nThe cluster now is done one partition at a time, so it might take a long time,\nbut doesn't lock the whole partition hierarchy. Same as VACUUM (since v10) and\n(since v14) REINDEX.\n\nThe patch series would be simpler if partitioned indexes weren't allowed to be\nmarked CLUSTERED ON. Then, \"USING <index>\" would be required, which is a step\nforward from not supporting cluster on partitioned index at all. As attached.\nIt's arguably true that the follow-up patches supporting indisclustered on\npartitioned indexes aren't worth the trouble.\n\nFor sure CLUSTER is useful, see eg.\nhttps://github.com/bucardo/check_postgres/issues/29\n\nIt's sometimes important that the table is clustered to allow index scan to\nwork well (or be chosen at all).\n\nIf a table is scanned by an index, and isn't well-clustered, then a larger\nfraction (multiple) of the table will be read than what's optimal. That\nrequires more IO, and more cache space.\n\nA year ago, I partitioned one of our previously-unpartitioned tables, and ended\nup clustering the partitions on their partition key (and indexed column) using\n\\gexec. This was preferable to doing INSERT .. SELECT .. ORDER BY, which\nwould've made the initial process slower - maybe unjustifiably slower for some\ncustomers. 
Cluster (using \\gexec) was something I was able to do afterward,\nfor completeness, since I expect the partitions to be mostly-clustered\nautomatically, so it was bothering me that the existing data was unordered, and\nthat it might behave differently in the future.\n\n> What justifies spending so much time on this implementation?\n\nActually, I don't use partitioned indexes at all here, so this is not for us.\nI worked on this after Adger asked about CIC on partitioned tables (for which I\nhave a patch in the queue). Isn't it worth supporting that (or should we\ninclude an example about how to use format() with %I and \\gexec)?\n\nVACUUM [FULL] has recursed into partitions since v10 (f0e44751d).\nREINDEX supports partitioned tables in v14 (a6642b3ae). \nPartitioned indexes exist since v11 (as you well know), so it's somewhat odd\nthat CLUSTER isn't supported, and seems increasingly weird as a decreasing\nnumber of DDL commands remain unsupported. Supporting DDL on partitioned tables\nsupports the idea that the physical partitions can be seen as an implementation\ndetail by the DBA, which I understand was the intent since v10.\n\nYou're right that I wouldn't plan to *routinely* re-cluster a partitioned\ntable. Rather, I'd cluster only its \"recent\" *partitions*, and leave the old\nones alone. Or cluster the partitions, a single time, once they're no longer\nrecent. I don't think the feature is marginal just because I don't use it\nroutinely.\n\n> My impression is that CLUSTER is pretty much a fringe command nowadays,\n> because of the access exclusive lock required.\n\nA step forward would be to integrate something like pg_repack/reorg/squeeze.\nI used pg_repack --index until v12 got REINDEX CONCURRENTLY. The goal there\nwas to improve index scans on some large, append-only partitions where the\nplanner gave an index scan, but performance was poor (now, we use BRIN so it\nworks well without reindex). 
I tested that this would still be an issue by\ncreating a non-brin index for a single day's table (even with v13 deduplication\nand v12 TID tiebreak).\n\nAs I see it, support for partitioned cluster is orthogonal to an\nonline/concurrent cluster, which is a job for another patch.\n\n-- \nJustin",
"msg_date": "Sun, 12 Sep 2021 15:10:29 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Sun, 12 Sept 2021 at 22:10, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Jul 20, 2021 at 08:27:02PM -0400, Alvaro Herrera wrote:\n> > I have to wonder if there really *is* a use case for CLUSTER in the\n> > first place on regular tables, let alone on partitioned tables, which\n> > are likely to be large and thus take a lot of time.\n>\n> The cluster now is done one partition at a time, so it might take a long time,\n> but doesn't lock the whole partition heirarchy. Same as VACUUM (since v10) and\n> (since v14) REINDEX.\n\nNote: The following review is based on the assumption that this v11\nrevision was meant to contain only one patch. I put this up as a note,\nbecause it seemed quite limited when compared to earlier versions of\nthe patchset.\n\nI noticed that you store the result of find_all_inheritors(...,\nNoLock) in get_tables_to_cluster_partitioned, without taking care of\npotential concurrent partition hierarchy changes that the comment on\nfind_all_inheritors warns against or documenting why it is safe, which\nsounds dangerous in case someone wants to adapt the code. One problem\nI can think of is that only storing reloid and indoid is not\nnecessarily safe, as they might be reused by drop+create table running\nparallel to the CLUSTER command.\n\nApart from that, I think it would be useful (though not strictly\nnecessary for this patch) if you could adapt the current CLUSTER\nprogress reporting to report the progress for the whole partition\nhierarchy, instead of a new progress report for each relation, as was\nthe case in earlier versions of the patchset.\n\nThe v11 patch seems quite incomplete when compared to previous\ndiscussions, or at the very least is very limited (no ALTER TABLE\nclustering commands for partitioned tables, but `CLUSTER ptable USING\npindex` is supported). 
If v11 is the new proposed direction for ptable\nclustering, could you also document these limitations in the\ncluster.sgml and alter_table.sgml docs?\n\n> [ v11-0001-Implement-CLUSTER-of-partitioned-table.patch ]\n\n> diff --git a/src/test/regress/expected/cluster.out b/src/test/regress/expected/cluster.out\n> ...\n> +ALTER TABLE clstrpart SET WITHOUT CLUSTER;\n> +ERROR: cannot mark index clustered in partitioned table\n\nThis error message does not seem to match my expectation as a user: I\nam not trying to mark an index as clustered, and for a normal table\n\"SET WITHOUT CLUSTER\" does not fail for unclustered tables. I think\nthat behaviour of normal unclustered tables should be shared here as\nwell. At the very least, the error message should be changed.\n\n> ALTER TABLE clstrpart CLUSTER ON clstrpart_idx;\n> ERROR: cannot mark index clustered in partitioned table\n\nA \"HINT: use the CLUSTER command to cluster partitioned tables\" (or\nequivalent) should be added if we decide to keep the clustering APIs\nof ALTER TABLE disabled for partitioned tables, as CLUSTER is now\nimplemented for partitioned tables.\n\n> -DROP TABLE clstrpart;\n\nI believe that this cleanup should not be fully removed, but moved to\nbefore '-- Test CLUSTER with external tuplesorting', as the table is\nnot used after that line.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 23 Sep 2021 20:18:41 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Thu, Sep 23, 2021 at 08:18:41PM +0200, Matthias van de Meent wrote:\n> On Sun, 12 Sept 2021 at 22:10, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Tue, Jul 20, 2021 at 08:27:02PM -0400, Alvaro Herrera wrote:\n> > > I have to wonder if there really *is* a use case for CLUSTER in the\n> > > first place on regular tables, let alone on partitioned tables, which\n> > > are likely to be large and thus take a lot of time.\n> >\n> > The cluster now is done one partition at a time, so it might take a long time,\n> > but doesn't lock the whole partition heirarchy. Same as VACUUM (since v10) and\n> > (since v14) REINDEX.\n> \n> Note: The following review is based on the assumption that this v11\n> revision was meant to contain only one patch. I put this up as a note,\n> because it seemed quite limited when compared to earlier versions of\n> the patchset.\n\nAlvaro's critique was that the patchset was too complicated for what was\nclaimed to be a marginal feature. My response was to rearrange the patchset to\nits minimal form, supporting CLUSTER without marking the index as clustered.\n\n> I noticed that you store the result of find_all_inheritors(...,\n> NoLock) in get_tables_to_cluster_partitioned, without taking care of\n> potential concurrent partition hierarchy changes that the comment on\n> find_all_inheritors warns against or documenting why it is safe, which\n> sounds dangerous in case someone wants to adapt the code. One problem\n> I can think of is that only storing reloid and indoid is not\n> necessarily safe, as they might be reused by drop+create table running\n> parallel to the CLUSTER command.\n\nThe parallel code in vacuum is expand_vacuum_rel(), which is where the\ncorresponding things happens for vacuum full. This patch is to make cluster()\ndo all the same stuff before calling cluster_rel().\n\nWhat VACUUM tries to do is to avoid erroring if a partition is dropped while\ncluster is running. 
cluster_rel() does the same thing by calling\ncluster_multiple_rels() ,which uses CLUOPT_RECHECK.\n\nIf the OIDs wrapped around, I think existing vacuum could accidentally process\na new table with the same OID as a dropped partition. I think cluster would\n*normally* catch that case and error in check_index_is_clusterable():\n| Check that index is in fact an index on the given relation\n\nArguably VACUUM FULL could call cluster() (not cluster_rel()) and pass the\npartitioned table rather than first expanding it. But non-full vacuum needs to \nexpand partitioned tables anyway.\n\n> Apart from that, I think it would be useful (though not strictly\n> necessary for this patch) if you could adapt the current CLUSTER\n> progress reporting to report the progress for the whole partition\n> hierarchy, instead of a new progress report for each relation, as was\n> the case in earlier versions of the patchset.\n\nYea, but this is already true for VACUUM FULL (which uses CLUSTER and supports\npartitioned tables since v10) and REINDEX.\nSee also https://postgr.es/m/20210216064214.GI28165@telsasoft.com\n\nMy goal is to present a minimal patch and avoid any nonessential complexity.\n\n> The v11 patch seems quite incomplete when compared to previous\n> discussions, or at the very least is very limited (no ALTER TABLE\n> clustering commands for partitioned tables, but `CLUSTER ptable USING\n> pindex` is supported). 
If v11 is the new proposed direction for ptable\n> clustering, could you also document these limitations in the\n> cluster.sgml and alter_table.sgml docs?\n\nYou said it's less complete, but it is due to a deliberate reduction in scope.\ncluster.sgml says:\n+ Clustering a partitioned table clusters each of its partitions using the\n+ partition of the specified partitioned index (which must be specified).\n\nThe ALTER restriction hasn't changed, so I didn't touch the documentation.\n\nI am still curious myself to know if this is the direction the patch should\nmove.\n\n> > [ v11-0001-Implement-CLUSTER-of-partitioned-table.patch ]\n> \n> > diff --git a/src/test/regress/expected/cluster.out b/src/test/regress/expected/cluster.out\n> > ...\n> > +ALTER TABLE clstrpart SET WITHOUT CLUSTER;\n> > +ERROR: cannot mark index clustered in partitioned table\n> \n> This error message does not seem to match my expectation as a user: I\n> am not trying to mark an index as clustered, and for a normal table\n> \"SET WITHOUT CLUSTER\" does not fail for unclustered tables. I think\n> that behaviour of normal unclustered tables should be shared here as\n> well. At the very least, the error message should be changed.\n\nThis is the pre-existing behavior.\n\n> > -DROP TABLE clstrpart;\n> \n> I believe that this cleanup should not be fully removed, but moved to\n> before '-- Test CLUSTER with external tuplesorting', as the table is\n> not used after that line.\n\nYou're right - this was from when the patchset handled CLUSTER ON.\nLeaving the index allows testing in pg_dump - a large part of the complexity of\nthe elided patches is to handle restoring a partitioned index, without\nviolating the rule that partitions of a clustered index must also be\nclustered. I adjusted this in my local branch.\n\nThanks for looking. I'm going to see about updating comments based on\ncorresponding parts of vacuum and on this message itself.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 23 Sep 2021 18:56:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Thu, Sep 23, 2021 at 06:56:26PM -0500, Justin Pryzby wrote:\n> On Thu, Sep 23, 2021 at 08:18:41PM +0200, Matthias van de Meent wrote:\n>> Note: The following review is based on the assumption that this v11\n>> revision was meant to contain only one patch. I put this up as a note,\n>> because it seemed quite limited when compared to earlier versions of\n>> the patchset.\n> \n> Alvaro's critique was that the patchset was too complicated for what was\n> claimed to be a marginal feature. My response was to rearrange the patchset to\n> its minimal form, supporting CLUSTER without marking the index as clustered.\n>\n> My goal is to present a minimal patch and avoid any nonessential complexity.\n\nFWIW, my opinion on the matter is similar to Alvaro's, and an extra\nread of the patch gives me the same impression. Let's see if others\nhave an opinion on the matter.\n\n> Thanks for looking. I'm going to see about updating comments based on\n> corresponding parts of vacuum and on this message itself.\n\nIt doesn't feel right to just discard the patch at this stage, and it\nneeds an update, so I have moved it to the next CF for now, waiting on\nauthor. If this does not really move on, my suggestion is to discard\nthe patch at the end of next CF, aka 2022-01.\n--\nMichael",
"msg_date": "Fri, 3 Dec 2021 10:16:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Fri, Dec 03, 2021 at 10:16:24AM +0900, Michael Paquier wrote:\n> On Thu, Sep 23, 2021 at 06:56:26PM -0500, Justin Pryzby wrote:\n> > On Thu, Sep 23, 2021 at 08:18:41PM +0200, Matthias van de Meent wrote:\n> >> Note: The following review is based on the assumption that this v11\n> >> revision was meant to contain only one patch. I put this up as a note,\n> >> because it seemed quite limited when compared to earlier versions of\n> >> the patchset.\n> > \n> > Alvaro's critique was that the patchset was too complicated for what was\n> > claimed to be a marginal feature. My response was to rearrange the patchset to\n> > its minimal form, supporting CLUSTER without marking the index as clustered.\n> >\n> > My goal is to present a minimal patch and avoid any nonessential complexity.\n> \n> FWIW, my opinion on the matter is similar to Alvaro's, and an extra\n> read of the patch gives me the same impression. Let's see if others\n> have an opinion on the matter.\n\nYou and Alvaro thought the patch was too complicated for its value, so I\nreduced the scope to its essential form.\n\nCLUSTER was claimed to be of marginal utility, since the table is locked.\nBut locking one partition at a time would be less disruptive than locking an\nequivalent non-partitioned table.\n\nThere's only about a dozen, other remaining restrictions/limitations on\npartitioned tables, (AMs, triggers, identity, generated, exclusion, CIC,\nFREEZE). \n\nSince it's supported to VACUUM (including VACUUM FULL) and REINDEX a\npartitioned table, I'm still suprised there's much hesitation to support\nCLUSTER (which is used by vacuum full).\n\n> > Thanks for looking. I'm going to see about updating comments based on\n> > corresponding parts of vacuum and on this message itself.\n> \n> It doesn't feel right to just discard the patch at this stage, and it\n> needs an update, so I have moved it to the next CF for now, waiting on\n> author. 
If this does not really move on, my suggestion is to discard\n> the patch at the end of next CF, aka 2022-01.\n\nThis includes minor updates based on Matthias' review (commit message and test\ncase).\n\n-- \nJustin",
"msg_date": "Sat, 8 Jan 2022 13:55:28 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 08:27:02PM -0400, Alvaro Herrera wrote:\n> I have to wonder if there really *is* a use case for CLUSTER in the\n> first place on regular tables, let alone on partitioned tables, which\n> are likely to be large and thus take a lot of time. What justifies\n> spending so much time on this implementation? My impression is that\n> CLUSTER is pretty much a fringe command nowadays, because of the access\n> exclusive lock required.\n> \n> Does anybody actually use it?\n\nI hope that Alvaro will comment on the simplified patches. If multiple people\nthink the patch isn't worth it, feel free to close it. But I don't see how\ncomplexity could be the reason.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 23 Feb 2022 12:47:32 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On 2022-Feb-23, Justin Pryzby wrote:\n\n> I hope that Alvaro will comment on the simplified patches. If multiple people\n> think the patch isn't worth it, feel free to close it. But I don't see how\n> complexity could be the reason.\n\nI gave your patch a look and it seems a reasonable thing to do. Maybe\nnot terribly useful in most cases, but there may be some cases for which\nit is. I found some part of it a bit repetitive, so I moved things\naround a bit. What do think about this?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"After a quick R of TFM, all I can say is HOLY CR** THAT IS COOL! PostgreSQL was\namazing when I first started using it at 7.2, and I'm continually astounded by\nlearning new features and techniques made available by the continuing work of\nthe development team.\"\nBerend Tober, http://archives.postgresql.org/pgsql-hackers/2007-08/msg01009.php",
"msg_date": "Wed, 30 Mar 2022 22:51:43 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "I realized after posting that we used to allow clustering toast tables,\nbut after my changes we no longer do. (Justin's version had a\nRELKIND_HAS_STORAGE test here instead, which seemed a little too lax.) I\ndon't know why we allowed it and I don't know of anyone who has ever\nused that feature and we don't have any test coverage for it, but I\ndon't have any reason to explicitly disallow it either. So I propose to\ncontinue to allow it:\n\n>From 05ba6124422fb7c2fd19575e905e444ba3eef1e5 Mon Sep 17 00:00:00 2001\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Thu, 31 Mar 2022 12:49:57 +0200\nSubject: [PATCH] allow to cluster toast tables\n\n---\n src/backend/commands/cluster.c | 3 ++-\n 1 file changed, 2 insertions(+), 1 deletion(-)\n\ndiff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c\nindex 8417cbdb67..b391d7c434 100644\n--- a/src/backend/commands/cluster.c\n+++ b/src/backend/commands/cluster.c\n@@ -451,7 +451,8 @@ cluster_rel(Oid tableOid, Oid indexOid, ClusterParams *params)\n \t}\n \n \tAssert(OldHeap->rd_rel->relkind == RELKIND_RELATION ||\n-\t\t OldHeap->rd_rel->relkind == RELKIND_MATVIEW);\n+\t\t OldHeap->rd_rel->relkind == RELKIND_MATVIEW ||\n+\t\t OldHeap->rd_rel->relkind == RELKIND_TOASTVALUE);\n \n \t/*\n \t * All predicate locks on the tuples or pages are about to be made\n-- \n2.30.2\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Java is clearly an example of money oriented programming\" (A. Stepanov)\n\n\n",
"msg_date": "Thu, 31 Mar 2022 12:54:36 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 6:54 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I realized after posting that we used to allow clustering toast tables,\n> but after my changes we no longer do. (Justin's version had a\n> RELKIND_HAS_STORAGE test here instead, which seemed a little too lax.) I\n> don't know why we allowed it and I don't know of anyone who has ever\n> used that feature and we don't have any test coverage for it, but I\n> don't have any reason to explicitly disallow it either. So I propose to\n> continue to allow it:\n\nI think that's probably a good decision. It's certainly useful to have\na way to force a rewrite of a TOAST table, although a lot of people\nwho would benefit from that operation probably don't know that they\nneed it, or don't know that they need just that, and end up rewriting\nboth the main table and the TOAST table. Whether it's useful to be\nable to run CLUSTER specifically rather than VACUUM FULL on the TOAST\ntable is less clear, but I don't think we're likely to save anything\nby forbidding it. Maybe we should consider adding a test, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Mar 2022 08:39:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 10:51:43PM +0200, Alvaro Herrera wrote:\n> On 2022-Feb-23, Justin Pryzby wrote:\n> \n> > I hope that Alvaro will comment on the simplified patches. If multiple people\n> > think the patch isn't worth it, feel free to close it. But I don't see how\n> > complexity could be the reason.\n> \n> I gave your patch a look and it seems a reasonable thing to do. Maybe\n> not terribly useful in most cases, but there may be some cases for which\n> it is. I found some part of it a bit repetitive, so I moved things\n> around a bit. What do think about this?\n\nThanks for looking at it.\n\nThe changes to finish_heap_swap() and get_tables_to_cluster() are superfluous,\nright ?\n\nI think this comment is worth preserving (it'd be okay if it lived in the\ncommit message).\n- * Expand partitioned relations for CLUSTER (the corresponding\n- * thing for VACUUM FULL happens in and around expand_vacuum_rel()\n\n+ if (rel != NULL) In this case, maybe it should Assert() that it's\nrelkind=p (mostly for purposes of self-documentation).\n\n+ partition of the specified partitioned index (which must be specified).\nThis is my own language, but now seems repetitive. I think the parenthetical\npart should be a separate sentance: \"For partitioned indexes, the index may not\nbe omitted.\".\n\nOtherwise looks ok.\n\ndiff --git a/doc/src/sgml/ref/cluster.sgml b/doc/src/sgml/ref/cluster.sgml\nindex b3463ae5c46..fbc090cd0b0 100644\n--- a/doc/src/sgml/ref/cluster.sgml\n+++ b/doc/src/sgml/ref/cluster.sgml\n@@ -199,7 +199,8 @@ CLUSTER [VERBOSE]\n \n <para>\n Clustering a partitioned table clusters each of its partitions using the\n- partition of the specified partitioned index (which must be specified).\n+ partition of the specified partitioned index. 
When clustering a\n+ partitioned table, the index may not be omitted.\n </para>\n \n </refsect1>\ndiff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c\nindex 8417cbdb67f..412147f05bc 100644\n--- a/src/backend/commands/cluster.c\n+++ b/src/backend/commands/cluster.c\n@@ -231,6 +231,7 @@ cluster(ParseState *pstate, ClusterStmt *stmt, bool isTopLevel)\n \tparams.options |= CLUOPT_RECHECK;\n \tif (rel != NULL)\n \t{\n+\t\tAssert (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE);\n \t\tcheck_index_is_clusterable(rel, indexOid, true, AccessShareLock);\n \t\trtcs = get_tables_to_cluster_partitioned(cluster_context, indexOid);\n \n@@ -451,6 +452,7 @@ cluster_rel(Oid tableOid, Oid indexOid, ClusterParams *params)\n \t}\n \n \tAssert(OldHeap->rd_rel->relkind == RELKIND_RELATION ||\n+\t\t OldHeap->rd_rel->relkind == RELKIND_TOASTVALUE ||\n \t\t OldHeap->rd_rel->relkind == RELKIND_MATVIEW);\n \n \t/*\ndiff --git a/src/test/regress/expected/cluster.out b/src/test/regress/expected/cluster.out\nindex 3f2758d13f6..6cf18c8d321 100644\n--- a/src/test/regress/expected/cluster.out\n+++ b/src/test/regress/expected/cluster.out\n@@ -305,6 +305,8 @@ WHERE pg_class.oid=indexrelid\n ---------\n (0 rows)\n \n+-- Verify that toast is clusterable\n+CLUSTER pg_toast.pg_toast_826 USING pg_toast_826_index;\n -- Verify that clustering all tables does in fact cluster the right ones\n CREATE USER regress_clstr_user;\n CREATE TABLE clstr_1 (a INT PRIMARY KEY);\ndiff --git a/src/test/regress/sql/cluster.sql b/src/test/regress/sql/cluster.sql\nindex 74118993a82..ae27c35f65d 100644\n--- a/src/test/regress/sql/cluster.sql\n+++ b/src/test/regress/sql/cluster.sql\n@@ -104,6 +104,9 @@ WHERE pg_class.oid=indexrelid\n \tAND pg_class_2.relname = 'clstr_tst'\n \tAND indisclustered;\n \n+-- Verify that toast is clusterable\n+CLUSTER pg_toast.pg_toast_826 USING pg_toast_826_index;\n+\n -- Verify that clustering all tables does in fact cluster the right ones\n CREATE USER 
regress_clstr_user;\n CREATE TABLE clstr_1 (a INT PRIMARY KEY);\n\n\n",
"msg_date": "Thu, 31 Mar 2022 09:10:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 12:54:36PM +0200, Alvaro Herrera wrote:\n> I realized after posting that we used to allow clustering toast tables,\n> but after my changes we no longer do. (Justin's version had a\n> RELKIND_HAS_STORAGE test here instead, which seemed a little too lax.) I\n> don't know why we allowed it and I don't know of anyone who has ever\n> used that feature and we don't have any test coverage for it, but I\n> don't have any reason to explicitly disallow it either. So I propose to\n> continue to allow it:\n\nGood catch.\n\nMy daily vacuum script would've discovered that they're no longer supported, as\nit tests for (among other things) c.relkind IN ('r','t'). That clusters tables\nthat have an indisclustered set and vacuums various others. (BTW, it's the\nsame script that discovered in 2019 that clustering on expressional indexes had\nbeen broken by the heapam changes).\n\nI think the response should be to add a test case, which could be 0001 or\n00099.\n\n\n",
"msg_date": "Thu, 31 Mar 2022 09:11:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "Thanks, pushed.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n\n",
"msg_date": "Sat, 2 Apr 2022 19:11:47 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "Small things here.\n\n1. in VACUUM FULL we only process partitions that are owned by the\ninvoking user. We don't have this test in the new code. I'm not sure\nwhy do we do that there; is it worth doing the same here?\n\n2. We should silently skip a partition that's a foreign table, I\nsuppose.\n\n3. We do mark the index on the partitions as indisclustered AFAICS (we\nclaim that the partitioned table's index is not marked, which is\naccurate). So users doing unadorned CLUSTER afterwards will get the\npartitions clustered too, once they cluster the partitioned table. If\nthey don't want this, they would have to ALTER TABLE to remove the\nmarking. How likely is that this will be a problem? Maybe documenting\nthis point is enough.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nVoy a acabar con todos los humanos / con los humanos yo acabaré\nvoy a acabar con todos (bis) / con todos los humanos acabaré ¡acabaré! (Bender)\n\n\n",
"msg_date": "Sat, 2 Apr 2022 19:21:11 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Sat, Apr 02, 2022 at 07:11:47PM +0200, Alvaro Herrera wrote:\n> Thanks, pushed.\n\nThank you for revisiting it, and thanks to Zhihong Yu for earlier review.\n\nI'll look into your outstanding questions later this week.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 4 Apr 2022 07:59:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Sat, Apr 02, 2022 at 07:21:11PM +0200, Alvaro Herrera wrote:\n> Small things here.\n\n> 1. in VACUUM FULL we only process partitions that are owned by the\n> invoking user. We don't have this test in the new code. I'm not sure\n> why do we do that there; is it worth doing the same here?\n\nThat dates to a556549d7 (see also cbe24a6dd8 for an earlier commit in CLUSTER\nitself). The reason was to avoid blocking if an unprivileged user runs VACUUM\nFULL which would try to lock things (including shared catalogs) before checking\nif they have permission to vacuum them. That commit also initially checks the\nowner of the partitioned table, and then re-checking owner of partitions later\non.\n\nA similar issue exists here. But 1) catalog tables are not partitioned, and,\n2) ownership of a partitioned table is checked immediately. So the problem can\nonly occur if a user who owns a partitioned table but doesn't own all its\npartitions tries to cluster it, and it blocks behind another session. Fixing\nthis is probably a good idea, but seems improbable that it would avoid a DOS.\n\n> 2. We should silently skip a partition that's a foreign table, I\n> suppose.\n\nI think it's not needed, since the loop over index children doesn't see a child\nindex on the foreign table. ?\n\n> 3. We do mark the index on the partitions as indisclustered AFAICS (we\n> claim that the partitioned table's index is not marked, which is\n> accurate). So users doing unadorned CLUSTER afterwards will get the\n> partitions clustered too, once they cluster the partitioned table. If\n> they don't want this, they would have to ALTER TABLE to remove the\n> marking. How likely is that this will be a problem? Maybe documenting\n> this point is enough.\n\nIt seems at least as likely that someone would *want* the partitions to be\nmarked clustered as that someone would want them to be unchanged.\n\nThe cluster mark accurately reflects having been clustered. 
It seems unlikely\nthat a user would want something else to be clustered later by \"cluster;\".\nSince clustering on a partitioned table wasn't supported before, nothing weird\nwill happen to someone who upgrades to v15 unless they elect to use the new\nfeature. As this seems to be POLA, it doesn't even need to be documented. ?",
"msg_date": "Mon, 11 Apr 2022 09:06:09 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Mon, Apr 11, 2022 at 7:06 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sat, Apr 02, 2022 at 07:21:11PM +0200, Alvaro Herrera wrote:\n> > Small things here.\n>\n> > 1. in VACUUM FULL we only process partitions that are owned by the\n> > invoking user. We don't have this test in the new code. I'm not sure\n> > why do we do that there; is it worth doing the same here?\n>\n> That dates to a556549d7 (see also cbe24a6dd8 for an earlier commit in\n> CLUSTER\n> itself). The reason was to avoid blocking if an unprivileged user runs\n> VACUUM\n> FULL which would try to lock things (including shared catalogs) before\n> checking\n> if they have permission to vacuum them. That commit also initially checks\n> the\n> owner of the partitioned table, and then re-checking owner of partitions\n> later\n> on.\n>\n> A similar issue exists here. But 1) catalog tables are not partitioned,\n> and,\n> 2) ownership of a partitioned table is checked immediately. So the\n> problem can\n> only occur if a user who owns a partitioned table but doesn't own all its\n> partitions tries to cluster it, and it blocks behind another session.\n> Fixing\n> this is probably a good idea, but seems improbable that it would avoid a\n> DOS.\n>\n> > 2. We should silently skip a partition that's a foreign table, I\n> > suppose.\n>\n> I think it's not needed, since the loop over index children doesn't see a\n> child\n> index on the foreign table. ?\n>\n> > 3. We do mark the index on the partitions as indisclustered AFAICS (we\n> > claim that the partitioned table's index is not marked, which is\n> > accurate). So users doing unadorned CLUSTER afterwards will get the\n> > partitions clustered too, once they cluster the partitioned table. If\n> > they don't want this, they would have to ALTER TABLE to remove the\n> > marking. How likely is that this will be a problem? 
Maybe documenting\n> > this point is enough.\n>\n> It seems at least as likely that someone would *want* the partitions to be\n> marked clustered as that someone would want them to be unchanged.\n>\n> The cluster mark accurately reflects having been clustered. It seems\n> unlikely\n> that a user would want something else to be clustered later by \"cluster;\".\n> Since clustering on a partitioned table wasn't supported before, nothing\n> weird\n> will happen to someone who upgrades to v15 unless they elect to use the new\n> feature. As this seems to be POLA, it doesn't even need to be\n> documented. ?\n>\nHi,\nFor v13-0002-cluster-early-ownership-check-of-partitions.patch :\n\nonly for it to fails ownership check anyway\n\nto fails -> to fail\n\nCheers",
"msg_date": "Mon, 11 Apr 2022 08:16:07 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Mon, Apr 11, 2022 at 09:06:09AM -0500, Justin Pryzby wrote:\n> On Sat, Apr 02, 2022 at 07:21:11PM +0200, Alvaro Herrera wrote:\n>> 1. in VACUUM FULL we only process partitions that are owned by the\n>> invoking user. We don't have this test in the new code. I'm not sure\n>> why do we do that there; is it worth doing the same here?\n\nI think that adding a test is a good idea for such things. Perhaps we\ncould have an isolation test, but what Justin is proposing seems good\nenough to me for this goal.\n\n> That dates to a556549d7 (see also cbe24a6dd8 for an earlier commit in CLUSTER\n> itself). The reason was to avoid blocking if an unprivileged user runs VACUUM\n> FULL which would try to lock things (including shared catalogs) before checking\n> if they have permission to vacuum them. That commit also initially checks the\n> owner of the partitioned table, and then re-checking owner of partitions later\n> on.\n> \n> A similar issue exists here. But 1) catalog tables are not partitioned, and,\n> 2) ownership of a partitioned table is checked immediately. So the problem can\n> only occur if a user who owns a partitioned table but doesn't own all its\n> partitions tries to cluster it, and it blocks behind another session. Fixing\n> this is probably a good idea, but seems improbable that it would avoid a DOS.\n\nCatalogs are out of the picture as you say and I would not worry about\nthem becoming somewhat partitioned even in the far future. Are you\nsaying that it is possible for a user kicking a CLUSTER command on a\npartitioned table who has no ownership on some of the partitions to\ndo some blocking table_open() calls if the permission check is not\ndone in get_tables_to_cluster_partitioned()? Hence, this user could\nblock the access to such partitions? 
I am not sure that we need to\nadd any new ownership checks here as CLUOPT_RECHECK gets added to the\nparameters in cluster() before calling cluster_multiple_rels(), then\nwe do a mix of try_relation_open() with a skip when we are not the\nowner anymore. So this logic looks sound to me. In short, you don't\nneed this extra check, and the test proposed in 0002 keeps the same\nbehavior.\n\n>> 2. We should silently skip a partition that's a foreign table, I\n>> suppose.\n> \n> I think it's not needed, since the loop over index children doesn't see a child\n> index on the foreign table?\n\nHmm. That may be a sign to add an assertion, at least, or something\nbased on RELKIND_HAS_STORAGE().\n\nI was wondering what 0001 was doing here as that's a separate issue,\nbut it looked fine so I have applied it.\n\n+ /* Use a permanent memory context for the result list */\n+ old_context = MemoryContextSwitchTo(cluster_context);\n+\n rtc = (RelToCluster *) palloc(sizeof(RelToCluster));\n\nIndependently of the extra ownership check, the memory context\nmanipulation has to be fixed and the code should switch to\nRelToCluster only when saving an item.\n\n+CREATE ROLE ptnowner;\nRoles that are created in the regression tests need to be prefixed\nwith \"regress_\", or some buildfarm members will complain. FWIW, I\nenforce -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS in all my dev\nbuilds.\n\nI have added an open item for now, but the whole looks\nstraight-forward to me.\n--\nMichael",
"msg_date": "Wed, 13 Apr 2022 15:50:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 03:50:15PM +0900, Michael Paquier wrote:\n> \n> > That dates to a556549d7 (see also cbe24a6dd8 for an earlier commit in CLUSTER\n> > itself). The reason was to avoid blocking if an unprivileged user runs VACUUM\n> > FULL which would try to lock things (including shared catalogs) before checking\n> > if they have permission to vacuum them. That commit also initially checks the\n> > owner of the partitioned table, and then re-checking owner of partitions later\n> > on.\n> > \n> > A similar issue exists here. But 1) catalog tables are not partitioned, and,\n> > 2) ownership of a partitioned table is checked immediately. So the problem can\n> > only occur if a user who owns a partitioned table but doesn't own all its\n> > partitions tries to cluster it, and it blocks behind another session. Fixing\n> > this is probably a good idea, but seems improbable that it would avoid a DOS.\n> \n> Catalogs are out of the picture as you say and I would not worry about\n> them becoming somewhat partitioned even in the far future. Are you\n> saying that it is possible for a user kicking a CLUSTER command on a\n> partitioned table who has no ownership on some of the partitions to\n> do some blocking table_open() calls if the permission check is not\n> done in get_tables_to_cluster_partitioned()? Hence, this user could\n> block the access to such partitions? I am not sure that we need to\n> add any new ownership checks here as CLUOPT_RECHECK gets added to the\n> parameters in cluster() before calling cluster_multiple_rels(), then\n> we do a mix of try_relation_open() with a skip when we are not the\n> owner anymore. So this logic looks sound to me. In short, you don't\n> need this extra check, and the test proposed in 0002 keeps the same\n> behavior.\n\nAre you sure? 
The ownership re-check in cluster_rel() occurs after acquiring\nlocks.\n\ns1:\npostgres=# CREATE TABLE p(i int) PARTITION BY LIST (i);\npostgres=# CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);\npostgres=# CREATE TABLE p2 PARTITION OF p FOR VALUES IN (2);\npostgres=# CREATE INDEX ON p (i);\npostgres=# CREATE ROLE po WITH LOGIN;\npostgres=# ALTER TABLE p OWNER TO po;\npostgres=# begin; SELECT FROM p1;\n\ns2:\npostgres=> SET client_min_messages =debug;\npostgres=> CLUSTER VERBOSE p USING p_i_idx ;\nLOG: process 26058 still waiting for AccessExclusiveLock on relation 39577 of database 5 after 1000.105 ms\npostgres=> SELECT 39577::regclass;\nregclass | p1\n\n\n",
"msg_date": "Wed, 13 Apr 2022 05:52:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 05:52:14AM -0500, Justin Pryzby wrote:\n> Are you sure? The ownership re-check in cluster_rel() occurs after acquiring\n> locks.\n\nYep, you are right. However, the SQL test does not check for this\nblocking scenario. In short, removing the new ACL check in\nget_tables_to_cluster_partitioned() makes the test behave the same\nway. Could you implement an isolation check to make sure that the\ndifference is visible? The SQL check looks useful in itself, either\nway.\n--\nMichael",
"msg_date": "Thu, 14 Apr 2022 06:11:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "Thanks for the patch -- I have pushed it now, with some wording changes\nand renaming the role to regress_* to avoid buildfarm's ire.\n\nMichaël in addition proposes an isolation test. I'm not sure; is it\nworth the additional test run time? It doesn't seem a critical issue.\nBut if anybody feels like contributing one, step right ahead.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 14 Apr 2022 22:37:06 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 10:37:06PM +0200, Alvaro Herrera wrote:\n> Thanks for the patch -- I have pushed it now, with some wording changes\n> and renaming the role to regress_* to avoid buildfarm's ire.\n\nCool, thanks.\n\n> Michaël in addition proposes an isolation test. I'm not sure; is it\n> worth the additional test run time? It doesn't seem a critical issue.\n> But if anybody feels like contributing one, step right ahead.\n\nWell, I am a bit annoyed that we don't actually check that a CLUSTER\ncommand does not block when doing a CLUSTER on a partitioned table\nwhile a lock is held on one of its partitions. So, attached is a\nproposal of patch to improve the test coverage in this area. While on\nit, I have added a test with a normal table. You can see the\ndifference once you remove the ACL check added recently in\nget_tables_to_cluster_partitioned(). What do you think?\n--\nMichael",
"msg_date": "Sat, 16 Apr 2022 20:58:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On Sat, Apr 16, 2022 at 08:58:50PM +0900, Michael Paquier wrote:\n> Well, I am a bit annoyed that we don't actually check that a CLUSTER\n> command does not block when doing a CLUSTER on a partitioned table\n> while a lock is held on one of its partitions. So, attached is a\n> proposal of patch to improve the test coverage in this area. While on\n> it, I have added a test with a normal table. You can see the\n> difference once you remove the ACL check added recently in\n> get_tables_to_cluster_partitioned(). What do you think?\n\nThis was the last reason why this was listed as an open item, so,\nhearing nothing, I have applied this patch to add those extra tests,\nand switched the item as fixed.\n--\nMichael",
"msg_date": "Tue, 26 Apr 2022 14:17:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
},
{
"msg_contents": "On 2022-Apr-26, Michael Paquier wrote:\n\n> On Sat, Apr 16, 2022 at 08:58:50PM +0900, Michael Paquier wrote:\n> > Well, I am a bit annoyed that we don't actually check that a CLUSTER\n> > command does not block when doing a CLUSTER on a partitioned table\n> > while a lock is held on one of its partitions. So, attached is a\n> > proposal of patch to improve the test coverage in this area.\n> \n> This was the last reason why this was listed as an open item, so,\n> hearing nothing, I have applied this patch to add those extra tests,\n> and switched the item as fixed.\n\nThank you!\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 26 Apr 2022 07:28:15 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER on partitioned index"
}
] |
[
{
"msg_contents": "Forking this thread, since the existing CFs have been closed.\nhttps://www.postgresql.org/message-id/flat/20200914143102.GX18552%40telsasoft.com#58b1056488451f8594b0f0ba40996afd\n\nOn Mon, Sep 14, 2020 at 09:31:03AM -0500, Justin Pryzby wrote:\n> On Sat, Sep 12, 2020 at 10:35:34AM +0900, Michael Paquier wrote:\n> > On Fri, Sep 11, 2020 at 07:13:01PM -0500, Justin Pryzby wrote:\n> > > On Tue, Sep 08, 2020 at 01:31:05PM +0900, Michael Paquier wrote:\n> > >> - CIC on partitioned relations. (Should we also care about DROP INDEX\n> > >> CONCURRENTLY as well?)\n> > > \n> > > Do you have any idea what you think that should look like for DROP INDEX\n> > > CONCURRENTLY ?\n> > \n> > Making the maintenance of the partition tree consistent to the user is\n> > the critical part here, so my guess on this matter is:\n> > 1) Remove each index from the partition tree and mark the indexes as\n> > invalid in the same transaction. This makes sure that after commit no\n> > indexes would get used for scans, and the partition dependency tree\n> > is completely removed with the parent table. That's close to what we\n> > do in index_concurrently_swap() except that we just want to remove the\n> > dependencies with the partitions, and not just swap them of course.\n> > 2) Switch each index to INDEX_DROP_SET_DEAD, one per transaction\n> > should be fine as that prevents inserts.\n> > 3) Finish the index drop.\n> > \n> > Step 2) and 3) could be completely done for each index as part of\n> > index_drop(). The tricky part is to integrate 1) cleanly within the\n> > existing dependency machinery while still knowing about the list of\n> > partitions that can be removed. I think that this part is not that\n> > straight-forward, but perhaps we could just make this logic part of\n> > RemoveRelations() when listing all the objects to remove.\n> \n> Thanks.\n> \n> I see three implementation ideas..\n> \n> 1. I think your way has an issue that the dependencies are lost. If there's an\n> interruption, the user is maybe left with hundreds or thousands of detached\n> indexes to clean up. This is strange since there's actually no detach command\n> for indexes (but they're implicitly \"attached\" when a matching parent index is\n> created). A 2nd issue is that DETACH currently requires an exclusive lock (but\n> see Alvaro's WIP patch).\n\nI think this is a critical problem with this approach. It's not ok if a\nfailure leaves behind N partition indexes not attached to any parent.\nThey may have pretty different names, which is a mess to clean up.\n\n> 2. Maybe the easiest way is to mark all indexes invalid and then drop all\n> partitions (concurrently) and then the partitioned parent. If interrupted,\n> this would leave a parent index marked \"invalid\", and some child tables with no\n> indexes. I think this may be \"ok\". The same thing is possible if a concurrent\n> build is interrupted, right ?\n\nI think adding the \"invalid\" mark in the simple/naive way isn't enough - it has\nto do everything DROP INDEX CONCURRENTLY does (of course).\n\n> 3. I have a patch which changes index_drop() to \"expand\" a partitioned index into\n> its list of children. Each of these becomes a List:\n> | indexId, heapId, userIndexRelation, userHeapRelation, heaplocktag, heaprelid, indexrelid\n> The same process is followed as for a single index, but handling all partitions\n> at once in two transactions total. Arguably, this is bad since that function\n> currently takes a single Oid but would now end up operating on a list of indexes.\n\nThis is what's implemented in the attached. It's very possible I've missed\nopportunities for better simplification/integration.\n\n-- \nJustin",
"msg_date": "Tue, 27 Oct 2020 19:44:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "DROP INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Hi Michael,\n\nOn 10/27/20 8:44 PM, Justin Pryzby wrote:\n> \n>> 3. I have a patch which changes index_drop() to \"expand\" a partitioned index into\n>> its list of children. Each of these becomes a List:\n>> | indexId, heapId, userIndexRelation, userHeapRelation, heaplocktag, heaprelid, indexrelid\n>> The same process is followed as for a single index, but handling all partitions\n>> at once in two transactions total. Arguably, this is bad since that function\n>> currently takes a single Oid but would now ends up operating on a list of indexes.\n> \n> This is what's implemented in the attached. It's very possible I've missed\n> opportunities for better simplification/integration.\n\nIt appears there are still some issues to be resolved with this patch, \nbut the next step seems to be for you to have a look at Justin's most \nrecent patch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 5 Mar 2021 09:27:05 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: DROP INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 09:27:05AM -0500, David Steele wrote:\n> It appears there are still some issues to be resolved with this patch, but\n> the next step seems to be for you to have a look at Justin's most recent\n> patch.\n\nNot sure if I'll be able to do that by the end of this month. Looking\nquickly at the patch, I am not much a fan of the new code path\nintroduced for the deletion of dependent objects in the partition\ntree, so my gut is telling me that we need to think harder about the\nproblem at hand first.\n--\nMichael",
"msg_date": "Sat, 6 Mar 2021 10:43:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: DROP INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 6:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Forking this thread, since the existing CFs have been closed.\n> https://www.postgresql.org/message-id/flat/20200914143102.GX18552%40telsasoft.com#58b1056488451f8594b0f0ba40996afd\n>\n> On Mon, Sep 14, 2020 at 09:31:03AM -0500, Justin Pryzby wrote:\n> > On Sat, Sep 12, 2020 at 10:35:34AM +0900, Michael Paquier wrote:\n> > > On Fri, Sep 11, 2020 at 07:13:01PM -0500, Justin Pryzby wrote:\n> > > > On Tue, Sep 08, 2020 at 01:31:05PM +0900, Michael Paquier wrote:\n> > > >> - CIC on partitioned relations. (Should we also care about DROP INDEX\n> > > >> CONCURRENTLY as well?)\n> > > >\n> > > > Do you have any idea what you think that should look like for DROP INDEX\n> > > > CONCURRENTLY ?\n> > >\n> > > Making the maintenance of the partition tree consistent to the user is\n> > > the critical part here, so my guess on this matter is:\n> > > 1) Remove each index from the partition tree and mark the indexes as\n> > > invalid in the same transaction. This makes sure that after commit no\n> > > indexes would get used for scans, and the partition dependency tree\n> > > is completely removed with the parent table. That's close to what we\n> > > do in index_concurrently_swap() except that we just want to remove the\n> > > dependencies with the partitions, and not just swap them of course.\n> > > 2) Switch each index to INDEX_DROP_SET_DEAD, one per transaction\n> > > should be fine as that prevents inserts.\n> > > 3) Finish the index drop.\n> > >\n> > > Step 2) and 3) could be completely done for each index as part of\n> > > index_drop(). The tricky part is to integrate 1) cleanly within the\n> > > existing dependency machinery while still knowing about the list of\n> > > partitions that can be removed. I think that this part is not that\n> > > straight-forward, but perhaps we could just make this logic part of\n> > > RemoveRelations() when listing all the objects to remove.\n> >\n> > Thanks.\n> >\n> > I see three implementation ideas..\n> >\n> > 1. I think your way has an issue that the dependencies are lost. If there's an\n> > interruption, the user is maybe left with hundreds or thousands of detached\n> > indexes to clean up. This is strange since there's actually no detach command\n> > for indexes (but they're implicitly \"attached\" when a matching parent index is\n> > created). A 2nd issue is that DETACH currently requires an exclusive lock (but\n> > see Alvaro's WIP patch).\n>\n> I think this is a critical problem with this approach. It's not ok if a\n> failure leaves behind N partition indexes not attached to any parent.\n> They may have pretty different names, which is a mess to clean up.\n>\n> > 2. Maybe the easiest way is to mark all indexes invalid and then drop all\n> > partitions (concurrently) and then the partitioned parent. If interrupted,\n> > this would leave a parent index marked \"invalid\", and some child tables with no\n> > indexes. I think this may be \"ok\". The same thing is possible if a concurrent\n> > build is interrupted, right ?\n>\n> I think adding the \"invalid\" mark in the simple/naive way isn't enough - it has\n> to do everything DROP INDEX CONCURRENTLY does (of course).\n>\n> > 3. I have a patch which changes index_drop() to \"expand\" a partitioned index into\n> > its list of children. Each of these becomes a List:\n> > | indexId, heapId, userIndexRelation, userHeapRelation, heaplocktag, heaprelid, indexrelid\n> > The same process is followed as for a single index, but handling all partitions\n> > at once in two transactions total. Arguably, this is bad since that function\n> > currently takes a single Oid but would now end up operating on a list of indexes.\n>\n> This is what's implemented in the attached. It's very possible I've missed\n> opportunities for better simplification/integration.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 14 Jul 2021 21:18:12 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DROP INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 09:18:12PM +0530, vignesh C wrote:\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nI'm withdrawing this patch at least until the corresponding patch for CIC is\nprogressed.\n\nhttps://commitfest.postgresql.org/33/2815/\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 24 Jul 2021 11:46:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: DROP INDEX CONCURRENTLY on partitioned index"
}
] |
[
{
"msg_contents": "Hi,\n\nLog message for GSS connection is missing once connection\nauthorization is successful. We have similar log messages for SSL\nconnections once the connection authorization is successful. This\nmessage will help the user to identify the connection that was\nselected from the logfile. I'm not sure if this log message was\nintentionally left out due to some reason for GSS.\nIf the above analysis looks correct, then please find a patch that\nadds log for gss connections.\n\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 28 Oct 2020 08:29:25 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 8:29 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Log message for GSS connection is missing once connection\n> authorization is successful. We have similar log messages for SSL\n> connections once the connection authorization is successful. This\n> message will help the user to identify the connection that was\n> selected from the logfile. I'm not sure if this log message was\n> intentionally left out due to some reason for GSS.\n> If the above analysis looks correct, then please find a patch that\n> adds log for gss connections.\n>\n> Thoughts?\n>\n\n+1 for the idea. This is useful in knowing whether or not the user is\nauthenticated using GSS APIs.\n\nHere are few comments on the patch:\n\n1. How about using(like below) #ifdef, #elif ... #endif directives\ninstead of #ifdef, #endif, #ifdef, #endif?\n\n#ifdef USE_SSL\n blah,blah,blah...\n#elif defined(ENABLE_GSS)\n blah,blah,blah...\n#else\n blah,blah,blah...\n#endif\n\n2. I think we must use be_gssapi_get_auth(port) instead of\nbe_gssapi_get_enc(port) in the if condition, because we log for gss\nauthentications irrespective of encoding is enabled or not. Put it\nanother way, maybe gss authentications are possible without\nencoding[1]. We can have the information whether the encryption is\nenabled or not in the log message, be_gssapi_get_enc(port) ? _(\"on\") :\n_(\"off\"),.\n#ifdef ENABLE_GSS\n if (be_gssapi_get_enc(port))\n ereport(LOG,\n\nWe do not need be_gssapi_get_auth(port) ? _(\"on\") : _(\"off\") this in\nthe log message, only in the if condition we need this check.\n\n[1] By looking at the below code it seems that gss authentication\nwithout encryption is possible.\n #ifdef ENABLE_GSS\n port->gss->auth = true;\n if (port->gss->enc)\n status = pg_GSS_checkauth(port);\n else\n {\n sendAuthRequest(port, AUTH_REQ_GSS, NULL, 0);\n status = pg_GSS_recvauth(port);\n }\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Oct 2020 09:48:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "Greetings,\n\n* vignesh C (vignesh21@gmail.com) wrote:\n> Log message for GSS connection is missing once connection\n> authorization is successful. We have similar log messages for SSL\n> connections once the connection authorization is successful. This\n> message will help the user to identify the connection that was\n> selected from the logfile. I'm not sure if this log message was\n> intentionally left out due to some reason for GSS.\n> If the above analysis looks correct, then please find a patch that\n> adds log for gss connections.\n> \n> Thoughts?\n\nI agree with logging the principal and if GSS encryption is being used\nor not as part of the connection authorized message. Not logging the\nprincipal isn't great and has been something I've wanted to fix for a\nwhile, so glad to see someone else is thinking about this.\n\n> From 95c906b9eaf1493ad10ac65d6cf7b27a7dd6acb9 Mon Sep 17 00:00:00 2001\n> From: Vignesh C <vignesh21@gmail.com>\n> Date: Wed, 28 Oct 2020 08:19:06 +0530\n> Subject: [PATCH v1] Log message for GSS connection is missing once connection\n> authorization is successful.\n> \n> Log message for GSS connection is missing once connection authorization is\n> successful. We have similar log message for SSL connections once the connection\n> authorization is successful. This message will help the user to identify the\n> connection that was selected from the logfile.\n> ---\n> src/backend/utils/init/postinit.c | 29 +++++++++++++++++++++++++++++\n> 1 file changed, 29 insertions(+)\n> \n> diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\n> index d4ab4c7..0fd38b7 100644\n> --- a/src/backend/utils/init/postinit.c\n> +++ b/src/backend/utils/init/postinit.c\n> @@ -267,6 +267,21 @@ PerformAuthentication(Port *port)\n> \t\t\t\t\t\t\t\t be_tls_get_compression(port) ? 
_(\"on\") : _(\"off\"))));\n> \t\t\telse\n> #endif\n> +#ifdef ENABLE_GSS\n> +\t\t\tif (be_gssapi_get_enc(port))\n\nThis is checking if GSS *encryption* is being used.\n\n> +\t\t\t\tereport(LOG,\n> +\t\t\t\t\t\t(port->application_name != NULL\n> +\t\t\t\t\t\t ? errmsg(\"replication connection authorized: user=%s application_name=%s GSS enabled (gssapi autorization=%s, principal=%s)\",\n> +\t\t\t\t\t\t\t\t port->user_name,\n> +\t\t\t\t\t\t\t\t port->application_name,\n> +\t\t\t\t\t\t\t\t be_gssapi_get_auth(port) ? _(\"on\") : _(\"off\"),\n> +\t\t\t\t\t\t\t\t be_gssapi_get_princ(port))\n> +\t\t\t\t\t\t : errmsg(\"replication connection authorized: user=%s GSS enabled (gssapi autorization=%s, principal=%s)\",\n> +\t\t\t\t\t\t\t\t port->user_name,\n> +\t\t\t\t\t\t\t\t be_gssapi_get_auth(port) ? _(\"on\") : _(\"off\"),\n> +\t\t\t\t\t\t\t\t be_gssapi_get_princ(port))));\n\nThis is checking if GSS *authentication* was used.\n\nYou can certainly have GSS authentication used without encryption, and\nyou can (though I'm not sure how useful it really is) have GSS\nencryption with 'trust' authentication, so we should really break this\nout into their own sets of checks, which would look something like:\n\nif (be_gssapi_get_auth(port) || be_gssapi_get_princ(port))\n connection authorized: GSS %s (principal=%s)\n\nWith the first %s being: (authentication || encrypted || authenticated and encrypted) \n\nOr something along those lines, I would think.\n\nI don't think 'enabled' is a good term to use here.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 28 Oct 2020 12:14:48 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "Thanks Stephen for your comments.\n\nOn Wed, Oct 28, 2020 at 9:44 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * vignesh C (vignesh21@gmail.com) wrote:\n> > Log message for GSS connection is missing once connection\n> > authorization is successful. We have similar log messages for SSL\n> > connections once the connection authorization is successful. This\n> > message will help the user to identify the connection that was\n> > selected from the logfile. I'm not sure if this log message was\n> > intentionally left out due to some reason for GSS.\n> > If the above analysis looks correct, then please find a patch that\n> > adds log for gss connections.\n> >\n> > Thoughts?\n>\n> I agree with logging the principal and if GSS encryption is being used\n> or not as part of the connection authorized message. Not logging the\n> principal isn't great and has been something I've wanted to fix for a\n> while, so glad to see someone else is thinking about this.\n>\n> > From 95c906b9eaf1493ad10ac65d6cf7b27a7dd6acb9 Mon Sep 17 00:00:00 2001\n> > From: Vignesh C <vignesh21@gmail.com>\n> > Date: Wed, 28 Oct 2020 08:19:06 +0530\n> > Subject: [PATCH v1] Log message for GSS connection is missing once connection\n> > authorization is successful.\n> >\n> > Log message for GSS connection is missing once connection authorization is\n> > successful. We have similar log message for SSL connections once the connection\n> > authorization is successful. 
This message will help the user to identify the\n> > connection that was selected from the logfile.\n> > ---\n> > src/backend/utils/init/postinit.c | 29 +++++++++++++++++++++++++++++\n> > 1 file changed, 29 insertions(+)\n> >\n> > diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\n> > index d4ab4c7..0fd38b7 100644\n> > --- a/src/backend/utils/init/postinit.c\n> > +++ b/src/backend/utils/init/postinit.c\n> > @@ -267,6 +267,21 @@ PerformAuthentication(Port *port)\n> > be_tls_get_compression(port) ? _(\"on\") : _(\"off\"))));\n> > else\n> > #endif\n> > +#ifdef ENABLE_GSS\n> > + if (be_gssapi_get_enc(port))\n>\n> This is checking if GSS *encryption* is being used.\n>\n> > + ereport(LOG,\n> > + (port->application_name != NULL\n> > + ? errmsg(\"replication connection authorized: user=%s application_name=%s GSS enabled (gssapi autorization=%s, principal=%s)\",\n> > + port->user_name,\n> > + port->application_name,\n> > + be_gssapi_get_auth(port) ? _(\"on\") : _(\"off\"),\n> > + be_gssapi_get_princ(port))\n> > + : errmsg(\"replication connection authorized: user=%s GSS enabled (gssapi autorization=%s, principal=%s)\",\n> > + port->user_name,\n> > + be_gssapi_get_auth(port) ? 
_(\"on\") : _(\"off\"),\n> > + be_gssapi_get_princ(port))));\n>\n> This is checking if GSS *authentication* was used.\n>\n> You can certainly have GSS authentication used without encryption, and\n> you can (though I'm not sure how useful it really is) have GSS\n> encryption with 'trust' authentication, so we should really break this\n> out into their own sets of checks, which would look something like:\n>\n> if (be_gssapi_get_auth(port) || be_gssapi_get_princ(port))\n> connection authorized: GSS %s (principal=%s)\n>\n> With the first %s being: (authentication || encrypted || authenticated and encrypted)\n>\n> Or something along those lines, I would think.\n>\n> I don't think 'enabled' is a good term to use here.\n>\n\nI have made a v2 patch based on the changes you have suggested. The\npatch for the same is attached.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 29 Oct 2020 10:04:56 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "Thanks Bharath for your comments.\n\nOn Wed, Oct 28, 2020 at 9:48 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Oct 28, 2020 at 8:29 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Log message for GSS connection is missing once connection\n> > authorization is successful. We have similar log messages for SSL\n> > connections once the connection authorization is successful. This\n> > message will help the user to identify the connection that was\n> > selected from the logfile. I'm not sure if this log message was\n> > intentionally left out due to some reason for GSS.\n> > If the above analysis looks correct, then please find a patch that\n> > adds log for gss connections.\n> >\n> > Thoughts?\n> >\n>\n> +1 for the idea. This is useful in knowing whether or not the user is\n> authenticated using GSS APIs.\n>\n> Here are few comments on the patch:\n>\n> 1. How about using(like below) #ifdef, #elif ... #endif directives\n> instead of #ifdef, #endif, #ifdef, #endif?\n>\n> #ifdef USE_SSL\n> blah,blah,blah...\n> #elif defined(ENABLE_GSS)\n> blah,blah,blah...\n> #else\n> blah,blah,blah...\n> #endif\n>\n\nI preferred the way it is in the patch to maintain the similar style\nthat is used in other places like fe-connect.c.\n\n> 2. I think we must use be_gssapi_get_auth(port) instead of\n> be_gssapi_get_enc(port) in the if condition, because we log for gss\n> authentications irrespective of encoding is enabled or not. Put it\n> another way, maybe gss authentications are possible without\n> encoding[1]. We can have the information whether the encryption is\n> enabled or not in the log message, be_gssapi_get_enc(port) ? _(\"on\") :\n> _(\"off\"),.\n> #ifdef ENABLE_GSS\n> if (be_gssapi_get_enc(port))\n> ereport(LOG,\n>\n> We do not need be_gssapi_get_auth(port) ? 
_(\"on\") : _(\"off\") this in\n> the log message, only in the if condition we need this check.\n>\n> [1] By looking at the below code it seems that gss authentication\n> without encryption is possible.\n> #ifdef ENABLE_GSS\n> port->gss->auth = true;\n> if (port->gss->enc)\n> status = pg_GSS_checkauth(port);\n> else\n> {\n> sendAuthRequest(port, AUTH_REQ_GSS, NULL, 0);\n> status = pg_GSS_recvauth(port);\n> }\n\nStephen also shared his thoughts for the above changes, I have\nprovided an updated patch for the same in the previous mail. Please\nhave a look and let me know if you have any comments.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Oct 2020 11:01:21 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "Please add this to commitfest to not lose track of it.\n\nI took a look at v2 patch, here are some comments.\n\nOn Thu, Oct 29, 2020 at 11:01 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Stephen also shared his thoughts for the above changes, I have\n> provided an updated patch for the same in the previous mail. Please\n> have a look and let me know if you have any comments.\n>\n> if (be_gssapi_get_auth(port) || be_gssapi_get_princ(port))\n> connection authorized: GSS %s (principal=%s)\n> With the first %s being: (authentication || encrypted || authenticated\nand encrypted)\n>\n\n1. Instead of just \"on/off\" after GSS %s in the log message, wouldn't it be\ninformative if we have authenticated and/or encrypted as suggested by\nStephen?\n\nSo the log message would look like this:\n\nif(be_gssapi_get_auth(port))\nreplication connection authorized: user=bob application_name=foo GSS\nauthenticated (principal=bar)\n\nif(be_gssapi_get_enc(port))\nreplication connection authorized: user=bob application_name=foo GSS\nencrypted (principal=bar)\n\nif(be_gssapi_get_auth(port) && be_gssapi_get_enc(port))\nreplication connection authorized: user=bob application_name=foo GSS\nauthenticated and encrypted (principal=bar)\n\n+#ifdef ENABLE_GSS\n+ if (be_gssapi_get_auth(port) || be_gssapi_get_princ(port))\n+ ereport(LOG,\n+ (port->application_name != NULL\n+ ? errmsg(\"replication connection authorized:\nuser=%s application_name=%s GSS %s (principal=%s)\",\n+ port->user_name,\n+ port->application_name,\n+ be_gssapi_get_auth(port) ||\nbe_gssapi_get_enc(port) ? _(\"on\") : _(\"off\"),\n+ be_gssapi_get_princ(port))\n+ : errmsg(\"replication connection authorized:\nuser=%s GSS %s (principal=%s)\",\n+ port->user_name,\n+ be_gssapi_get_auth(port) ||\nbe_gssapi_get_enc(port) ? _(\"on\") : _(\"off\"),\n+ be_gssapi_get_princ(port))));\n+ else\n\n2. 
I think the log message preparation looks a bit clumsy with ternary\noperators and duplicate log message texts(I know that we already do this\nfor SSL). Can we have the log message prepared using StringInfoData data\nstructure/APIs and use just a single ereport? This way, that part of the\ncode looks cleaner.\n\nHere's what I'm visualizing:\n\nif (Log_connections)\n{\nStringInfoData msg;\n\nif (am_walsender)\nappend(\"replication connection authorized: user=%s\");\nelse\nappend(\"connection authorized: user=%s database=%s\");\n\nif (port->application_name)\nappend(\"application_name=%s\");\n\n#ifdef USE_SSL\nif (port->ssl_in_use)\nappend(\"SSL enabled (protocol=%s, cipher=%s, bits=%d, compression=%s\");\n#elif defined(ENABLE_GSS)\n blah,blah,blah\n#endif\n\nereport (LOG, msg.data);\n}\n\n3. + if (be_gssapi_get_auth(port) || be_gssapi_get_princ(port))\n\nIf be_gssapi_get_auth(port) returns false, I think there's no way that\nbe_gssapi_get_princ(port) would return a non null value, see the comment.\nThe function be_gssapi_get_princ() returns NULL if the auth is false, so\nthe check if ( be_gssapi_get_princ(port)) would suffice.\n\ngss_name_t name; /* GSSAPI client name */\n\n* char *princ; /* GSSAPI Principal used for auth, NULL if\n * GSSAPI auth was not used */*\n bool auth; /* GSSAPI Authentication used */\n bool enc; /* GSSAPI encryption in use */\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 29 Oct 2020 12:15:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "Greetings,\n\n* vignesh C (vignesh21@gmail.com) wrote:\n> I have made a v2 patch based on the changes you have suggested. The\n> patch for the same is attached.\n\n> From b067cf823750f200102be0a0cad9a26a08e29a92 Mon Sep 17 00:00:00 2001\n> From: Vignesh C <vignesh21@gmail.com>\n> Date: Wed, 28 Oct 2020 08:19:06 +0530\n> Subject: [PATCH v2] Log message for GSS connection is missing once connection\n> authorization is successful.\n> \n> Log message for GSS connection is missing once connection authorization is\n> successful. We have similar log message for SSL connections once the connection\n> authorization is successful. This message will help the user to identify the\n> connection that was selected from the logfile.\n\nJust to be clear- it's not that the message is 'missing', it's just not\nproviding the (certainly useful) information about how the connection\nwas authorized. The commit message should make it clear that what we're\ndoing here is improving the connection authorization message for GSS\nauthenticated or encrypted connections.\n\n> diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\n> index d4ab4c7..7980e92 100644\n> --- a/src/backend/utils/init/postinit.c\n> +++ b/src/backend/utils/init/postinit.c\n> @@ -267,6 +267,21 @@ PerformAuthentication(Port *port)\n> \t\t\t\t\t\t\t\t be_tls_get_compression(port) ? _(\"on\") : _(\"off\"))));\n> \t\t\telse\n> #endif\n> +#ifdef ENABLE_GSS\n> +\t\t\tif (be_gssapi_get_auth(port) || be_gssapi_get_princ(port))\n> +\t\t\t\tereport(LOG,\n> +\t\t\t\t\t\t(port->application_name != NULL\n> +\t\t\t\t\t\t ? errmsg(\"replication connection authorized: user=%s application_name=%s GSS %s (principal=%s)\",\n> +\t\t\t\t\t\t\t\t port->user_name,\n> +\t\t\t\t\t\t\t\t port->application_name,\n> +\t\t\t\t\t\t\t\t be_gssapi_get_auth(port) || be_gssapi_get_enc(port) ? 
_(\"on\") : _(\"off\"),\n> +\t\t\t\t\t\t\t\t be_gssapi_get_princ(port))\n> +\t\t\t\t\t\t : errmsg(\"replication connection authorized: user=%s GSS %s (principal=%s)\",\n> +\t\t\t\t\t\t\t\t port->user_name,\n> +\t\t\t\t\t\t\t\t be_gssapi_get_auth(port) || be_gssapi_get_enc(port) ? _(\"on\") : _(\"off\"),\n> +\t\t\t\t\t\t\t\t be_gssapi_get_princ(port))));\n> +\t\t\telse\n> +#endif\n\nNo, this isn't what I was suggesting. \"on\" and \"off\" really isn't\ncommunicating the details about the GSS-using connection. What I\nsuggested before was something like:\n\nerrmsg(\"replication connection authorized: user=%s application_name=%s GSS %s (principal=%s)\",\n\tport->user_name,\n\tport->application_name,\n\t(be_gssapi_get_auth(port) && be_gssapi_get_enc(port)) ? \"authenticated and encrypted\" : be_gssapi_get_auth(port) ? \"authenticated\" : \"encrypted\",\n\tbe_gssapi_get_princ(port))\n\nThough I'll admit that perhaps there's something better which could be\ndone here- but just 'on/off' certainly isn't that. Another option might\nbe:\n\nerrmsg(\"replication connection authorized: user=%s application_name=%s GSS authenticated: %s, encrypted: %s, principal: %s\",\n\tport->user_name,\n\tport->application_name,\n\tbe_gssapi_get_auth(port) ? \"yes\" : \"no\",\n\tbe_gssapi_get_enc(port) ? \"yes\" : \"no\",\n\tbe_gssapi_get_princ(port))\n\nAlso, it would be good to see if there's a way to add to the tests we\nhave for GSSAPI authentication/encryption to show that we hit each of\nthe possible cases and check that we get the correct messages in the log\nas a result.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 29 Oct 2020 09:56:41 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 7:26 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * vignesh C (vignesh21@gmail.com) wrote:\n> > I have made a v2 patch based on the changes you have suggested. The\n> > patch for the same is attached.\n>\n> > From b067cf823750f200102be0a0cad9a26a08e29a92 Mon Sep 17 00:00:00 2001\n> > From: Vignesh C <vignesh21@gmail.com>\n> > Date: Wed, 28 Oct 2020 08:19:06 +0530\n> > Subject: [PATCH v2] Log message for GSS connection is missing once connection\n> > authorization is successful.\n> >\n> > Log message for GSS connection is missing once connection authorization is\n> > successful. We have similar log message for SSL connections once the connection\n> > authorization is successful. This message will help the user to identify the\n> > connection that was selected from the logfile.\n>\n> Just to be clear- it's not that the message is 'missing', it's just not\n> providing the (certainly useful) information about how the connection\n> was authorized. The commit message should make it clear that what we're\n> doing here is improving the connection authorization message for GSS\n> authenticated or encrypted connections.\n>\n\nI have updated the commit message accordingly.\n\n> > diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c\n> > index d4ab4c7..7980e92 100644\n> > --- a/src/backend/utils/init/postinit.c\n> > +++ b/src/backend/utils/init/postinit.c\n> > @@ -267,6 +267,21 @@ PerformAuthentication(Port *port)\n> > be_tls_get_compression(port) ? _(\"on\") : _(\"off\"))));\n> > else\n> > #endif\n> > +#ifdef ENABLE_GSS\n> > + if (be_gssapi_get_auth(port) || be_gssapi_get_princ(port))\n> > + ereport(LOG,\n> > + (port->application_name != NULL\n> > + ? errmsg(\"replication connection authorized: user=%s application_name=%s GSS %s (principal=%s)\",\n> > + port->user_name,\n> > + port->application_name,\n> > + be_gssapi_get_auth(port) || be_gssapi_get_enc(port) ? 
_(\"on\") : _(\"off\"),\n> > + be_gssapi_get_princ(port))\n> > + : errmsg(\"replication connection authorized: user=%s GSS %s (principal=%s)\",\n> > + port->user_name,\n> > + be_gssapi_get_auth(port) || be_gssapi_get_enc(port) ? _(\"on\") : _(\"off\"),\n> > + be_gssapi_get_princ(port))));\n> > + else\n> > +#endif\n>\n> No, this isn't what I was suggesting. \"on\" and \"off\" really isn't\n> communicating the details about the GSS-using connection. What I\n> suggested before was something like:\n>\n> errmsg(\"replication connection authorized: user=%s application_name=%s GSS %s (principal=%s)\",\n> port->user_name,\n> port->application_name,\n> (be_gssapi_get_auth(port) && be_gssapi_get_enc(port)) ? \"authenticated and encrypted\" : be_gssapi_get_auth(port) ? \"authenticated\" : \"encrypted\",\n> be_gssapi_get_princ(port))\n>\n> Though I'll admit that perhaps there's something better which could be\n> done here- but just 'on/off' certainly isn't that. Another option might\n> be:\n>\n> errmsg(\"replication connection authorized: user=%s application_name=%s GSS authenticated: %s, encrypted: %s, principal: %s\",\n> port->user_name,\n> port->application_name,\n> be_gssapi_get_auth(port) ? \"yes\" : \"no\",\n> be_gssapi_get_enc(port) ? \"yes\" : \"no\",\n> be_gssapi_get_princ(port))\n>\n\nI like the above method that you suggested, I have changed it based on\nthe above.\n\n> Also, it would be good to see if there's a way to add to the tests we\n> have for GSSAPI authentication/encryption to show that we hit each of\n> the possible cases and check that we get the correct messages in the log\n> as a result.\n>\n\nI have added the log validation to the existing tests that are present\nfor authentication.\n\nAttached v3 patch has the change for the same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 30 Oct 2020 18:13:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "Thanks for the comments Bharath.\n\nOn Thu, Oct 29, 2020 at 12:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> 1. Instead of just \"on/off\" after GSS %s in the log message, wouldn't it be informative if we have authenticated and/or encrypted as suggested by Stephen?\n>\n> So the log message would look like this:\n>\n> if(be_gssapi_get_auth(port))\n> replication connection authorized: user=bob application_name=foo GSS authenticated (principal=bar)\n>\n> if(be_gssapi_get_enc(port))\n> replication connection authorized: user=bob application_name=foo GSS encrypted (principal=bar)\n>\n> if(be_gssapi_get_auth(port) && be_gssapi_get_enc(port))\n> replication connection authorized: user=bob application_name=foo GSS authenticated and encrypted (principal=bar)\n>\n> +#ifdef ENABLE_GSS\n> + if (be_gssapi_get_auth(port) || be_gssapi_get_princ(port))\n> + ereport(LOG,\n> + (port->application_name != NULL\n> + ? errmsg(\"replication connection authorized: user=%s application_name=%s GSS %s (principal=%s)\",\n> + port->user_name,\n> + port->application_name,\n> + be_gssapi_get_auth(port) || be_gssapi_get_enc(port) ? _(\"on\") : _(\"off\"),\n> + be_gssapi_get_princ(port))\n> + : errmsg(\"replication connection authorized: user=%s GSS %s (principal=%s)\",\n> + port->user_name,\n> + be_gssapi_get_auth(port) || be_gssapi_get_enc(port) ? _(\"on\") : _(\"off\"),\n> + be_gssapi_get_princ(port))));\n> + else\n>\n\nThis is handled in v3 patch posted at [1].\n\n> 2. I think the log message preparation looks a bit clumsy with ternary operators and duplicate log message texts(I know that we already do this for SSL). Can we have the log message prepared using StringInfoData data structure/APIs and use just a single ereport? 
This way, that part of the code looks cleaner.\n>\n> Here's what I'm visualizing:\n>\n> if (Log_connections)\n> {\n> StringInfoData msg;\n>\n> if (am_walsender)\n> append(\"replication connection authorized: user=%s\");\n> else\n> append(\"connection authorized: user=%s database=%s\");\n>\n> if (port->application_name)\n> append(\"application_name=%s\");\n>\n> #ifdef USE_SSL\n> if (port->ssl_in_use)\n> append(\"SSL enabled (protocol=%s, cipher=%s, bits=%d, compression=%s\");\n> #elif defined(ENABLE_GSS)\n> blah,blah,blah\n> #endif\n>\n> ereport (LOG, msg.data);\n> }\n\nThis is handled in the v3 patch posted.\n\n>\n> 3. + if (be_gssapi_get_auth(port) || be_gssapi_get_princ(port))\n>\n> If be_gssapi_get_auth(port) returns false, I think there's no way that be_gssapi_get_princ(port) would return a non null value, see the comment. The function be_gssapi_get_princ() returns NULL if the auth is false, so the check if ( be_gssapi_get_princ(port)) would suffice.\n>\n> gss_name_t name; /* GSSAPI client name */\n> char *princ; /* GSSAPI Principal used for auth, NULL if\n> * GSSAPI auth was not used */\n> bool auth; /* GSSAPI Authentication used */\n> bool enc; /* GSSAPI encryption in use */\n>\n\nThis is handled in the v3 patch posted.\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Oct 2020 18:22:35 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Fri, 30 Oct 2020 at 09:43, vignesh C <vignesh21@gmail.com> wrote:\n\n>\n> Attached v3 patch has the change for the same.\n>\n>\nHi Vignesh,\n\n+ appendStringInfo(&logmsg, \"replication \");\n+\n+ appendStringInfo(&logmsg, \"connection authorized: user=%s\",\n+ port->user_name);\n+ if (!am_walsender)\n+ appendStringInfo(&logmsg, \" database=%s\", port->database_name);\n+\n+ if (port->application_name != NULL)\n+ appendStringInfo(&logmsg, \" application_name=%s\",\n+ port->application_name);\n+\n\nYour approach breaks localization. You should use multiple errmsg.\n\n+$node->append_conf('postgresql.conf', \"logging_collector= 'on'\");\n+$node->append_conf('postgresql.conf', \"log_connections= 'on'\");\n\nbooleans don't need quotes.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 30 Oct 2020 10:05:28 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 6:35 PM Euler Taveira\n<euler.taveira@2ndquadrant.com> wrote:\n>\n> + appendStringInfo(&logmsg, \"replication \");\n> +\n> + appendStringInfo(&logmsg, \"connection authorized: user=%s\",\n> + port->user_name);\n> + if (!am_walsender)\n> + appendStringInfo(&logmsg, \" database=%s\", port->database_name);\n> +\n> + if (port->application_name != NULL)\n> + appendStringInfo(&logmsg, \" application_name=%s\",\n> + port->application_name);\n> +\n>\n> Your approach breaks localization. You should use multiple errmsg.\n>\n\nIIUC, isn't it enough calling a single errmsg() inside ereport() with\nthe prepared logmsg.data (which is a string)? The errmsg() function\nwill do the required translation of the logmsg.data. Why do we need\nmultiple errmsg() calls? Could you please elaborate a bit on how the\nway currently it is done in the patch breaks localization?\n\n+ ereport(LOG, errmsg(\"%s\", logmsg.data));\n\n>\n> +$node->append_conf('postgresql.conf', \"logging_collector= 'on'\");\n> +$node->append_conf('postgresql.conf', \"log_connections= 'on'\");\n>\n> booleans don't need quotes.\n>\n\nI think that's not correct. If I'm right, the snippet pointed above is\nfrom a perl script. In C, the strings are null terminated and they are\nrepresented within double quotes. So we need to use double quotes for\n_(\"on\") : _(\"off\"). And also the definition of _( ) macro points to a\nfunction err_gettext() that expects C-style string i.e null\nterminated.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 31 Oct 2020 09:04:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Sat, Oct 31, 2020 at 9:04 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > +$node->append_conf('postgresql.conf', \"logging_collector= 'on'\");\n> > +$node->append_conf('postgresql.conf', \"log_connections= 'on'\");\n> >\n> > booleans don't need quotes.\n> >\n>\n> I think that's not correct. If I'm right, the snippet pointed above is\n> from a perl script. In C, the strings are null terminated and they are\n> represented within double quotes. So we need to use double quotes for\n> _(\"on\") : _(\"off\"). And also the definition of _( ) macro points to a\n> function err_gettext() that expects C-style string i.e null\n> terminated.\n>\n\nI'm sorry for the above point, I misunderstood it. I took a further\nlook at the patch. It seems like it's a mix. In some place we are not\nusing quotes for booleans, for instance,\n\n$node->append_conf('postgresql.conf', 'autovacuum=off');\n$node->append_conf('postgresql.conf', 'track_commit_timestamp = on');\n\nbut in one place we are using quotes\n\n$node->append_conf('postgresql.conf', \"ssl = 'on'\");\n\nEither way seems to be fine as we don't have any variables inside the\nstrings to be replaced by the perl interpreter.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 31 Oct 2020 09:19:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 6:13 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> I have added the log validation to the existing tests that are present\n> for authentication.\n>\n\nI took a look at v3 patch. Here are some comments.\n\n1. Why are the input strings (not the newly added GSS log message\nstring) to the test_access() function in some places double-quoted and\nin some places single quoted?\n\n 'succeeds with mapping with default gssencmode and host hba',\n 'connection authorized: user=test1 database=postgres\napplication_name=001_auth.pl GSS \\(authenticated=yes, encrypted=yes,\nprincipal=test1\\@EXAMPLE.COM\\)'\n);\n \"succeeds with GSS-encrypted access required with host hba\",\n 'connection authorized: user=test1 database=postgres\napplication_name=001_auth.pl GSS \\(authenticated=yes, encrypted=yes,\nprincipal=test1\\@EXAMPLE.COM\\)'\n);\n\nAnd also for\n\ntest_access(\n $node,\n 'test1', <<< single quotes\n\ntest_access(\n $node,\n \"test1\", <<< double quotes\n\nLooks like we use double quoted strings in perl if we have any\nvariables inside the string to be replaced by the interpreter or else\nsingle quoted strings are fine[1]. If this is true, can we make it\nuniform across this file at least?\n\n2. Instead of using hardcoded values for application_name and\nprincipal, can we use variables? For application_name we can directly\nuse a single variable and use it. I think principal name is a formed\nvalue, can we use that formed variable?\n\n application_name=001_auth.pl GSS \\(authenticated=yes, encrypted=yes,\nprincipal=test1\\@EXAMPLE.COM\\)'\n\n3. Why are we using an escape character before ( and @? IIUC, it is to not let\nthe interpreter replace it with any value. If this is correct, it doesn't\nmake sense here as we are using single quoted strings. 
The perl\ninterpreter replaces the variables only when strings are used in\ndouble quotes[1].\n\n+ 'connection authorized: user=test1 database=postgres\napplication_name=001_auth.pl GSS \\(authenticated=yes, encrypted=yes,\nprincipal=test1\\@EXAMPLE.COM\\)'\n+);\n\nI ran the Kerberos tests on my dev machine. make check of 001_auth.pl\nis passing.\n\n[1] - https://www.geeksforgeeks.org/perl-quoted-interpolated-and-escaped-strings/\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 31 Oct 2020 10:18:31 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "Thanks for the comments Bharath.\nOn Sat, Oct 31, 2020 at 10:18 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I took a look at v3 patch. Here are some comments.\n>\n> 1. Why are the input strings(not the newly added GSS log message\n> string) to test_access() function are in some places double-quoted and\n> in some places single quoted?\n>\n> 'succeeds with mapping with default gssencmode and host hba',\n> 'connection authorized: user=test1 database=postgres\n> application_name=001_auth.pl GSS \\(authenticated=yes, encrypted=yes,\n> principal=test1\\@EXAMPLE.COM\\)'\n> );\n> \"succeeds with GSS-encrypted access required with host hba\",\n> 'connection authorized: user=test1 database=postgres\n> application_name=001_auth.pl GSS \\(authenticated=yes, encrypted=yes,\n> principal=test1\\@EXAMPLE.COM\\)'\n> );\n>\n> And also for\n>\n> test_access(\n> $node,\n> 'test1', <<< single quotes\n>\n> test_access(\n> $node,\n> \"test1\", <<< double quotes\n>\n> Looks like we use double quoted strings in perl if we have any\n> variables inside the string to be replaced by the interpreter or else\n> single quoted strings are fine[1]. If this is true, can we make it\n> uniform across this file at least?\n\nI have made this uniform across this file.\n\n>\n> 2. Instead of using hardcoded values for application_name and\n> principal, can we use variables? For application_name we can directly\n> use a single variable and use it. I think principal name is a formed\n> value, can we use that formed variable?\n>\n> application_name=001_auth.pl GSS \\(authenticated=yes, encrypted=yes,\n> principal=test1\\@EXAMPLE.COM\\)'\n>\n\nUsed variables for this.\n\n> 3. Why are we using escape character before ( and @, IIUC, to not let\n> interpreter replace it with any value. If this is correct, it doesn't\n> make sense here as we are using single quoted strings. 
The perl\n> interpreter replaces the variables only when strings are used in\n> double quotes[1].\n>\n> + 'connection authorized: user=test1 database=postgres\n> application_name=001_auth.pl GSS \\(authenticated=yes, encrypted=yes,\n> principal=test1\\@EXAMPLE.COM\\)'\n> +);\n>\n> I ran the Kerberos tests on my dev machine. make check of 001_auth.pl\n> is passing.\n>\n\nI have changed this within double quotes now as it includes passing of\nthe variable also. Removed the escape sequence which is not required.\n\nThe v4 patch attached has the fixes for this.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 31 Oct 2020 16:40:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Sat, 31 Oct 2020 at 00:34, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Fri, Oct 30, 2020 at 6:35 PM Euler Taveira\n> <euler.taveira@2ndquadrant.com> wrote:\n> >\n> > + appendStringInfo(&logmsg, \"replication \");\n> > +\n> > + appendStringInfo(&logmsg, \"connection authorized: user=%s\",\n> > + port->user_name);\n> > + if (!am_walsender)\n> > + appendStringInfo(&logmsg, \" database=%s\", port->database_name);\n> > +\n> > + if (port->application_name != NULL)\n> > + appendStringInfo(&logmsg, \" application_name=%s\",\n> > + port->application_name);\n> > +\n> >\n> > Your approach breaks localization. You should use multiple errmsg.\n> >\n>\n> IIUC, isn't it enough calling a single errmsg() inside ereport() with\n> the prepared logmsg.data (which is a string)? The errmsg() function\n> will do the required translation of the logmsg.data. Why do we need\n> multiple errmsg() calls? Could you please elaborate a bit on how the\n> way currently it is done in the patch breaks localization?\n>\n>\nNo. The strings are specified in the appendStringInfo, hence you should add\n_()\naround the string to be translated. There is nothing to be translated if\nyou\nspecify only the format identifier. You can always test if gettext extracts\nthe\nstring to be translated by executing 'make update-po' (after specifying\n--enable-nls in the configure). Search for your string in one of the\ngenerated\nfiles (po/LL.po.new).\n\nYou shouldn't split messages like that because not all languages have the\nsame\norder as English. Having said that you risk providing a nonsense translation\nbecause someone decided to translate pieces of a sentence separately.\n\n+ appendStringInfo(&logmsg, \"replication \");\n+\n+ appendStringInfo(&logmsg, \"connection authorized: user=%s\",\n+ port->user_name);\n\nThis hunk will break translation. In Portuguese, the adjective\n\"replication\" is\ntranslated after the noun \"connection\". 
If you decided to keep this code as\nis,\nthe printed message won't follow the grammar rules. You will have\n\"replicação\nconexão autorizada\" instead of \"conexão de replicação autorizada\". The\nformer\nisn't grammatically correct. Avoid splitting sentences that are translated.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 31 Oct 2020 19:03:58 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Sun, Nov 1, 2020 at 3:34 AM Euler Taveira\n<euler.taveira@2ndquadrant.com> wrote:\n>\n> No. The strings are specified in the appendStringInfo, hence you should add _()\n> around the string to be translated. There is nothing to be translated if you\n> specify only the format identifier. You can always test if gettext extracts the\n> string to be translated by executing 'make update-po' (after specifying\n> --enable-nls in the configure). Search for your string in one of the generated\n> files (po/LL.po.new).\n>\n\nThanks a lot for the detailed explanation.\n\n>\n> You shouldn't split messages like that because not all languages have the same\n> order as English. Having said that you risk providing a nonsense translation\n> because someone decided to translate pieces of a sentence separately.\n>\n> + appendStringInfo(&logmsg, \"replication \");\n> +\n> + appendStringInfo(&logmsg, \"connection authorized: user=%s\",\n> + port->user_name);\n>\n> This hunk will break translation. In Portuguese, the adjective \"replication\" is\n> translated after the noun \"connection\". If you decided to keep this code as is,\n> the printed message won't follow the grammar rules. You will have \"replicação\n> conexão autorizada\" instead of \"conexão de replicação autorizada\". The former\n> isn't grammatically correct. Avoid splitting sentences that are translated.\n>\n\nAgreed. 
Looks like we don't break localization rules if we have\nsomething like below, which is done in similar way for a log message\nin heap_vacuum_rel(): msgfmt = _(\"automatic aggressive vacuum to\nprevent wraparound of table \\\"%s.%s.%s\\\": index scans: %d\\n\");\n\n if (am_walsender)\n appendStringInfo(&logmsg, _(\"replication connection\nauthorized: user=%s\"),\n port->user_name);\n else\n appendStringInfo(&logmsg, _(\"connection authorized: user=%s\"),\n port->user_name);\n\n if (!am_walsender)\n appendStringInfo(&logmsg, _(\" database=%s\"), port->database_name);\n\n if (port->application_name != NULL)\n appendStringInfo(&logmsg, _(\" application_name=%s\"),\n port->application_name);\n\n#ifdef USE_SSL\n if (port->ssl_in_use)\n appendStringInfo(&logmsg, _(\" SSL enabled (protocol=%s,\ncipher=%s, bits=%d, compression=%s)\"),\n be_tls_get_version(port),\n be_tls_get_cipher(port),\n be_tls_get_cipher_bits(port),\n be_tls_get_compression(port) ? _(\"on\") : _(\"off\"));\n#endif\n#ifdef ENABLE_GSS\n if (be_gssapi_get_princ(port))\n appendStringInfo(&logmsg, _(\" GSS (authenticated=%s,\nencrypted=%s, principal=%s)\"),\n be_gssapi_get_auth(port) ? _(\"yes\") : _(\"no\"),\n be_gssapi_get_enc(port) ? _(\"yes\") : _(\"no\"),\n be_gssapi_get_princ(port));\n#endif\n\n ereport(LOG, errmsg_internal(\"%s\", logmsg.data));\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 1 Nov 2020 12:12:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Sun, Nov 1, 2020 at 3:34 AM Euler Taveira\n<euler.taveira@2ndquadrant.com> wrote:\n>\n> On Sat, 31 Oct 2020 at 00:34, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Fri, Oct 30, 2020 at 6:35 PM Euler Taveira\n>> <euler.taveira@2ndquadrant.com> wrote:\n>> >\n>> > + appendStringInfo(&logmsg, \"replication \");\n>> > +\n>> > + appendStringInfo(&logmsg, \"connection authorized: user=%s\",\n>> > + port->user_name);\n>> > + if (!am_walsender)\n>> > + appendStringInfo(&logmsg, \" database=%s\", port->database_name);\n>> > +\n>> > + if (port->application_name != NULL)\n>> > + appendStringInfo(&logmsg, \" application_name=%s\",\n>> > + port->application_name);\n>> > +\n>> >\n>> > Your approach breaks localization. You should use multiple errmsg.\n>> >\n>>\n>> IIUC, isn't it enough calling a single errmsg() inside ereport() with\n>> the prepared logmsg.data (which is a string)? The errmsg() function\n>> will do the required translation of the logmsg.data. Why do we need\n>> multiple errmsg() calls? Could you please elaborate a bit on how the\n>> way currently it is done in the patch breaks localization?\n>>\n>\n> No. The strings are specified in the appendStringInfo, hence you should add _()\n> around the string to be translated. There is nothing to be translated if you\n> specify only the format identifier. You can always test if gettext extracts the\n> string to be translated by executing 'make update-po' (after specifying\n> --enable-nls in the configure). Search for your string in one of the generated\n> files (po/LL.po.new).\n>\n> You shouldn't split messages like that because not all languages have the same\n> order as English. 
Having said that you risk providing a nonsense translation\n> because someone decided to translate pieces of a sentence separately.\n>\n> + appendStringInfo(&logmsg, \"replication \");\n> +\n> + appendStringInfo(&logmsg, \"connection authorized: user=%s\",\n> + port->user_name);\n>\n> This hunk will break translation. In Portuguese, the adjective \"replication\" is\n> translated after the noun \"connection\". If you decided to keep this code as is,\n> the printed message won't follow the grammar rules. You will have \"replicação\n> conexão autorizada\" instead of \"conexão de replicação autorizada\". The former\n> isn't grammatically correct. Avoid splitting sentences that are translated.\n>\n\nThanks for the explanation, I have attached a v5 patch with the\nchanges where the translation should not have any problem.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 3 Nov 2020 12:48:50 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 12:49 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the explanation, I have attached a v5 patch with the\n> changes where the translation should not have any problem.\n>\n\nI took a look at the V5 patch. Below are some comments:\n\n1. Do we need to generate and add the translation of the new GSS\nmessage in all the language specific files under the po/ directory? See\nbelow for the translated SSL log message added in all the language\nspecific .po files. [1] may help.\nI'm not quite sure whether translation should be part of the patch or\nwhether it is done separately. Say someone doing translations for a bunch of\nlog messages together in a single commit?\n\n#: utils/init/postinit.c:237\n#, c-format\nmsgid \"replication connection authorized: user=%s SSL enabled\n(protocol=%s, cipher=%s, compression=%s)\"\nmsgstr \"conexão de replicação autorizada: usuário=%s SSL habilitado\n(protocolo=%s, cifra=%s, compressão=%s)\"\n\n2. I have one concern about the test case, where we look for an\nexpected message[2](in English language), but what happens if the\nlogging collector collects the log messages in a different language,\nsay[3]? Will the test case fail? I saw that in 004_logrotate.pl we\nlook for \"division by zero\" in the logs, will the same concern apply\nto this as well?\n\n[1] - https://www.postgresql.org/docs/current/nls-translator.html\n[2] - \"connection authorized: user=$username database=$dbname\napplication_name=$application\n[3] - \"conexão autorizada: usuário=%s banco de dados=%s SSL habilitado\n(protocolo=%s, cifra=%s, compressão=%s)\"\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 5 Nov 2020 07:22:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Wed, 4 Nov 2020 at 22:52, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Tue, Nov 3, 2020 at 12:49 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> 1. Do we need to generate and add the translation of the new GSS\n> message in all the language specific files under po/ directory?. See\n> below for the translated SSL log message added in all the language\n> specific .po files. [1] may help.\n> I'm not quite sure whether translation should be part of the patch or\n> is it done separately? Say someone doing tralsations for a bunch of\n> log messages together in a single commit?\n>\n> No. Don't worry with translations during the development. Make sure to\nfollow\nthe instructions provided here [1]. Translations are coordinated in a\ndifferent\nmailing list: pgsql-translators [2]. There is a different repository [3]\nfor\nhandling PO files and the updated files are merged by Peter Eisentraut just\nbefore each minor/major release. We usually start to update translations\nafter\nfeature freeze.\n\n\n> 2. I have one concern about the test case, where we look for an\n> expected message[2](in English language), but what happens if the\n> logging collector collects the log messages in a different language,\n> say[3]? Will the test case fail? I saw that in 004_logrotate.pl we\n> look for \"division by zero\" in the logs, will the same concern apply\n> to this as well?\n>\n> pg_regress changes the lc_messages to C. 
There won't be test failures due\nto\ndifferent LANG.\n\n\n[1] https://www.postgresql.org/docs/current/nls-programmer.html\n[2] https://www.postgresql.org/list/pgsql-translators/\n[3]\nhttps://git.postgresql.org/gitweb/?p=pgtranslation/messages.git;a=summary\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 4 Nov 2020 23:25:01 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Thu, Nov 5, 2020 at 7:55 AM Euler Taveira <euler.taveira@2ndquadrant.com>\nwrote:\n>\n> No. Don't worry with translations during the development. Make sure to\nfollow\n> the instructions provided here [1]. Translations are coordinated in a\ndifferent\n> mailing list: pgsql-translators [2]. There is a different repository [3]\nfor\n> handling PO files and the updated files are merged by Peter Eisentraut\njust\n> before each minor/major release. We usually start to update translations\nafter\n> feature freeze.\n>\n\nThanks.\n\nOn Tue, Nov 3, 2020 at 12:49 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the explanation, I have attached a v5 patch with the\n> changes where the translation should not have any problem.\n>\n\nI have taken a further look at the V5 patch:\n\n1. We wait 10sec until the syslogger process logs the expected message,\nwhat happens if someone intentionally made the syslogger process to wait\nfor a longer duration? Will the new tests fail?\n # might need to retry if logging collector process is slow...\n my $max_attempts = 10 * 10;\n my $first_logfile;\n for (my $attempts = 0; $attempts < $max_attempts; $attempts++)\n {\n $first_logfile = slurp_file($node->data_dir . '/' . $lfname);\n last if $first_logfile =~ m/$expect_log_msg /;\n usleep(100_000);\n }\n\n2. I intentionally altered(for testing purpose only) the expected log\nmessage input given to test_access(), expecting the tests to fail, but the\ntest succeeded. Am I missing something here? Is it that the syslogger\nprocess not logging the message at all or within the 10sec waiting? Do we\nneed to increase the wait duration? 
Do we need to do something to fail the\ntest when we don't see the expected log message in test_access()?\n\n\"*cXNnnection* authorized: user=......\n\"*connecTEion *authorized: user=....\n\"connection *auTThorized*:.....\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 5 Nov 2020 09:50:11 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Thu, Nov 5, 2020 at 9:50 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Nov 5, 2020 at 7:55 AM Euler Taveira <euler.taveira@2ndquadrant.com> wrote:\n> >\n> > No. Don't worry with translations during the development. Make sure to follow\n> > the instructions provided here [1]. Translations are coordinated in a different\n> > mailing list: pgsql-translators [2]. There is a different repository [3] for\n> > handling PO files and the updated files are merged by Peter Eisentraut just\n> > before each minor/major release. We usually start to update translations after\n> > feature freeze.\n> >\n>\n> Thanks.\n>\n> On Tue, Nov 3, 2020 at 12:49 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the explanation, I have attached a v5 patch with the\n> > changes where the translation should not have any problem.\n> >\n>\n> I have taken a further look at the V5 patch:\n>\n> 1. We wait 10sec until the syslogger process logs the expected message, what happens if someone intentionally made the syslogger process to wait for a longer duration? Will the new tests fail?\n> # might need to retry if logging collector process is slow...\n> my $max_attempts = 10 * 10;\n> my $first_logfile;\n> for (my $attempts = 0; $attempts < $max_attempts; $attempts++)\n> {\n> $first_logfile = slurp_file($node->data_dir . '/' . $lfname);\n> last if $first_logfile =~ m/$expect_log_msg /;\n> usleep(100_000);\n> }\n>\n\nYes the test will fail if it takes more than the max_attempts as there\nis a like statement immediately after the loop:\n like($first_logfile, qr/\\Q$expect_log_msg\\E/,\n 'found expected log file content');\nI have also increased the attempts to 180 seconds just like other\ntests to avoid failure in very slow systems.\n\n> 2. I intentionally altered(for testing purpose only) the expected log message input given to test_access(), expecting the tests to fail, but the test succeeded. Am I missing something here? 
Is it that the syslogger process not logging the message at all or within the 10sec waiting? Do we need to increase the wait duration? Do we need to do something to fail the test when we don't see the expected log message in test_access()?\n>\n> \"cXNnnection authorized: user=......\n> \"connecTEion authorized: user=....\n> \"connection auTThorized:.....\n>\n\nThanks for testing this, I had missed testing this. The expression\nmatching was not correct. Attached v6 patch which includes the fix for\nthis.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 7 Nov 2020 09:27:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Sat, Nov 7, 2020 at 9:27 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Yes the test will fail if it takes more than the max_attempts as there\n> is a like statement immediately after the loop:\n> like($first_logfile, qr/\\Q$expect_log_msg\\E/,\n> 'found expected log file content');\n> I have also increased the attempts to 180 seconds just like other\n> tests to avoid failure in very slow systems.\n>\n\n+1 for this.\n\n>\n> > 2. I intentionally altered(for testing purpose only) the expected log message input given to test_access(), expecting the tests to fail, but the test succeeded. Am I missing something here? Is it that the syslogger process not logging the message at all or within the 10sec waiting? Do we need to increase the wait duration? Do we need to do something to fail the test when we don't see the expected log message in test_access()?\n> >\n> > \"cXNnnection authorized: user=......\n> > \"connecTEion authorized: user=....\n> > \"connection auTThorized:.....\n> >\n>\n> Thanks for testing this, I had missed testing this. The expression\n> matching was not correct. Attached v6 patch which includes the fix for\n> this.\n>\n\nThis use case works as expected, i.e. the test fails if the log message is\naltered intentionally.\n\n>\n> Attached v6 patch which includes the fix for this.\n>\n\nThanks. I have no further comments on the V6 patch, it looks good to\nme. make check of 001_auth.pl, the regression tests make check and make\ncheck-world all pass. It can be passed to a committer for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 7 Nov 2020 12:54:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "Greetings,\n\n* vignesh C (vignesh21@gmail.com) wrote:\n> Thanks for testing this, I had missed testing this. The expression\n> matching was not correct. Attached v6 patch which includes the fix for\n> this.\n\nThis generally looks pretty good to me. I did reword the commit message\na bit, run pgindent, and added the appropriate log message for the last\ntest (was there a reason you didn't include that..?). In general, this\nlooks pretty good to commit to me.\n\nI'll look at it again over the weekend or early next week and unless\nthere's objections, I'll push it.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Nov 2020 17:01:36 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * vignesh C (vignesh21@gmail.com) wrote:\n> > Thanks for testing this, I had missed testing this. The expression\n> > matching was not correct. Attached v6 patch which includes the fix for\n> > this.\n> \n> This generally looks pretty good to me. I did reword the commit message\n> a bit, run pgindent, and added the appropriate log message for the last\n> test (was there a reason you didn't include that..?). In general, this\n> looks pretty good to commit to me.\n> \n> I'll look at it again over the weekend or early next week and unless\n> there's objections, I'll push it.\n\nAnd committed.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 2 Dec 2020 14:44:31 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Wed, Dec 02, 2020 at 02:44:31PM -0500, Stephen Frost wrote:\n> And committed.\n\nThis has been committed as of dc11f31a, that changed the configuration\nof the node in the kerberos test to use logging_collector. Wouldn't\nit be simpler to not use the logging collector here and use a logic\nsimilar to what we do in PostgresNode::issues_sql_like() where we\ntruncate the log file before checking for a pattern?\n\nIt seems to me that this would make the tests faster, that the test\nwould not need to wait for the logging collector and that the code\ncould just use slurp_file($node->logfile) to get the data it wants to\ncheck for a given pattern without looking at current_logfiles. I also\nthink that not using truncate() on the logfile generated has the\ndisadvantage to make the code fuzzy for its verification once we\nintroduce patterns close to each other, as there could easily be an\noverlap. That's one problem that SQL pattern checks had to deal with\nin the past. Thoughts?\n--\nMichael",
"msg_date": "Sat, 20 Mar 2021 17:37:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Sat, Mar 20, 2021 at 05:37:47PM +0900, Michael Paquier wrote:\n> It seems to me that this would make the tests faster, that the test\n> would not need to wait for the logging collector and that the code\n> could just use slurp_file($node->logfile) to get the data it wants to\n> check for a given pattern without looking at current_logfiles. I also\n> think that not using truncate() on the logfile generated has the\n> disadvantage to make the code fuzzy for its verification once we\n> introduce patterns close to each other, as there could easily be an\n> overlap. That's one problem that SQL pattern checks had to deal with\n> in the past. Thoughts?\n\nAnd, in terms of code, this really simplifies things. Please see the\nattached that I would like to apply.\n--\nMichael",
"msg_date": "Sat, 20 Mar 2021 20:29:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Sat, Mar 20, 2021 at 4:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Mar 20, 2021 at 05:37:47PM +0900, Michael Paquier wrote:\n> > It seems to me that this would make the tests faster, that the test\n> > would not need to wait for the logging collector and that the code\n> > could just use slurp_file($node->logfile) to get the data it wants to\n> > check for a given pattern without looking at current_logfiles. I also\n> > think that not using truncate() on the logfile generated has the\n> > disadvantage to make the code fuzzy for its verification once we\n> > introduce patterns close to each other, as there could easily be an\n> > overlap. That's one problem that SQL pattern checks had to deal with\n> > in the past. Thoughts?\n>\n> And, in terms of code, this really simplifies things. Please see the\n> attached that I would like to apply.\n\n+1 from me. So, after every call to test_access, the node's current\nlogfile gets truncated and we don't need the logging collector process\nto step in for rotation of the logfile.\n\nThe patch looks good to me and the kerberos regression tests pass with it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 21 Mar 2021 17:53:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Sat, Mar 20, 2021 at 05:37:47PM +0900, Michael Paquier wrote:\n> > It seems to me that this would make the tests faster, that the test\n> > would not need to wait for the logging collector and that the code\n> > could just use slurp_file($node->logfile) to get the data it wants to\n> > check for a given pattern without looking at current_logfiles. I also\n> > think that not using truncate() on the logfile generated has the\n> > disadvantage to make the code fuzzy for its verification once we\n> > introduce patterns close to each other, as there could easily be an\n> > overlap. That's one problem that SQL pattern checks had to deal with\n> > in the past. Thoughts?\n> \n> And, in terms of code, this really simplifies things. Please see the\n> attached that I would like to apply.\n\nAgreed, that does look better/simpler.\n\nThanks!\n\nStephen",
"msg_date": "Sun, 21 Mar 2021 10:35:00 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
},
{
"msg_contents": "On Sun, Mar 21, 2021 at 05:53:04PM +0530, Bharath Rupireddy wrote:\n> +1 from me. So, after every call to test_access, the node's current\n> logfile gets truncated and we don't need the logging collector process\n> to step in for rotation of the logfile.\n> \n> The patch looks good to me and the kerberos regression tests pass with it.\n\nThanks Stephen and Bharath for looking at it! I have applied that\nnow.\n--\nMichael",
"msg_date": "Mon, 22 Mar 2021 09:09:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Log message for GSS connection is missing once connection\n authorization is successful."
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nI tried to run PG under Valgrind on OSX 10.13 with Clang 9/10 and got an\nassertion error inside Valgrind\n\nvalgrind --leak-check=no --gen-suppressions=all --time-stamp=yes \\\n --error-markers=VALGRINDERROR-BEGIN,VALGRINDERROR-END \\\n --log-file=%p.log --trace-children=yes postgres -D ./pgdata \\\n --log_line_prefix=\"%m %p \" --log_statement=all\n\nThen I got an assertion error in Valgrind\nm_signals.c:1106 (void handle_SCSS_change(Bool)):\nAssertion 'ksa_old.sa_flags == skss_old.skss_per_sig[sig].skss_flags'\nfailed\n\nI am not entirely sure it has much to do with Postgres and I will report\nthis error to the Valgrind community, but it is not raised when I try to run\ndifferent programs under Valgrind on the same system. I haven't dived deep\ninto the issue and I am not sure that this patch is not an utter hack, but\nit heals the issue, which may be useful for some of you.\n\nIf you have ideas on the issue please share.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Wed, 28 Oct 2020 15:37:53 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Valgrind run error with Postgres on OSX"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThe following error was encountered by our customers:\nThey have a very huge catalog (the size of the pg_class relation is more than \n30Gb), bloated by temporary relations.\nWhen they try to vacuum it, the following error is reported:\n\nvacuum full analyze pg_catalog.pg_class;\nERROR: cannot freeze committed xmax 596099954\n\nThe following records are present in pg_class:\n\n(standard input)-10436009-<Data> ------\n(standard input)-10436010- Item 1 -- Length: 229 Offset: 7936 \n(0x1f00) Flags: NORMAL\n(standard input):10436011: XMIN: 596098791 XMAX: 596099954 CID|XVAC: \n1 OID: 930322390\n(standard input)-10436012- Block Id: 108700 linp Index: 17 Attributes: \n33 Size: 32\n(standard input)-10436013- infomask: 0x290b \n(HASNULL|HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID|UPDATED)\n(standard input)-10436014- t_bits: [0]: 0xff [1]: 0xff [2]: 0xff [3]: 0x7f\n(standard input)-10436015- [4]: 0x00\n(standard input)-10436016-\n(standard input)-10436017- Item 2 -- Length: 184 Offset: 7752 \n(0x1e48) Flags: NORMAL\n(standard input):10436018: XMIN: 596098791 XMAX: 596099954 CID|XVAC: \n2 OID: 930322393\n(standard input)-10436019- Block Id: 108700 linp Index: 18 Attributes: \n33 Size: 32\n(standard input)-10436020- infomask: 0x2909 \n(HASNULL|HASOID|XMIN_COMMITTED|XMAX_INVALID|UPDATED)\n(standard input)-10436021- t_bits: [0]: 0xff [1]: 0xff [2]: 0xff [3]: 0x3f\n(standard input)-10436022- [4]: 0x00\n(standard input)-10436023-\n(standard input)-10436024- Item 3 -- Length: 184 Offset: 7568 \n(0x1d90) Flags: NORMAL\n(standard input):10436025: XMIN: 596098791 XMAX: 596099954 CID|XVAC: \n3 OID: 930322395\n(standard input)-10436026- Block Id: 108700 linp Index: 19 Attributes: \n33 Size: 32\n(standard input)-10436027- infomask: 0x2909 \n(HASNULL|HASOID|XMIN_COMMITTED|XMAX_INVALID|UPDATED)\n(standard input)-10436028- t_bits: [0]: 0xff [1]: 0xff [2]: 0xff [3]: 0x3f\n(standard input)-10436029- [4]: 0x00\n\nThis error is reported in heap_prepare_freeze_tuple:\n\n 
/*\n * Process xmax. To thoroughly examine the current Xmax value we \nneed to\n * resolve a MultiXactId to its member Xids, in case some of them are\n * below the given cutoff for Xids. In that case, those values \nmight need\n * freezing, too. Also, if a multi needs freezing, we cannot \nsimply take\n * it out --- if there's a live updater Xid, it needs to be kept.\n *\n * Make sure to keep heap_tuple_needs_freeze in sync with this.\n */\n xid = HeapTupleGetRawXmax(htup);\n\n if (tuple->t_infomask & HEAP_XMAX_IS_MULTI)\n {\n ...\n\n }\n else if (TransactionIdIsNormal(xid))\n {\n\n ...\n\n if (TransactionIdPrecedes(xid, cutoff_xid))\n {\n /*\n * If we freeze xmax, make absolutely sure that it's not an XID\n * that is important. (Note, a lock-only xmax can be removed\n * independent of committedness, since a committed lock \nholder has\n * released the lock).\n */\n if (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask) &&\n TransactionIdDidCommit(xid))\n ereport(ERROR,\n (errcode(ERRCODE_DATA_CORRUPTED),\n errmsg_internal(\"cannot freeze committed xmax \n\" XID_FMT,\n xid)));\n freeze_xmax = true;\n }\n else\n freeze_xmax = false;\n ...\n }\n else if ((tuple->t_infomask & HEAP_XMAX_INVALID) ||\n !TransactionIdIsValid(HeapTupleGetRawXmax(htup)))\n {\n freeze_xmax = false;\n xmax_already_frozen = true;\n }\n\n\nSo, as you can see, in all these records HEAP_XMAX_INVALID is set, but \nxmax is a normal transaction id.\nThis is why we produce the error before the check for HEAP_XMAX_INVALID in the \nsubsequent if.\nI do not know the value of cutoff_xid, because I do not have access to the \ndebugger at the customer site.\n\nI would appreciate any help on how to localize the source of the problem.\n\nLooks like there is no assumption that xmax should be set to \nInvalidTransactionId when the HEAP_XMAX_INVALID bit is set.\nAnd I didn't find any check preventing cutoff_xid from being greater than the \nXID of some transaction which was aborted a long time ago.\n\nSo is there some logical error in that xmax is compared with cutoff_xid \nbefore the HEAP_XMAX_INVALID bit is checked?\nOtherwise, where could this constraint most likely be violated?\n\nIt is PG version 11.7 of Postgres.\nThanks in advance,\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 28 Oct 2020 16:44:12 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "cannot freeze committed xmax"
},
{
"msg_contents": "\n\n> On Oct 28, 2020, at 6:44 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n> \n> Looks like there is no assumption that xmax should be set to InvalidTransactionId when HEAP_XMAX_INVALID bit is set.\n> And I didn't find any check preventing cutoff_xid to be greater than XID of some transaction which was aborted long time ago.\n\nNothing in your example suggests that cutoff_xid is wrong, so I'd assume that part is probably working ok.\n\nYour data shows that HEAP_XMAX_INVALID and HEAP_UPDATED flags are both set. That should only happen if the updating transaction aborted. But the query of clog is saying that it committed. Something is wrong with that. How did the hint bits get set to HEAP_XMAX_INVALID if the transaction did commit. Either some process is setting that hint bit when it shouldn't, or your clog is corrupted and returning a bogus answer about the xmax having been committed. Either way, you've got corruption.\n\nYour question \"preventing cutoff_xid to be greater than XID of some transaction which was aborted long time ago\" seems to be ignoring that TransactionIdDidCommit(xid) is returning true, suggesting the transaction did not abort.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 28 Oct 2020 08:25:31 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cannot freeze committed xmax"
},
{
"msg_contents": "\n\nOn 28.10.2020 18:25, Mark Dilger wrote:\n>\n>> On Oct 28, 2020, at 6:44 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n>>\n>> Looks like there is no assumption that xmax should be set to InvalidTransactionId when HEAP_XMAX_INVALID bit is set.\n>> And I didn't find any check preventing cutoff_xid to be greater than XID of some transaction which was aborted long time ago.\n> Nothing in your example suggests that cutoff_xid is wrong, so I'd assume that part is probably working ok.\n>\n> Your data shows that HEAP_XMAX_INVALID and HEAP_UPDATED flags are both set. That should only happen if the updating transaction aborted. But the query of clog is saying that it committed. Something is wrong with that. How did the hint bits get set to HEAP_XMAX_INVALID if the transaction did commit. Either some process is setting that hint bit when it shouldn't, or your clog is corrupted and returning a bogus answer about the xmax having been committed. Either way, you've got corruption.\n>\n> Your question \"preventing cutoff_xid to be greater than XID of some transaction which was aborted long time ago\" seems to be ignoring that TransactionIdDidCommit(xid) is returning true, suggesting the transaction did not abort.\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\nYes, I forgot to say that transaction is treated as committed \n(txid_status() returns \"committed\").\nAlso database was previously upgraded from 11.5 to 11.7\nCertainly the hypothesis of CLOG corruption explains everything.\nI wonder if there can be some other scenario (upgrade, multixacts, \nprevious freeze attempt...) which can cause such combination of flags?\nI have inspected all cases where HEAP_XMAX_INVALID is set, but have not \nfound any one which can explain it.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Wed, 28 Oct 2020 18:56:48 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: cannot freeze committed xmax"
},
{
"msg_contents": "\n\n> On Oct 28, 2020, at 8:56 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n> \n> \n> \n> On 28.10.2020 18:25, Mark Dilger wrote:\n>> \n>>> On Oct 28, 2020, at 6:44 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n>>> \n>>> Looks like there is no assumption that xmax should be set to InvalidTransactionId when HEAP_XMAX_INVALID bit is set.\n>>> And I didn't find any check preventing cutoff_xid to be greater than XID of some transaction which was aborted long time ago.\n>> Nothing in your example suggests that cutoff_xid is wrong, so I'd assume that part is probably working ok.\n>> \n>> Your data shows that HEAP_XMAX_INVALID and HEAP_UPDATED flags are both set. That should only happen if the updating transaction aborted. But the query of clog is saying that it committed. Something is wrong with that. How did the hint bits get set to HEAP_XMAX_INVALID if the transaction did commit. Either some process is setting that hint bit when it shouldn't, or your clog is corrupted and returning a bogus answer about the xmax having been committed. Either way, you've got corruption.\n>> \n>> Your question \"preventing cutoff_xid to be greater than XID of some transaction which was aborted long time ago\" seems to be ignoring that TransactionIdDidCommit(xid) is returning true, suggesting the transaction did not abort.\n>> \n>> —\n>> Mark Dilger\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n> Yes, I forgot to say that transaction is treated as committed (txid_status() returns \"committed\").\n> Also database was previously upgraded from 11.5 to 11.7\n> Certainly the hypothesis of CLOG corruption explains everything.\n> I wonder if there can be some other scenario (upgrade, multixacts, previous freeze attempt...) 
which can cause such combination of flags?\n> I have inspected all cases where HEAP_XMAX_INVALID is set, but have not found any one which can explain it.\n\nThe other possibility is that this tuple is erroneously marked as HEAP_UPDATED. heap_update() sets that, which makes sense. rewrite_heap_tuple() copies the old tuple's bits to the new tuple and then does some work to resolve update chains. I guess you could look at whether that logic might leave things in an invalid state. I don't have any theory about that.\n\nLooking at the git logs, it seems 699bf7d05c68734f800052829427c20674eb2c6b introduced the check that is ereporting, and did so along with commit 9c2f0a6c3cc8bb85b78191579760dbe9fb7814ec, which cleaned up some corruption bugs. I wonder if you're just unlucky enough to have had one of these corruptions, and now you're bumping into the ereport which is intended to prevent the corruption from spreading further?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 28 Oct 2020 09:21:38 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cannot freeze committed xmax"
}
] |
[
{
"msg_contents": "Hi,\n\nBug #16427 mentioned that temporary files are not removed after a crash. I\nheard similar complaints from some customers. In the bug report [1], I\nproposed a new GUC to control whether temporary files are removed after a\ncrash recovery. The current behavior is only useful for debugging purposes.\nIt also has an undesirable behavior: you have to restart the service to\nreclaim the space. Interrupting the service continuity isn't always an\noption and due to limited resources, you have no choice but to restart the\nservice.\n\nThe new GUC cleanup_temp_files_after_crash is marked as SIGHUP. Hence, you\ncan enable it to debug without service interruption. The default value is\non which means it changes the current behavior. Documentation and tests are\nincluded.\n\n[1]\nhttps://www.postgresql.org/message-id/CAH503wB28N1382YReXWjqpqZE6iqaxERoZUqnf02eNOYs0cZOA%40mail.gmail.com\n\nRegards,\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 28 Oct 2020 11:16:26 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "cleanup temporary files after crash"
},
{
"msg_contents": "Hi Euler,\n\nOn Wed, Oct 28, 2020 at 11:16:26AM -0300, Euler Taveira wrote:\n>Hi,\n>\n>Bug #16427 mentioned that temporary files are not removed after a crash. I\n>heard similar complaints from some customers. In the bug report [1], I\n>proposed a new GUC to control whether temporary files are removed after a\n>crash recovery. The current behavior is only useful for debugging purposes.\n>It also has an undesirable behavior: you have to restart the service to\n>reclaim the space. Interrupting the service continuity isn't always an\n>option and due to limited resources, you have no choice but to restart the\n>service.\n>\n>The new GUC cleanup_temp_files_after_crash is marked as SIGHUP. Hence, you\n>can enable it to debug without service interruption. The default value is\n>on which means it changes the current behavior. Documentation and tests are\n>included.\n>\n\nI did a quick review and the patch seems fine to me. Let's wait for a\nbit and see if there are any objections - if not, I'll get it committed\nin the next CF.\n\nOne thing I'm not sure about is whether we should have the GUC as\nproposed, or have a negative \"keep_temp_files_after_restart\" defaulting\nto false. But I don't have a very good justification for the alternative\nother than vague personal preference.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 28 Oct 2020 19:42:52 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Wed, 28 Oct 2020 at 15:42, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n> I did a quick review and the patch seems fine to me. Let's wait for a\n> bit and see if there are any objections - if not, I'll get it committed\n> in the next CF.\n>\n>\nTomas, thanks for your review.\n\n\n> One thing I'm not sure about is whether we should have the GUC as\n> proposed, or have a negative \"keep_temp_files_after_restart\" defaulting\n> to false. But I don't have a very good justification for the alternative\n> other than vague personal preference.\n>\n>\nI thought about not providing a GUC at all or providing it in the developer\nsection. I've never heard someone saying that they use those temporary files\nto investigate an issue. Regarding a crash, all information is already\navailable and temporary files don't provide extra details. This new GUC is\njust to keep the previous behavior. I'm fine without the GUC, though.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 31 Oct 2020 21:01:15 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Sat, Oct 31, 2020 at 09:01:15PM -0300, Euler Taveira wrote:\n> I thought about not providing a GUC at all or provide it in the developer\n> section. I've never heard someone saying that they use those temporary\n> files to investigate an issue. Regarding a crash, all information is already\n> available and temporary files don't provide extra details. This new\n> GUC is just to keep the previous behavior. I'm fine without the GUC, though.\n\nThe original behavior is as old as 4a5f38c4, and last time we talked\nabout that there were arguments about still keeping the existing\nbehavior to not cleanup files during a restart-after-crash scenario\nfor the sake of being useful just \"in case\". I have never used that\nproperty myself, TBH, and I have seen much more cases of users caring\nabout the data folder not facing an ENOSPC particularly if they don't\nuse different partitions for pg_wal/ and the main data folder.\n--\nMichael",
"msg_date": "Sun, 1 Nov 2020 10:25:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On 01.11.2020 04:25, Michael Paquier wrote:\n> On Sat, Oct 31, 2020 at 09:01:15PM -0300, Euler Taveira wrote:\n>> I thought about not providing a GUC at all or provide it in the developer\n>> section. I've never heard someone saying that they use those temporary\n>> files to investigate an issue. Regarding a crash, all information is already\n>> available and temporary files don't provide extra details. This new\n>> GUC is just to keep the previous behavior. I'm fine without the GUC, though.\n> The original behavior is as old as 4a5f38c4, and last time we talked\n> about that there were arguments about still keeping the existing\n> behavior to not cleanup files during a restart-after-crash scenario\n> for the sake of being useful just \"in case\". I have never used that\n> property myself, TBH, and I have seen much more cases of users caring\n> about the data folder not facing an ENOSPC particularly if they don't\n> use different partitions for pg_wal/ and the main data folder.\n> --\n> Michael\n\nThank you, Euler for submitting this.\n+1 for the feature. One of the platforms we support uses temp files a \nlot and we faced the problem, while never actually used these orphan \nfiles for debugging purposes.\n\nI also think that the GUC is not needed here. This 'feature' was \ninternal from the very beginning, so users shouldn't care about \npreserving old behavior. Without the GUC the patch is very simple, \nplease see attached version. I also omit the test, because I am not sure \nit will be stable given that the RemovePgTempFiles() allows the \npossibility of failure.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 26 Nov 2020 11:48:46 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Thu, 26 Nov 2020 at 05:48, Anastasia Lubennikova <\na.lubennikova@postgrespro.ru> wrote:\n\n>\n> I also think that the GUC is not needed here. This 'feature' was\n> internal from the very beginning, so users shouldn't care about\n> preserving old behavior. Without the GUC the patch is very simple,\n> please see attached version. I also omit the test, because I am not sure\n> it will be stable given that the RemovePgTempFiles() allows the\n> possibility of failure.\n>\nAnastasia, thanks for reviewing it. As I said, I'm fine without the GUC.\nHowever, if we decided to go with the GUC, the default behavior should be to\nremove the temporary files after the crash.\n\nRegards,\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 26 Nov 2020 10:14:37 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "Hi,\n\nLet's move this patch forward. Based on the responses, I agree the\ndefault behavior should be to remove the temp files, and I think we\nshould have the GUC (on the off chance that someone wants to preserve\nthe temporary files for debugging or whatever other reason).\n\nI propose to rename the GUC to remove_temp_files_after_crash, I think\n\"remove\" is a bit clearer than \"cleanup\". I've also reworded the sgml\ndocs a little bit.\n\nAttached is a patch with those changes. Barring objections, I'll get\nthis committed in the next couple days.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 9 Mar 2021 02:28:43 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Tue, Mar 09, 2021 at 02:28:43AM +0100, Tomas Vondra wrote:\n> Let's move this patch forward. Based on the responses, I agree the\n> default behavior should be to remove the temp files, and I think we\n> should have the GUC (on the off chance that someone wants to preserve\n> the temporary files for debugging or whatever other reason).\n\nThanks for taking care of this. I am having some second-thoughts\nabout changing this behavior by default, still that's much more useful\nthis way.\n\n> I propose to rename the GUC to remove_temp_files_after_crash, I think\n> \"remove\" is a bit clearer than \"cleanup\". I've also reworded the sgml\n> docs a little bit.\n\n\"remove\" sounds fine to me.\n\n> Attached is a patch with those changes. Barring objections, I'll get\n> this committed in the next couple days.\n\n+ When set to on, <productname>PostgreSQL</productname> will automatically\nNit: using a <literal> markup for the \"on\" value.\n\n+#remove_temp_files_after_crash = on # remove temporary files after\n+# # backend crash?\nThe indentation of the second line is incorrect here (Incorrect number\nof spaces in tabs perhaps?), and there is no need for the '#' at the\nbeginning of the line.\n--\nMichael",
"msg_date": "Tue, 9 Mar 2021 21:31:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Tue, Mar 9, 2021, at 9:31 AM, Michael Paquier wrote:\n> On Tue, Mar 09, 2021 at 02:28:43AM +0100, Tomas Vondra wrote:\n> > Let's move this patch forward. Based on the responses, I agree the\n> > default behavior should be to remove the temp files, and I think we\n> > should have the GUC (on the off chance that someone wants to preserve\n> > the temporary files for debugging or whatever other reason).\n> \n> Thanks for taking care of this. I am having some second-thoughts\n> about changing this behavior by default, still that's much more useful\n> this way.\n> \n> > I propose to rename the GUC to remove_temp_files_after_crash, I think\n> > \"remove\" is a bit clearer than \"cleanup\". I've also reworded the sgml\n> > docs a little bit.\n> \n> \"remove\" sounds fine to me.\n+1.\n\n> > Attached is a patch with those changes. Barring objections, I'll get\n> > this committed in the next couple days.\n> \n> + When set to on, <productname>PostgreSQL</productname> will automatically\n> Nit: using a <literal> markup for the \"on\" value.\n> \n> +#remove_temp_files_after_crash = on # remove temporary files after\n> +# # backend crash?\n> The indentation of the second line is incorrect here (Incorrect number\n> of spaces in tabs perhaps?), and there is no need for the '#' at the\n> beginning of the line.\n> --\n> Michael\nThat was my fault. The editor automatically added the '#'.\n\nI'm not sure Tomas will include the tests. If so, the terminology should be adjusted too.\n\n+++ b/src/test/recovery/t/022_crash_temp_files.pl\n@@ -0,0 +1,194 @@\n+#\n+# Test cleanup of temporary files after a crash.\n\ns/cleanup/remove/\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 09 Mar 2021 10:20:50 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 1:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Mar 09, 2021 at 02:28:43AM +0100, Tomas Vondra wrote:\n> > Let's move this patch forward. Based on the responses, I agree the\n> > default behavior should be to remove the temp files, and I think we\n> > should have the GUC (on the off chance that someone wants to preserve\n> > the temporary files for debugging or whatever other reason).\n>\n> Thanks for taking care of this. I am having some second-thoughts\n> about changing this behavior by default, still that's much more useful\n> this way.\n\n+1 for having it on by default.\n\nI was also just looking at this patch and came here to say LGTM except\nfor two cosmetic things, below.\n\n> > I propose to rename the GUC to remove_temp_files_after_crash, I think\n> > \"remove\" is a bit clearer than \"cleanup\". I've also reworded the sgml\n> > docs a little bit.\n>\n> \"remove\" sounds fine to me.\n\n+1\n\n> > Attached is a patch with those changes. Barring objections, I'll get\n> > this committed in the next couple days.\n>\n> + When set to on, <productname>PostgreSQL</productname> will automatically\n> Nit: using a <literal> markup for the \"on\" value.\n\nMaybe should say \"which is the default\", like other similar things?\n\n> +#remove_temp_files_after_crash = on # remove temporary files after\n> +# # backend crash?\n> The indentation of the second line is incorrect here (Incorrect number\n> of spaces in tabs perhaps?), and there is no need for the '#' at the\n> beginning of the line.\n\nYeah, that's wrong. For some reason that one file uses a tab size of\n8, unlike the rest of the tree (I guess because people will read that\nfile in software with the more common setting of 8). If you do :set\ntabstop=8 in vim, suddenly it all makes sense, but it is revealed that\nthis patch has it wrong, as you said. 
(Perhaps this file should have\nsome of those special Vim/Emacs control messages so we don't keep\ngetting this wrong?)\n\n\n",
"msg_date": "Mon, 15 Mar 2021 15:01:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Sun, Mar 14, 2021, at 11:01 PM, Thomas Munro wrote:\n> On Wed, Mar 10, 2021 at 1:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Tue, Mar 09, 2021 at 02:28:43AM +0100, Tomas Vondra wrote:\n> > > Let's move this patch forward. Based on the responses, I agree the\n> > > default behavior should be to remove the temp files, and I think we\n> > > should have the GUC (on the off chance that someone wants to preserve\n> > > the temporary files for debugging or whatever other reason).\n> >\n> > Thanks for taking care of this. I am having some second-thoughts\n> > about changing this behavior by default, still that's much more useful\n> > this way.\n> \n> +1 for having it on by default.\n> \n> I was also just looking at this patch and came here to say LGTM except\n> for two cosmetic things, below.\nThanks for taking a look at this patch. I'm not sure Tomas is preparing a new\npatch that includes the suggested modifications but I decided to do it. This\nnew version has the new GUC name (using \"remove\"). I also replaced \"cleanup\"\nwith \"remove\" in all the remaining places. As pointed out by Thomas, I reworded\nthe paragraph that describes the GUC, moving the default information to the\nbeginning of the sentence. I also added the \"literal\" as suggested by Michael.\nThe postgresql.conf.sample was fixed. The tests were slightly modified. I\nreworded some comments and added a hack to avoid breaking the temporary file\ntest on slow machines. A usleep() after sending the query provides some time\nfor the query to create the temporary file. I used an arbitrary sleep (10ms)\nthat seems to be sufficient.\n\n> > +#remove_temp_files_after_crash = on # remove temporary files after\n> > +# # backend crash?\n> > The indentation of the second line is incorrect here (Incorrect number\n> > of spaces in tabs perhaps?), and there is no need for the '#' at the\n> > beginning of the line.\n> \n> Yeah, that's wrong. 
For some reason that one file uses a tab size of\n> 8, unlike the rest of the tree (I guess because people will read that\n> file in software with the more common setting of 8). If you do :set\n> tabstop=8 in vim, suddenly it all makes sense, but it is revealed that\n> this patch has it wrong, as you said. (Perhaps this file should have\n> some of those special Vim/Emacs control messages so we don't keep\n> getting this wrong?)\nI hadn't noticed that this file use ts=8. (This explains the misalignment that\nI observed in some parameter comments). I'm not sure if people care about the\nindentation in this file. From the development perspective, we can use the same\nnumber of spaces for Tab as the source code so it is not required to fix your\ndev environment. However, the regular user doesn't follow the dev guidelines so\nit could probably observe the misalignment while using Vim (that has 8 spaces\nas default), for example. For now, I will add\n\nautocmd BufRead,BufNewFile postgresql.conf* setlocal ts=8\n\nto my .vimrc. We should probably fix some settings that are misaligned such as\nparallel_setup_cost and shared_preload_libraries. The parameters\ntimezone_abbreviations and max_pred_locks_per_page are using spaces instead of\ntabs and should probably be fixed too.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 16 Mar 2021 22:34:23 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "\n\nOn 3/17/21 2:34 AM, Euler Taveira wrote:\n> On Sun, Mar 14, 2021, at 11:01 PM, Thomas Munro wrote:\n>> On Wed, Mar 10, 2021 at 1:31 AM Michael Paquier <michael@paquier.xyz\n>> <mailto:michael@paquier.xyz>> wrote:\n>> > On Tue, Mar 09, 2021 at 02:28:43AM +0100, Tomas Vondra wrote:\n>> > > Let's move this patch forward. Based on the responses, I agree the\n>> > > default behavior should be to remove the temp files, and I think we\n>> > > should have the GUC (on the off chance that someone wants to preserve\n>> > > the temporary files for debugging or whatever other reason).\n>> >\n>> > Thanks for taking care of this. I am having some second-thoughts\n>> > about changing this behavior by default, still that's much more useful\n>> > this way.\n>>\n>> +1 for having it on by default.\n>>\n>> I was also just looking at this patch and came here to say LGTM except\n>> for two cosmetic things, below.\n> Thanks for taking a look at this patch. I'm not sure Tomas is preparing\n> a new\n> patch that includes the suggested modifications but I decided to do it. This\n> new version has the new GUC name (using \"remove\"). I also replaced \"cleanup\"\n> with \"remove\" in the all the remain places. As pointed by Thomas, I reworded\n> the paragraph that describes the GUC moving the default information to the\n> beginning of the sentence. I also added the \"literal\" as suggested by\n> Michael.\n> The postgresql.conf.sample was fixed. The tests was slightly modified. I\n> reworded some comments and added a hack to avoid breaking the temporary file\n> test in slow machines. A usleep() after sending the query provides some time\n> for the query to create the temporary file. I used an arbitrarily sleep\n> (10ms)\n> that seems to be sufficient.\n> \n\nThanks. Pushed with some minor changes to docs wording.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Mar 2021 17:40:22 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "Hmm,\n\ncrake and florican seem to have failed because of this commit, with this\nerror in the new TAP test:\n\nerror running SQL: 'psql:<stdin>:1: ERROR: could not open directory\n\"base/pgsql_tmp\": No such file or directory'\nwhile running 'psql -XAtq -d port=64336 host=/tmp/sv1WjSvj3P\ndbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'SELECT COUNT(1)\nFROM pg_ls_dir($$base/pgsql_tmp$$)' at\n/home/andrew/bf/root/HEAD/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm\nline 1572.\n\nSo it seems the pgsql_tmp directory does not exist, for some reason.\nConsidering the directory should be created for the first temp file,\nthat either means the query in the TAP test does not actually create a\ntemp file on those machines, or it gets killed too early.\n\nThe 500 rows used by the test seems fairly low, so maybe those machines\ncan do the sort entirely in memory?\n\nThe other option is that the sleep in the TAP test is a bit too short,\nbut those machines don't seem to be that slow.\n\nAnyway, TAP test relying on timing like this may not be the best idea,\nso I wonder how else to test this ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Mar 2021 18:51:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On 3/18/21 6:51 PM, Tomas Vondra wrote:\n> Hmm,\n> \n> crake and florican seem to have failed because of this commit, with this\n> error in the new TAP test:\n> \n> error running SQL: 'psql:<stdin>:1: ERROR: could not open directory\n> \"base/pgsql_tmp\": No such file or directory'\n> while running 'psql -XAtq -d port=64336 host=/tmp/sv1WjSvj3P\n> dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'SELECT COUNT(1)\n> FROM pg_ls_dir($$base/pgsql_tmp$$)' at\n> /home/andrew/bf/root/HEAD/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm\n> line 1572.\n> \n> So it seems the pgsql_tmp directory does not exist, for some reason.\n> Considering the directory should be created for the first temp file,\n> that either means the query in the TAP test does not actually create a\n> temp file on those machines, or it gets killed too early.\n> \n> The 500 rows used by the test seems fairly low, so maybe those machines\n> can do the sort entirely in memory?\n> \n> The other option is that the sleep in the TAP test is a bit too short,\n> but those machines don't seem to be that slow.\n> \n> Anyway, TAP test relying on timing like this may not be the best idea,\n> so I wonder how else to test this ...\n> \n\nI think a better way to test this would be to use a tuple lock:\n\nsetup:\n\n create table t (a int unique);\n\nsession 1:\n\n begin;\n insert into t values (1);\n ... keep open ...\n\nsession 2:\n\n begin;\n set work_mem = '64kB';\n insert into t select i from generate_series(1,10000) s(i);\n ... should block ...\n\nThen, once the second session gets waiting on the tuple, kill the\nbackend. We might as well test that there actually is a temp file first,\nand then test that it disappeared.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Mar 2021 20:20:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Thu, Mar 18, 2021, at 4:20 PM, Tomas Vondra wrote:\n> I think a better way to test this would be to use a tuple lock:\nI predicated such issues with this test. Your suggestion works for me. Maybe\nyou should use less rows in the session 2 query.\n\n> setup:\n> \n> create table t (a int unique);\n> \n> session 1:\n> \n> begin;\n> insert into t values (1);\n> ... keep open ...\n> \n> session 2:\n> \n> begin;\n> set work_mem = '64kB';\n> insert into t select i from generate_series(1,10000) s(i);\n> ... should block ...\n> \n> Then, once the second session gets waiting on the tuple, kill the\n> backend. We might as well test that there actually is a temp file first,\n> and then test that it disappeared.\nYour suggestion works for me. Maybe you could use less rows in the session 2\nquery. I experimented with 1k rows and it generates a temporary file.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, Mar 18, 2021, at 4:20 PM, Tomas Vondra wrote:I think a better way to test this would be to use a tuple lock:I predicated such issues with this test. Your suggestion works for me. Maybeyou should use less rows in the session 2 query.setup: create table t (a int unique);session 1: begin; insert into t values (1); ... keep open ...session 2: begin; set work_mem = '64kB'; insert into t select i from generate_series(1,10000) s(i); ... should block ...Then, once the second session gets waiting on the tuple, kill thebackend. We might as well test that there actually is a temp file first,and then test that it disappeared.Your suggestion works for me. Maybe you could use less rows in the session 2query. I experimented with 1k rows and it generates a temporary file.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 18 Mar 2021 17:06:23 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "\n\nOn 3/18/21 9:06 PM, Euler Taveira wrote:\n> On Thu, Mar 18, 2021, at 4:20 PM, Tomas Vondra wrote:\n>> I think a better way to test this would be to use a tuple lock:\n> I predicated such issues with this test. Your suggestion works for me. Maybe\n> you should use less rows in the session 2 query.\n> \n>> setup:\n>>\n>> create table t (a int unique);\n>>\n>> session 1:\n>>\n>> begin;\n>> insert into t values (1);\n>> ... keep open ...\n>>\n>> session 2:\n>>\n>> begin;\n>> set work_mem = '64kB';\n>> insert into t select i from generate_series(1,10000) s(i);\n>> ... should block ...\n>>\n>> Then, once the second session gets waiting on the tuple, kill the\n>> backend. We might as well test that there actually is a temp file first,\n>> and then test that it disappeared.\n> Your suggestion works for me. Maybe you could use less rows in the session 2\n> query. I experimented with 1k rows and it generates a temporary file.\n> \n\nOK. Can you prepare a patch with the proposed test approach?\n\nFWIW I can reproduce this on a 32-bit ARM system (rpi4), where 500 rows\nsimply does not use a temp file, and with 1000 rows it works fine. On\nthe x86_64 the temp file is created even with 500 rows. So there clearly\nis some platform dependency, not sure if it's due to 32/64 bits,\nalignment or something else. In any case, the 500 rows seems to be just\non the threshold.\n\nWe need to do both - stop using the timing and increase the number of\nrows, to consistently get temp files.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Mar 2021 21:51:11 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Thu, Mar 18, 2021, at 5:51 PM, Tomas Vondra wrote:\n> OK. Can you prepare a patch with the proposed test approach?\nI'm on it.\n\n> FWIW I can reproduce this on a 32-bit ARM system (rpi4), where 500 rows\n> simply does not use a temp file, and with 1000 rows it works fine. On\n> the x86_64 the temp file is created even with 500 rows. So there clearly\n> is some platform dependency, not sure if it's due to 32/64 bits,\n> alignment or something else. In any case, the 500 rows seems to be just\n> on the threshold.\n> \n> We need to do both - stop using the timing and increase the number of\n> rows, to consistently get temp files.\nYeah.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, Mar 18, 2021, at 5:51 PM, Tomas Vondra wrote:OK. Can you prepare a patch with the proposed test approach?I'm on it.FWIW I can reproduce this on a 32-bit ARM system (rpi4), where 500 rowssimply does not use a temp file, and with 1000 rows it works fine. Onthe x86_64 the temp file is created even with 500 rows. So there clearlyis some platform dependency, not sure if it's due to 32/64 bits,alignment or something else. In any case, the 500 rows seems to be juston the threshold.We need to do both - stop using the timing and increase the number ofrows, to consistently get temp files.Yeah.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 18 Mar 2021 18:00:14 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Thu, Mar 18, 2021, at 6:00 PM, Euler Taveira wrote:\n> On Thu, Mar 18, 2021, at 5:51 PM, Tomas Vondra wrote:\n>> OK. Can you prepare a patch with the proposed test approach?\n> I'm on it.\nWhat do you think about this patch?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 18 Mar 2021 18:44:53 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On 3/18/21 10:44 PM, Euler Taveira wrote:\n> On Thu, Mar 18, 2021, at 6:00 PM, Euler Taveira wrote:\n>> On Thu, Mar 18, 2021, at 5:51 PM, Tomas Vondra wrote:\n>>> OK. Can you prepare a patch with the proposed test approach?\n>> I'm on it.\n> What do you think about this patch?\n> \n\nWell, that's better, bit it still does not do the trick on the 32-bit\nmachine - in that case a 1000 rows with int4 still fit into work_mem, so\nthe temp file is not created. Per my experiments about 1040 rows are\nneeded - soooo close ;-) So let's make it 2000.\n\nWe might as well check that the temp file actually exists, before\nkilling the backend. Just to be sure.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Mar 2021 00:34:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Thu, Mar 18, 2021, at 8:34 PM, Tomas Vondra wrote:\n> Well, that's better, bit it still does not do the trick on the 32-bit\n> machine - in that case a 1000 rows with int4 still fit into work_mem, so\n> the temp file is not created. Per my experiments about 1040 rows are\n> needed - soooo close ;-) So let's make it 2000.\nMy 32-bit laptop needs some repairs so I blindly chose 1k rows.\n\n> We might as well check that the temp file actually exists, before\n> killing the backend. Just to be sure.\nDo you mean with remove_temp_files_after_crash = on? New version attached.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 18 Mar 2021 21:54:01 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On 3/19/21 1:54 AM, Euler Taveira wrote:\n> On Thu, Mar 18, 2021, at 8:34 PM, Tomas Vondra wrote:\n>> Well, that's better, bit it still does not do the trick on the 32-bit\n>> machine - in that case a 1000 rows with int4 still fit into work_mem, so\n>> the temp file is not created. Per my experiments about 1040 rows are\n>> needed - soooo close ;-) So let's make it 2000.\n> My 32-bit laptop needs some repairs so I blindly chose 1k rows.\n> \n\nYeah, it's not immediately obvious how many rows are needed. 2000 would\nbe enough, but I just bumped it up to 5000 for good measure.\n\n>> We might as well check that the temp file actually exists, before\n>> killing the backend. Just to be sure.\n> Do you mean with remove_temp_files_after_crash = on? New version attached.\n> \n\nOn second thought, we don't need that check. I realized the second part\nof the test (checking remove_temp_files_after_crash=off) actually does\nverify that we've created the file, so that should be OK.\n\nI've pushed the previous version with the 5000 rows. Let's see what\ncrake and florican think about this ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Mar 2021 02:21:29 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On 3/19/21 2:21 AM, Tomas Vondra wrote:\n> On 3/19/21 1:54 AM, Euler Taveira wrote:\n>> On Thu, Mar 18, 2021, at 8:34 PM, Tomas Vondra wrote:\n>>> Well, that's better, bit it still does not do the trick on the 32-bit\n>>> machine - in that case a 1000 rows with int4 still fit into work_mem, so\n>>> the temp file is not created. Per my experiments about 1040 rows are\n>>> needed - soooo close ;-) So let's make it 2000.\n>> My 32-bit laptop needs some repairs so I blindly chose 1k rows.\n>>\n> \n> Yeah, it's not immediately obvious how many rows are needed. 2000 would\n> be enough, but I just bumped it up to 5000 for good measure.\n> \n>>> We might as well check that the temp file actually exists, before\n>>> killing the backend. Just to be sure.\n>> Do you mean with remove_temp_files_after_crash = on? New version attached.\n>>\n> \n> On second thought, we don't need that check. I realized the second part\n> of the test (checking remove_temp_files_after_crash=off) actually does\n> verify that we've created the file, so that should be OK.\n> \n> I've pushed the previous version with the 5000 rows. Let's see what\n> crake and florican think about this ...\n> \n\nSo crake and florican seem to be happy now. Not sure about lapwing yet.\n\nBut interestingly enough, prion and curculio got unhappy. They worked\nfine with the older test, but now it fails with the \"no such file or\ndirectory\" message. I wonder what makes them different from the other\nx86_64 machines ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Mar 2021 03:40:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> So crake and florican seem to be happy now. Not sure about lapwing yet.\n> But interestingly enough, prion and curculio got unhappy. They worked\n> fine with the older test, but now it fails with the \"no such file or\n> directory\" message. I wonder what makes them different from the other\n> x86_64 machines ...\n\nI'm confused about why the new query would use a temp file at all.\nMaybe they aren't using the same plan?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Mar 2021 22:57:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On 3/19/21 3:57 AM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> So crake and florican seem to be happy now. Not sure about lapwing yet.\n>> But interestingly enough, prion and curculio got unhappy. They worked\n>> fine with the older test, but now it fails with the \"no such file or\n>> directory\" message. I wonder what makes them different from the other\n>> x86_64 machines ...\n> \n> I'm confused about why the new query would use a temp file at all.\n> Maybe they aren't using the same plan?\n> \n\nBecause generate_series() stashes the results into a tuplestore, and the\ntest sets work_mem to just 64kB. Maybe I'm missing some detail, though.\n\nThe plan is extremely simple - just Function Scan feeding data into an\nInsert, not sure what other plan could be used.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Mar 2021 04:07:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "[ reads code... ]\n... no, I think the problem is the test is still full of race conditions.\n\nIn the first place, waiting till you see the output of a SELECT that's\nbefore the useful query is not enough to guarantee that the useful query\nis done, or even started. That's broken on both sessions.\n\nIn the second place, even if the second INSERT has started, you don't know\nthat it's reached the point of blocking on the tuple conflict yet.\nWhich in turn means that it might not've filled its tuplestore yet.\n\nIn short, this script is designed to ensure that the test query can't\nfinish too soon, but that proves nothing about whether the test query\nhas even started. And since you also haven't really guaranteed that the\nintended-to-be-blocking query is done, I don't think that the first\ncondition really holds either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Mar 2021 23:23:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Fri, Mar 19, 2021, at 12:23 AM, Tom Lane wrote:\n> [ reads code... ]\n> ... no, I think the problem is the test is still full of race conditions.\n> \n> In the first place, waiting till you see the output of a SELECT that's\n> before the useful query is not enough to guarantee that the useful query\n> is done, or even started. That's broken on both sessions.\nThat's an ugly and fragile mechanism to workaround the fact that pump_until\nreacts after you have the query return.\n\n> In the second place, even if the second INSERT has started, you don't know\n> that it's reached the point of blocking on the tuple conflict yet.\n> Which in turn means that it might not've filled its tuplestore yet.\n> \n> In short, this script is designed to ensure that the test query can't\n> finish too soon, but that proves nothing about whether the test query\n> has even started. And since you also haven't really guaranteed that the\n> intended-to-be-blocking query is done, I don't think that the first\n> condition really holds either.\nIn order to avoid the race condition between filling the tuplestore and killing\nthe backend, we could use a pool_query_until() before SIGKILL to wait the\ntemporary file being created. Do you think this modification will make this\ntest more stable?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 19 Mar 2021 11:17:50 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On 3/19/21 3:17 PM, Euler Taveira wrote:\n> On Fri, Mar 19, 2021, at 12:23 AM, Tom Lane wrote:\n>> [ reads code... ]\n>> ... no, I think the problem is the test is still full of race conditions.\n>>\n>> In the first place, waiting till you see the output of a SELECT that's\n>> before the useful query is not enough to guarantee that the useful query\n>> is done, or even started. That's broken on both sessions.\n> That's an ugly and fragile mechanism to workaround the fact that pump_until\n> reacts after you have the query return.\n> \n>> In the second place, even if the second INSERT has started, you don't know\n>> that it's reached the point of blocking on the tuple conflict yet.\n>> Which in turn means that it might not've filled its tuplestore yet.\n>>\n>> In short, this script is designed to ensure that the test query can't\n>> finish too soon, but that proves nothing about whether the test query\n>> has even started. And since you also haven't really guaranteed that the\n>> intended-to-be-blocking query is done, I don't think that the first\n>> condition really holds either.\n> In order to avoid the race condition between filling the tuplestore and\n> killing\n> the backend, we could use a pool_query_until() before SIGKILL to wait the\n> temporary file being created. Do you think this modification will make this\n> test more stable?\n> \n\nWow, I thought I understand the perl code, but clearly I was wrong.\n\nI think the solution is not particularly difficult.\n\nFor the initial insert, it's fine to switch the order of the SQL\ncommands, so that the insert happens first, and only then emit the string.\n\nFor the second insert, we can't do that, because it's expected to block\non the lock. So we have to verify that by looking at the lock directly.\n\nI however don't understand what the pump_until function does. AFAICS it\nshould run the query in a loop, and bail out once it matches the\nexpected output. 
But that does not seem to happen, so when the query\ngets executed before the lock is there, it results in infinite loop.\n\nIn the attached patch I've simulated this by random() < 0.5.\n\nIf I replace this with a wait loop in a plpgsql block, that works\nperfectly fine (no infinite loops). Tested both on x86_64 and rpi.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 19 Mar 2021 17:23:13 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "\nOn 3/19/21 5:23 PM, Tomas Vondra wrote:\n> ...\n>\n> If I replace this with a wait loop in a plpgsql block, that works\n> perfectly fine (no infinite loops). Tested both on x86_64 and rpi.\n>\n\nFor the record, here's the version with plpgsql block, which seems to be\nworking just fine.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Mar 2021 17:25:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On 3/19/21 5:23 PM, Tomas Vondra wrote:\n> ...\n>\n> If I replace this with a wait loop in a plpgsql block, that works\n> perfectly fine (no infinite loops). Tested both on x86_64 and rpi.\n>\n\nFor the record, here's the version with plpgsql block, which seems to be\nworking just fine.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 19 Mar 2021 17:26:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On 3/19/21 5:26 PM, Tomas Vondra wrote:\n> On 3/19/21 5:23 PM, Tomas Vondra wrote:\n>> ...\n>>\n>> If I replace this with a wait loop in a plpgsql block, that works\n>> perfectly fine (no infinite loops). Tested both on x86_64 and rpi.\n>>\n> \n> For the record, here's the version with plpgsql block, which seems to be\n> working just fine.\n> \n\nI've pushed this version, with the plpgsql block. I've tested it on all\nthe machines I have here, hopefully it'll make buildfarm happy too.\n\nAFAICS I was mistaken about what the pump() functions do - it clearly\ndoes not run the command repeatedly, it just waits for the right output\nto appear. So a busy loop in plpgsql seems like a reasonable solution.\nPerhaps there's a better way to do this in TAP, not sure.\n\nMy brain hurts from reading too much Perl today ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Mar 2021 18:27:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Fri, Mar 19, 2021, at 2:27 PM, Tomas Vondra wrote:\n> I've pushed this version, with the plpgsql block. I've tested it on all\n> the machines I have here, hopefully it'll make buildfarm happy too.\nThanks for taking care of this issue.\n\n> AFAICS I was mistaken about what the pump() functions do - it clearly\n> does not run the command repeatedly, it just waits for the right output\n> to appear. So a busy loop in plpgsql seems like a reasonable solution.\n> Perhaps there's a better way to do this in TAP, not sure.\nI took 3 times longer to do that fragile test than the code itself. :-/\n\n> My brain hurts from reading too much Perl today ...\nI feel the same when I have to deal with Perl code.\n\nIt seems the animals are happy with this fix.\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Fri, Mar 19, 2021, at 2:27 PM, Tomas Vondra wrote:I've pushed this version, with the plpgsql block. I've tested it on allthe machines I have here, hopefully it'll make buildfarm happy too.Thanks for taking care of this issue.AFAICS I was mistaken about what the pump() functions do - it clearlydoes not run the command repeatedly, it just waits for the right outputto appear. So a busy loop in plpgsql seems like a reasonable solution.Perhaps there's a better way to do this in TAP, not sure.I took 3 times longer to do that fragile test than the code itself. :-/My brain hurts from reading too much Perl today ...I feel the same when I have to deal with Perl code.It seems the animals are happy with this fix.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 19 Mar 2021 14:51:39 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Sat, Oct 31, 2020 at 09:01:15PM -0300, Euler Taveira wrote:\n> > > The current behavior is only useful for debugging purposes.\n\nOn Wed, 28 Oct 2020 at 15:42, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > One thing I'm not sure about is whether we should have the GUC as\n> > proposed, or have a negative \"keep_temp_files_after_restart\" defaulting\n> > to false. But I don't have a very good justification for the alternative\n> > other than vague personal preference.\n\nOn Sat, Oct 31, 2020 at 09:01:15PM -0300, Euler Taveira wrote:\n> I thought about not providing a GUC at all or provide it in the developer\n> section. I've never heard someone saying that they use those temporary\n> files to investigate an issue. Regarding a crash, all information is already\n> available and temporary files don't provide extra details. This new GUC is just to keep the\n> previous behavior. I'm fine without the GUC, though.\n\nShould this GUC be classified as a developer option, and removed from\npostgresql.sample.conf ?\n\nThat was discussed initially in October but not since.\n\nOn Wed, Oct 28, 2020 at 11:16:26AM -0300, Euler Taveira wrote:\n> It also has an undesirable behavior: you have to restart the service to\n> reclaim the space.\n\nBTW, that's not really true - you can remove the tempfiles while the server is\nrunning. The complainant in bug#16427 was being careful - which is good.\nI watch for and remove tempfiles older than 2 days. The worst consequence of\nremoving a tempfile would be a failed query. \n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 11 Jun 2021 19:43:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Fri, Jun 11, 2021, at 9:43 PM, Justin Pryzby wrote:\n> On Sat, Oct 31, 2020 at 09:01:15PM -0300, Euler Taveira wrote:\n> > > > The current behavior is only useful for debugging purposes.\n> \n> On Wed, 28 Oct 2020 at 15:42, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > > One thing I'm not sure about is whether we should have the GUC as\n> > > proposed, or have a negative \"keep_temp_files_after_restart\" defaulting\n> > > to false. But I don't have a very good justification for the alternative\n> > > other than vague personal preference.\n> \n> On Sat, Oct 31, 2020 at 09:01:15PM -0300, Euler Taveira wrote:\n> > I thought about not providing a GUC at all or provide it in the developer\n> > section. I've never heard someone saying that they use those temporary\n> > files to investigate an issue. Regarding a crash, all information is already\n> > available and temporary files don't provide extra details. This new GUC is just to keep the\n> > previous behavior. I'm fine without the GUC, though.\n> \n> Should this GUC be classified as a developer option, and removed from\n> postgresql.sample.conf ?\nIt probably should.\n\n> That was discussed initially in October but not since.\nWas it? I seem to have missed this suggestion.\n\nI'm attaching a patch to fix it.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 22 Jun 2021 16:12:51 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Tue, Jun 22, 2021 at 04:12:51PM -0300, Euler Taveira wrote:\n> Was it? I seem to have missed this suggestion.\n> \n> I'm attaching a patch to fix it.\n\nLooks adapted to me.\n--\nMichael",
"msg_date": "Wed, 23 Jun 2021 08:46:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
},
{
"msg_contents": "On Wed, Jun 23, 2021 at 08:46:12AM +0900, Michael Paquier wrote:\n> Looks adapted to me.\n\nApplied as it was still on the stack. Please note that\nGUC_NOT_IN_SAMPLE was missing.\n--\nMichael",
"msg_date": "Fri, 25 Jun 2021 08:43:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: cleanup temporary files after crash"
}
] |
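The fix this thread converged on replaces timing-based waits with a busy loop that polls for an observable condition (the lock being held, the temp file existing) before the test proceeds. As a minimal language-neutral sketch of that pattern (Python here for brevity; the committed test uses a plpgsql block and TAP's Perl helpers instead), with the `wait_until` helper being a hypothetical name:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` elapses.

    Returns True on success, False on timeout. The point, per the thread:
    never rely on a fixed sleep to order two sessions -- poll for the
    state you actually need (blocked lock, created temp file) and only
    then move to the next step (e.g. SIGKILL the backend).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# example: a second "session" flips a flag; the test waits for it
state = {"temp_file_exists": False}
state["temp_file_exists"] = True          # simulated other session
assert wait_until(lambda: state["temp_file_exists"])
```

A timeout keeps a never-true condition from hanging the test suite forever, which is the remaining failure mode once fixed sleeps are gone.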
[
{
"msg_contents": "On 28.10.2020 17:57, Даниил Захлыстов wrote:\n> Hi, thanks for the patch!\n> I’ve made a quick review and found one issue.\n> If the backend sends a CompressionAck message followed by some already \n> compressed message (for example, AuthenticationOk), then there is a \n> chance that pqReadData() will read both messages into the read buffer \n> at once. In this case, the CompressionAck message will be read \n> normally, but the client will fail to recognize the next message (for \n> example, AuthenticationOk) since it came in a compressed form but was \n> incorrectly read as a regular message. So the client would not be able \n> to recognize the second message and will crash.\n> Example of a successful launch (added some debug output):\n> usernamedt-osx: ~ usernamedt $ psql -d \"host = x.x.x.x port = 6432 \n> dbname = testdb user = testuser compression = 1\"\n> NUM_READ: 6 (pqReadData read CompressionAck (6 bytes) and nothing more)\n> pqReadData RC: 1\n> NUM_READ: 346\n> pqReadData RC: 1\n> psql (14devel)\n> Type \"help\" for help.\n> testdb => // OK\n> Example of a failed launch:\n> usernamedt-osx: ~ usernamedt $ psql -d \"host = x.x.x.x port = 6432 \n> dbname = testdb user = testuser compression = 1\"\n> NUM_READ: 24 (pqReadData read CompressionAck (6 bytes) and compressed \n> AuthenticationOk (18 bytes) came after it)\n> pqReadData RC: 1\n> psql: error: could not connect to server: expected authentication \n> request from server, but received x // FAIL\n\nThank you for reporting the problem.\nFix is attached.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 28 Oct 2020 19:25:34 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: libpq compression"
}
] |
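The failure reported above happens when one read delivers both the CompressionAck and the first compressed message: the bytes already sitting in the read buffer past the ack must be handed to the decompressor, not parsed as plain protocol data. A minimal sketch of the correct handling, using zlib and an invented 6-byte ack marker (this is not libpq's actual wire format, just an illustration of the buffering issue):

```python
import zlib

# Hypothetical framing: a fixed 6-byte ack, then compressed data.
ACK = b"CMPACK"

def read_after_ack(buf):
    """Consume the ack, then decompress everything that follows --
    crucially including bytes that arrived in the same read as the ack,
    instead of parsing them as uncompressed protocol messages."""
    assert buf.startswith(ACK)
    rest = buf[len(ACK):]          # leftover buffered bytes are compressed
    return zlib.decompress(rest) if rest else b""

payload = zlib.compress(b"AuthenticationOk")
# both messages delivered by a single recv(): the failing case in the report
assert read_after_ack(ACK + payload) == b"AuthenticationOk"
```

The buggy code path effectively ignored `rest` and waited for the next read, so the already-buffered compressed AuthenticationOk was parsed as a plain message and the client bailed out.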
[
{
"msg_contents": "Starting separate threads to keep from cluttering the TODO list thread.\n\nHere's a patch for the subject, as mentioned in\nhttps://www.postgresql.org/message-id/20201027220555.GS4951%40momjian.us\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 28 Oct 2020 14:15:27 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "document pg_settings view doesn't display custom options"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 2:15 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> Starting separate threads to keep from cluttering the TODO list thread.\n>\n> Here's a patch for the subject, as mentioned in\n> https://www.postgresql.org/message-id/20201027220555.GS4951%40momjian.us\n>\n\nI just realized I introduced a typo, so here's v2.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 28 Oct 2020 14:45:30 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "\n\nOn 2020/10/29 3:45, John Naylor wrote:\n> On Wed, Oct 28, 2020 at 2:15 PM John Naylor <john.naylor@enterprisedb.com <mailto:john.naylor@enterprisedb.com>> wrote:\n> \n> Starting separate threads to keep from cluttering the TODO list thread.\n> \n> Here's a patch for the subject, as mentioned in\n> https://www.postgresql.org/message-id/20201027220555.GS4951%40momjian.us\n> \n> \n> I just realized I introduced a typo, so here's v2.\n\n+ The <structname>pg_settings</structname> view does not display\n+ <link linkend=\"runtime-config-custom\">customized options</link>.\n\nThis is true until the module that defines the customized options is loaded,\nbut not after that. No? For example, pg_settings displays\npg_stat_statements.max after pg_stat_statements is loaded.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 29 Oct 2020 12:38:08 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 11:38 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/10/29 3:45, John Naylor wrote:\n> > On Wed, Oct 28, 2020 at 2:15 PM John Naylor <\n> john.naylor@enterprisedb.com <mailto:john.naylor@enterprisedb.com>> wrote:\n> >\n> > Starting separate threads to keep from cluttering the TODO list\n> thread.\n> >\n> > Here's a patch for the subject, as mentioned in\n> >\n> https://www.postgresql.org/message-id/20201027220555.GS4951%40momjian.us\n> >\n> >\n> > I just realized I introduced a typo, so here's v2.\n>\n> + The <structname>pg_settings</structname> view does not display\n> + <link linkend=\"runtime-config-custom\">customized options</link>.\n>\n> This is true until the module that defines the customized options is\n> loaded,\n> but not after that. No? For example, pg_settings displays\n> pg_stat_statements.max after pg_stat_statements is loaded.\n>\n\nTrue, how about this:\n\n The <structname>pg_settings</structname> does not display\n <link linkend=\"runtime-config-custom\">customized options</link>\n that have been set before the relevant extension module has been loaded.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Oct 28, 2020 at 11:38 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\nOn 2020/10/29 3:45, John Naylor wrote:\n> On Wed, Oct 28, 2020 at 2:15 PM John Naylor <john.naylor@enterprisedb.com <mailto:john.naylor@enterprisedb.com>> wrote:\n> \n> Starting separate threads to keep from cluttering the TODO list thread.\n> \n> Here's a patch for the subject, as mentioned in\n> https://www.postgresql.org/message-id/20201027220555.GS4951%40momjian.us\n> \n> \n> I just realized I introduced a typo, so here's v2.\n\n+ The <structname>pg_settings</structname> view does not display\n+ <link linkend=\"runtime-config-custom\">customized options</link>.\n\nThis is true until the module that defines the customized options is loaded,\nbut 
not after that. No? For example, pg_settings displays\npg_stat_statements.max after pg_stat_statements is loaded.\nTrue, how about this: The <structname>pg_settings</structname> does not display <link linkend=\"runtime-config-custom\">customized options</link> that have been set before the relevant extension module has been loaded.-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 29 Oct 2020 08:54:07 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "\n\nOn 2020/10/29 21:54, John Naylor wrote:\n> \n> \n> On Wed, Oct 28, 2020 at 11:38 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> \n> \n> On 2020/10/29 3:45, John Naylor wrote:\n> > On Wed, Oct 28, 2020 at 2:15 PM John Naylor <john.naylor@enterprisedb.com <mailto:john.naylor@enterprisedb.com> <mailto:john.naylor@enterprisedb.com <mailto:john.naylor@enterprisedb.com>>> wrote:\n> >\n> > Starting separate threads to keep from cluttering the TODO list thread.\n> >\n> > Here's a patch for the subject, as mentioned in\n> > https://www.postgresql.org/message-id/20201027220555.GS4951%40momjian.us\n> >\n> >\n> > I just realized I introduced a typo, so here's v2.\n> \n> + The <structname>pg_settings</structname> view does not display\n> + <link linkend=\"runtime-config-custom\">customized options</link>.\n> \n> This is true until the module that defines the customized options is loaded,\n> but not after that. No? For example, pg_settings displays\n> pg_stat_statements.max after pg_stat_statements is loaded.\n> \n> \n> True, how about this:\n> \n> The <structname>pg_settings</structname> does not display\n> <link linkend=\"runtime-config-custom\">customized options</link>\n> that have been set before the relevant extension module has been loaded.\n\nI guess that someone can misread this as\n\n customized options that have been set before the relevant extension\n module has been loaded are not displayed even after the module is loaded.\n\nSo what about the following, instead?\n\n The pg_settings does not display customized options until the extension\n module that defines them has been loaded.\n\nAlso I think this note should be in the different paragraph from the paragraph\nof \"The pg_settings view cannot be inserted into or deleted from....\" because\nthey are different topics. 
Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 30 Oct 2020 12:50:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 11:51 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/10/29 21:54, John Naylor wrote:\n>\n> > The <structname>pg_settings</structname> does not display\n> > <link linkend=\"runtime-config-custom\">customized options</link>\n> > that have been set before the relevant extension module has been\n> loaded.\n>\n> I guess that someone can misread this as\n>\n> customized options that have been set before the relevant extension\n> module has been loaded are not displayed even after the module is\n> loaded.\n>\n> So what about the following, instead?\n>\n> The pg_settings does not display customized options until the\n> extension\n> module that defines them has been loaded.\n>\n> Also I think this note should be in the different paragraph from the\n> paragraph\n> of \"The pg_settings view cannot be inserted into or deleted from....\"\n> because\n> they are different topics. Thought?\n>\n\nAgreed on both points. In a separate paragraph, I think it's awkward to\nstart two consecutive sentences with \"The pg_settings view\". If we put it\nin the previous paragraph we could phrase it like this:\n\n\"See Section 20.1 for more information about the various ways to change\nthese parameters. 
Customized options are not displayed until the\nextension module that defines them has been loaded.\n\nThe pg_settings view...\"\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Thu, Oct 29, 2020 at 11:51 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\nOn 2020/10/29 21:54, John Naylor wrote:\n> The <structname>pg_settings</structname> does not display\n> <link linkend=\"runtime-config-custom\">customized options</link>\n> that have been set before the relevant extension module has been loaded.\n\nI guess that someone can misread this as\n\n customized options that have been set before the relevant extension\n module has been loaded are not displayed even after the module is loaded.\n\nSo what about the following, instead?\n\n The pg_settings does not display customized options until the extension\n module that defines them has been loaded.\n\nAlso I think this note should be in the different paragraph from the paragraph\nof \"The pg_settings view cannot be inserted into or deleted from....\" because\nthey are different topics. Thought?\nAgreed on both points. In a separate paragraph, I think it's awkward to start two consecutive sentences with \"The pg_settings view\". If we put it in the previous paragraph we could phrase it like this:\"See Section 20.1 for more information about the various ways to change these parameters. Customized options are not displayed until the extension module that defines them has been loaded.The pg_settings view...\"-- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 30 Oct 2020 12:01:04 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Thu, Oct 29, 2020 at 11:51 PM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n>> Also I think this note should be in the different paragraph from the\n>> paragraph\n>> of \"The pg_settings view cannot be inserted into or deleted from....\"\n>> because\n>> they are different topics. Thought?\n\n> Agreed on both points. In a separate paragraph, I think it's awkward to\n> start two consecutive sentences with \"The pg_settings view\". If we put it\n> in the previous paragraph we could phrase it like this:\n\n> \"See Section 20.1 for more information about the various ways to change\n> these parameters. Customized options are not displayed until the\n> extension module that defines them has been loaded.\n\nThat just moves the subject-inconsistency to a different para :-(\nI think this item should be its own new para.\n\nAs for the repetitiveness, we could just say \"This view ...\", in one or\neven both paras.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Oct 2020 12:07:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 12:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Thu, Oct 29, 2020 at 11:51 PM Fujii Masao <\n> masao.fujii@oss.nttdata.com>\n> > wrote:\n> >> Also I think this note should be in the different paragraph from the\n> >> paragraph\n> >> of \"The pg_settings view cannot be inserted into or deleted from....\"\n> >> because\n> >> they are different topics. Thought?\n>\n> > Agreed on both points. In a separate paragraph, I think it's awkward to\n> > start two consecutive sentences with \"The pg_settings view\". If we put it\n> > in the previous paragraph we could phrase it like this:\n>\n> > \"See Section 20.1 for more information about the various ways to change\n> > these parameters. Customized options are not displayed until the\n> > extension module that defines them has been loaded.\n>\n> That just moves the subject-inconsistency to a different para :-(\n> I think this item should be its own new para.\n>\n> As for the repetitiveness, we could just say \"This view ...\", in one or\n> even both paras.\n>\n\nOkay, along those lines here's a patch using \"this view\" in a new paragraph\nfor simplicity.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 30 Oct 2020 12:35:47 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> Okay, along those lines here's a patch using \"this view\" in a new paragraph\n> for simplicity.\n\nBasically OK with me, but ...\n\n<quibble>\nIt seems fairly weird to use a nonspecific reference first and then a\nspecific one. That is, I'd expect to read \"The pg_settings view ...\"\nand then \"This view ...\", not the other way around. So we could\nput this para second, or put it first but make this para say\n\"The pg_settings view ...\" while the existing text gets reduced to\n\"This view ...\".\n\nOr just make them both say \"This view ...\" so we don't have to have\nthis discussion again the next time somebody wants to add a para here.\n</quibble>\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Oct 2020 12:48:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 12:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > Okay, along those lines here's a patch using \"this view\" in a new\n> paragraph\n> > for simplicity.\n>\n> Basically OK with me, but ...\n>\n> <quibble>\n> It seems fairly weird to use a nonspecific reference first and then a\n> specific one. That is, I'd expect to read \"The pg_settings view ...\"\n> and then \"This view ...\", not the other way around. So we could\n> put this para second, or put it first but make this para say\n> \"The pg_settings view ...\" while the existing text gets reduced to\n> \"This view ...\".\n>\n> Or just make them both say \"This view ...\" so we don't have to have\n> this discussion again the next time somebody wants to add a para here.\n> </quibble>\n>\n\nOkay, how's this?\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 30 Oct 2020 13:06:56 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "\n\nOn 2020/10/31 2:06, John Naylor wrote:\n> \n> \n> On Fri, Oct 30, 2020 at 12:48 PM Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> wrote:\n> \n> John Naylor <john.naylor@enterprisedb.com <mailto:john.naylor@enterprisedb.com>> writes:\n> > Okay, along those lines here's a patch using \"this view\" in a new paragraph\n> > for simplicity.\n> \n> Basically OK with me, but ...\n> \n> <quibble>\n> It seems fairly weird to use a nonspecific reference first and then a\n> specific one. That is, I'd expect to read \"The pg_settings view ...\"\n> and then \"This view ...\", not the other way around. So we could\n> put this para second, or put it first but make this para say\n> \"The pg_settings view ...\" while the existing text gets reduced to\n> \"This view ...\".\n> \n> Or just make them both say \"This view ...\" so we don't have to have\n> this discussion again the next time somebody wants to add a para here.\n> </quibble>\n> \n> \n> Okay, how's this?\n\nLooks good to me. Barring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 7 Nov 2020 00:42:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "\n\nOn 2020/11/07 0:42, Fujii Masao wrote:\n> \n> \n> On 2020/10/31 2:06, John Naylor wrote:\n>>\n>>\n>> On Fri, Oct 30, 2020 at 12:48 PM Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> wrote:\n>>\n>> John Naylor <john.naylor@enterprisedb.com <mailto:john.naylor@enterprisedb.com>> writes:\n>> > Okay, along those lines here's a patch using \"this view\" in a new paragraph\n>> > for simplicity.\n>>\n>> Basically OK with me, but ...\n>>\n>> <quibble>\n>> It seems fairly weird to use a nonspecific reference first and then a\n>> specific one. That is, I'd expect to read \"The pg_settings view ...\"\n>> and then \"This view ...\", not the other way around. So we could\n>> put this para second, or put it first but make this para say\n>> \"The pg_settings view ...\" while the existing text gets reduced to\n>> \"This view ...\".\n>>\n>> Or just make them both say \"This view ...\" so we don't have to have\n>> this discussion again the next time somebody wants to add a para here.\n>> </quibble>\n>>\n>>\n>> Okay, how's this?\n> \n> Looks good to me. Barring any objection, I will commit the patch.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 9 Nov 2020 15:12:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 2:12 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n> Pushed. Thanks!\n>\n\nThank you!\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Mon, Nov 9, 2020 at 2:12 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:Pushed. Thanks!Thank you! -- John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Nov 2020 09:41:01 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: document pg_settings view doesn't display custom options"
}
]
[
{
"msg_contents": "Hello,\n\nI know, nobody in their mind should do that, but, if the postmaster\nprocess is killed with SIGKILL signal, most backend processes\ncorrectly notice the fact of the postmaster process absence and exit.\nThere is one exception though, when there are autovacuum worker\nprocesses they are continuing to run until eventually finish and exit.\n\nSteps to reproduce:\n1. Initialize the cluster and start it up\n\n2. Create a table and fill it up with some data:\nlocalhost/postgres=# create table foo(id int);\nCREATE TABLE\nlocalhost/postgres=# alter table foo set (autovacuum_vacuum_cost_delay = 100);\nALTER TABLE\nlocalhost/postgres=# insert into foo select * from generate_series(1, 10000000);\nINSERT 0 10000000\nlocalhost/postgres=# \\dt+ foo\n List of relations\nSchema │ Name │ Type │ Owner │ Persistence │ Size │ Description\n────────┼──────┼───────┼──────────┼─────────────┼────────┼─────────────\npublic │ foo │ table │ postgres │ permanent │ 346 MB │\n(1 row)\n\n3. Wait until autovacuum worker process started and kill it:\n$ ps auxwwwf | grep [p]ostgres\nakukush+ 7728 0.0 0.1 321244 26836 ? S 15:51 0:00\npostgres -D data\nakukush+ 7730 0.0 0.0 173488 4312 ? Ss 15:51 0:00 \\_\npostgres: logger\nakukush+ 7732 0.0 0.0 321584 8808 ? Ss 15:51 0:00 \\_\npostgres: checkpointer\nakukush+ 7733 0.0 0.4 321376 70688 ? Ss 15:51 0:00 \\_\npostgres: background writer\nakukush+ 7734 0.0 0.0 321244 9780 ? Ss 15:51 0:00 \\_\npostgres: walwriter\nakukush+ 7735 0.0 0.0 321796 6684 ? Ss 15:51 0:00 \\_\npostgres: autovacuum launcher\nakukush+ 7736 0.0 0.0 175608 6224 ? Ss 15:51 0:00 \\_\npostgres: archiver last was 0000002E000000000000002A\nakukush+ 7737 0.0 0.0 175608 4340 ? Ss 15:51 0:00 \\_\npostgres: stats collector\nakukush+ 7738 0.0 0.0 321672 6812 ? Ss 15:51 0:00 \\_\npostgres: logical replication launcher\nakukush+ 7743 0.0 0.0 322460 14624 ? 
Ss 15:52 0:00 \\_\npostgres: postgres postgres 127.0.0.1(39130) idle\n\n$ ps auxwwwf | grep [p]ostgres | grep auto\nakukush+ 7735 0.0 0.0 321796 6684 ? Ss 15:51 0:00 \\_\npostgres: autovacuum launcher\nakukush+ 10483 1.0 0.0 322432 12472 ? Ss 16:28 0:00 \\_\npostgres: autovacuum worker postgres\n\n$ kill -9 7728\n\n$ ps auxwwwf | grep [p]ostgres | grep auto\nakukush+ 10483 0.7 0.0 322432 12472 ? Ss 16:28 0:00\npostgres: autovacuum worker postgres\n\nAnd here is gdb backtrace:\n(gdb) bt\n#0 0x00007f6e1c80c0f7 in __GI___select (nfds=nfds@entry=0,\nreadfds=readfds@entry=0x0, writefds=writefds@entry=0x0,\nexceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7fff6cf1b580)\nat ../sysdeps/unix/sysv/linux/select.c:41\n#1 0x000055dbd93ade2d in pg_usleep (microsec=<optimized out>) at\n./build/../src/port/pgsleep.c:56\n#2 0x000055dbd90b7543 in vacuum_delay_point () at\n./build/../src/backend/commands/vacuum.c:2034\n#3 0x000055dbd8f5c00d in lazy_scan_heap (aggressive=false,\nnindexes=0, Irel=0x0, vacrelstats=<optimized out>,\nparams=0x55dbdaac7e7c, onerel=<optimized out>) at\n./build/../src/backend/access/heap/vacuumlazy.c:1034\n#4 heap_vacuum_rel (onerel=<optimized out>, params=0x55dbdaac7e7c,\nbstrategy=<optimized out>) at\n./build/../src/backend/access/heap/vacuumlazy.c:518\n#5 0x000055dbd90b561d in table_relation_vacuum (bstrategy=<optimized\nout>, params=0x55dbdaac7e7c, rel=0x7f6e1f3105f0) at\n./build/../src/include/access/tableam.h:1460\n#6 vacuum_rel (relid=16396, relation=<optimized out>,\nparams=params@entry=0x55dbdaac7e7c) at\n./build/../src/backend/commands/vacuum.c:1893\n#7 0x000055dbd90b68c5 in vacuum (relations=0x55dbdab38588,\nparams=params@entry=0x55dbdaac7e7c, bstrategy=<optimized out>,\nbstrategy@entry=0x55dbdaac7f98, isTopLevel=isTopLevel@entry=true) at\n./build/../src/backend/commands/vacuum.c:449\n#8 0x000055dbd8f0777b in autovacuum_do_vac_analyze\n(bstrategy=0x55dbdaac7f98, tab=0x55dbdaac7e78) at\n./build/../src/backend/postmaster/autovacuum.c:3137\n#9 
do_autovacuum () at ./build/../src/backend/postmaster/autovacuum.c:2467\n#10 0x000055dbd8f07e2a in AutoVacWorkerMain (argv=0x0, argc=0) at\n./build/../src/backend/postmaster/autovacuum.c:1694\n#11 0x000055dbd91a753a in StartAutoVacWorker () at\n./build/../src/backend/postmaster/autovacuum.c:1488\n#12 0x000055dbd91b54ca in StartAutovacuumWorker () at\n./build/../src/backend/postmaster/postmaster.c:5613\n#13 sigusr1_handler (postgres_signal_arg=<optimized out>) at\n./build/../src/backend/postmaster/postmaster.c:5320\n#14 <signal handler called>\n#15 0x00007f6e1c80c0f7 in __GI___select (nfds=nfds@entry=10,\nreadfds=readfds@entry=0x7fff6cf1c7f0, writefds=writefds@entry=0x0,\nexceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7fff6cf1c750)\nat ../sysdeps/unix/sysv/linux/select.c:41\n#16 0x000055dbd91b5759 in ServerLoop () at\n./build/../src/backend/postmaster/postmaster.c:1703\n#17 0x000055dbd91b75e3 in PostmasterMain (argc=17, argv=<optimized\nout>) at ./build/../src/backend/postmaster/postmaster.c:1412\n#18 0x000055dbd8f0a3c8 in main (argc=17, argv=0x55dbdaa4aef0) at\n./build/../src/backend/main/main.c:210\n\nFifteen minutes later process 10483 was still alive, but the backtrace\nlooked a bit different (it finished vacuuming and was doing analyze):\n(gdb) bt\n#0 0x00007f6e1c80c0f7 in __GI___select (nfds=nfds@entry=0,\nreadfds=readfds@entry=0x0, writefds=writefds@entry=0x0,\nexceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7fff6cf1b480)\nat ../sysdeps/unix/sysv/linux/select.c:41\n#1 0x000055dbd93ade2d in pg_usleep (microsec=<optimized out>) at\n./build/../src/port/pgsleep.c:56\n#2 0x000055dbd90b7543 in vacuum_delay_point () at\n./build/../src/backend/commands/vacuum.c:2034\n#3 0x000055dbd9037251 in acquire_sample_rows\n(onerel=onerel@entry=0x7f6e1f310070, elevel=elevel@entry=13,\nrows=rows@entry=0x7f6e1f2b8048, targrows=targrows@entry=30000,\ntotalrows=totalrows@entry=0x7fff6cf1b6d8,\ntotaldeadrows=totaldeadrows@entry=0x7fff6cf1b6e0)\n at 
./build/../src/backend/commands/analyze.c:1079\n#4 0x000055dbd9039d51 in do_analyze_rel\n(onerel=onerel@entry=0x7f6e1f310070,\nparams=params@entry=0x55dbdaac7e7c, va_cols=va_cols@entry=0x0,\nacquirefunc=0x55dbd9037110 <acquire_sample_rows>, relpages=44248,\ninh=inh@entry=false, in_outer_xact=false, elevel=13)\n at ./build/../src/backend/commands/analyze.c:522\n#5 0x000055dbd903b452 in analyze_rel (relid=<optimized out>,\nrelation=<optimized out>, params=params@entry=0x55dbdaac7e7c,\nva_cols=0x0, in_outer_xact=<optimized out>, bstrategy=<optimized out>)\nat ./build/../src/backend/commands/analyze.c:263\n#6 0x000055dbd90b6884 in vacuum (relations=0x55dbdab38588,\nparams=params@entry=0x55dbdaac7e7c, bstrategy=<optimized out>,\nbstrategy@entry=0x55dbdaac7f98, isTopLevel=isTopLevel@entry=true) at\n./build/../src/backend/commands/vacuum.c:466\n#7 0x000055dbd8f0777b in autovacuum_do_vac_analyze\n(bstrategy=0x55dbdaac7f98, tab=0x55dbdaac7e78) at\n./build/../src/backend/postmaster/autovacuum.c:3137\n#8 do_autovacuum () at ./build/../src/backend/postmaster/autovacuum.c:2467\n#9 0x000055dbd8f07e2a in AutoVacWorkerMain (argv=0x0, argc=0) at\n./build/../src/backend/postmaster/autovacuum.c:1694\n#10 0x000055dbd91a753a in StartAutoVacWorker () at\n./build/../src/backend/postmaster/autovacuum.c:1488\n#11 0x000055dbd91b54ca in StartAutovacuumWorker () at\n./build/../src/backend/postmaster/postmaster.c:5613\n#12 sigusr1_handler (postgres_signal_arg=<optimized out>) at\n./build/../src/backend/postmaster/postmaster.c:5320\n#13 <signal handler called>\n#14 0x00007f6e1c80c0f7 in __GI___select (nfds=nfds@entry=10,\nreadfds=readfds@entry=0x7fff6cf1c7f0, writefds=writefds@entry=0x0,\nexceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7fff6cf1c750)\nat ../sysdeps/unix/sysv/linux/select.c:41\n#15 0x000055dbd91b5759 in ServerLoop () at\n./build/../src/backend/postmaster/postmaster.c:1703\n#16 0x000055dbd91b75e3 in PostmasterMain (argc=17, argv=<optimized\nout>) at 
./build/../src/backend/postmaster/postmaster.c:1412\n#17 0x000055dbd8f0a3c8 in main (argc=17, argv=0x55dbdaa4aef0) at\n./build/../src/backend/main/main.c:210\n\nEventually, after 20-25 minutes the process 10483 exited.\n\nI was able to reproduce it with 13.0 and 12.4, and I believe older\nversions are also affected.\n\nRegards,\n--\nAlexander Kukushkin\n\n\n",
"msg_date": "Wed, 28 Oct 2020 19:44:18 +0100",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": true,
"msg_subject": "Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "ср, 28 окт. 2020 г. в 19:44, Alexander Kukushkin <cyberdemn@gmail.com>:\n\n> I know, nobody in their mind should do that, but, if the postmaster\n> process is killed with SIGKILL signal, most backend processes\n> correctly notice the fact of the postmaster process absence and exit.\n> There is one exception though, when there are autovacuum worker\n> processes they are continuing to run until eventually finish and exit.\n>\n> …\n>\n> I was able to reproduce it with 13.0 and 12.4, and I believe older\n> versions are also affected.\n>\n\nDo you get the same behaviour also on master?\nAs there was some work in this area for 14, see\nhttps://git.postgresql.org/pg/commitdiff/44fc6e259b\n\n-- \nVictor Yegorov\n\nср, 28 окт. 2020 г. в 19:44, Alexander Kukushkin <cyberdemn@gmail.com>:I know, nobody in their mind should do that, but, if the postmaster\nprocess is killed with SIGKILL signal, most backend processes\ncorrectly notice the fact of the postmaster process absence and exit.\nThere is one exception though, when there are autovacuum worker\nprocesses they are continuing to run until eventually finish and exit.\n…\n\nI was able to reproduce it with 13.0 and 12.4, and I believe older\nversions are also affected.Do you get the same behaviour also on master?As there was some work in this area for 14, see https://git.postgresql.org/pg/commitdiff/44fc6e259b -- Victor Yegorov",
"msg_date": "Thu, 29 Oct 2020 00:08:50 +0100",
"msg_from": "Victor Yegorov <vyegorov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "Victor Yegorov <vyegorov@gmail.com> writes:\n> ср, 28 окт. 2020 г. в 19:44, Alexander Kukushkin <cyberdemn@gmail.com>:\n>> I know, nobody in their mind should do that, but, if the postmaster\n>> process is killed with SIGKILL signal, most backend processes\n>> correctly notice the fact of the postmaster process absence and exit.\n>> There is one exception though, when there are autovacuum worker\n>> processes they are continuing to run until eventually finish and exit.\n\n> Do you get the same behaviour also on master?\n> As there was some work in this area for 14, see\n> https://git.postgresql.org/pg/commitdiff/44fc6e259b\n\nThat was about SIGQUIT response, which isn't really related to this\nscenario. But I do not think Alexander has accurately characterized\nthe situation. *No* server processes will react instantly to postmaster\ndeath. Typically they'll only detect it while waiting for some other\ncondition, such as client input, or in some cases while iterating their\noutermost loop. So if they're busy with calculations they might not\nnotice for a long time. I don't think autovacuum is any worse than\na busy client backend on this score.\n\nIt's hard to do better than that, because on most platforms there's\nno way to get a signal on parent-process death, so the only way to\nnotice would be to poll the postmaster-death pipe constantly; which\nwould be hugely expensive in comparison to the value.\n\nOn the whole I'm skeptical that this is a useful consideration to\nexpend effort on. You shouldn't be killing the postmaster that way.\nIf you do, you'll soon learn not to, for plenty of reasons besides\nthis one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Oct 2020 19:41:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Victor Yegorov <vyegorov@gmail.com> writes:\n> > ср, 28 окт. 2020 г. в 19:44, Alexander Kukushkin <cyberdemn@gmail.com>:\n> >> I know, nobody in their mind should do that, but, if the postmaster\n> >> process is killed with SIGKILL signal, most backend processes\n> >> correctly notice the fact of the postmaster process absence and exit.\n> >> There is one exception though, when there are autovacuum worker\n> >> processes they are continuing to run until eventually finish and exit.\n> \n> > Do you get the same behaviour also on master?\n> > As there was some work in this area for 14, see\n> > https://git.postgresql.org/pg/commitdiff/44fc6e259b\n> \n> That was about SIGQUIT response, which isn't really related to this\n> scenario. But I do not think Alexander has accurately characterized\n> the situation. *No* server processes will react instantly to postmaster\n> death. Typically they'll only detect it while waiting for some other\n> condition, such as client input, or in some cases while iterating their\n> outermost loop. So if they're busy with calculations they might not\n> notice for a long time. 
I don't think autovacuum is any worse than\n> a busy client backend on this score.\n\nConsidering how long an autovacuum can run, it seems like it'd be\nworthwhile to find a useful place to check for postmaster-death.\nTypical well-running systems are going to be waiting for the client\npretty frequently and therefore this does make autovacuum stick out in\nthis case.\n\n> It's hard to do better than that, because on most platforms there's\n> no way to get a signal on parent-process death, so the only way to\n> notice would be to poll the postmaster-death pipe constantly; which\n> would be hugely expensive in comparison to the value.\n\nI agree that 'constantly' wouldn't be great, but with some periodicity\nthat's more frequent than 'not until a few hours later when we finally\nfinish vacuuming this relation' would be nice. At least with autovauum\nwe may be periodically sleeping anyway so it doesn't seem like polling\nat that point would really be terrible, though it'd be nice to check\nevery once in a while even if we aren't sleeping.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 29 Oct 2020 12:08:32 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I agree that 'constantly' wouldn't be great, but with some periodicity\n> that's more frequent than 'not until a few hours later when we finally\n> finish vacuuming this relation' would be nice. At least with autovauum\n> we may be periodically sleeping anyway so it doesn't seem like polling\n> at that point would really be terrible, though it'd be nice to check\n> every once in a while even if we aren't sleeping.\n\nMaybe put a check into vacuum_delay_point, and poll the pipe when we're\nabout to sleep anyway? That wouldn't fix anything except autovacuum,\nbut if you're right that that's a primary pain point then it'd help.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Oct 2020 12:27:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "On 2020-Oct-29, Stephen Frost wrote:\n\n> > It's hard to do better than that, because on most platforms there's\n> > no way to get a signal on parent-process death, so the only way to\n> > notice would be to poll the postmaster-death pipe constantly; which\n> > would be hugely expensive in comparison to the value.\n> \n> I agree that 'constantly' wouldn't be great, but with some periodicity\n> that's more frequent than 'not until a few hours later when we finally\n> finish vacuuming this relation' would be nice. At least with autovauum\n> we may be periodically sleeping anyway so it doesn't seem like polling\n> at that point would really be terrible, though it'd be nice to check\n> every once in a while even if we aren't sleeping.\n\nvacuum_delay_point seems an obvious candidate, as soon as we've\ndetermined that the sleep interval is > 0; since we're going to sleep,\nthe cost of a syscall seems negligible. I'm not sure what to suggest\nfor vacuums that don't have vacuum costing active, though.\n\n\n",
"msg_date": "Thu, 29 Oct 2020 13:32:27 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "On 2020-Oct-28, Alexander Kukushkin wrote:\n\n> Hello,\n> \n> I know, nobody in their mind should do that, but, if the postmaster\n> process is killed with SIGKILL signal, most backend processes\n> correctly notice the fact of the postmaster process absence and exit.\n> There is one exception though, when there are autovacuum worker\n> processes they are continuing to run until eventually finish and exit.\n\nSo, if you have a manual vacuum running on the table (with\nvacuum_cost_delay=0) and kill -KILL the postmaster, that one also\nlingers arbitrarily long afterwards?\n\n(I suppose the problem is not as obvious just because the vacuum\nwouldn't run as long, because of no vacuum cost delay; but it'd still be\na problem if you made the table bigger.)\n\n\n",
"msg_date": "Thu, 29 Oct 2020 13:35:07 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "Hi,\n\nOn 2020-10-29 12:27:53 -0400, Tom Lane wrote:\n> Maybe put a check into vacuum_delay_point, and poll the pipe when we're\n> about to sleep anyway?\n\nPerhaps we should just replace the pg_usleep() with a latch wait?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Oct 2020 11:01:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
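The distinction Andres is drawing — a blind pg_usleep() versus a WaitLatch()-style wait that returns early when the latch is set (e.g. on postmaster death) — can be illustrated with a plain event object. A conceptual Python analogue, not the Postgres latch API:

```python
import threading
import time

# A latch-style wait returns as soon as it is "set"; a plain sleep would
# blindly run out its full timeout no matter what happened meanwhile.
latch = threading.Event()

def notice_postmaster_death():
    time.sleep(0.05)         # some other mechanism notices the death...
    latch.set()              # ...and sets the latch

threading.Thread(target=notice_postmaster_death, daemon=True).start()

start = time.monotonic()
woke_early = latch.wait(timeout=5.0)   # a blind sleep would pin us here for 5s
elapsed = time.monotonic() - start

assert woke_early and elapsed < 1.0
print(f"interruptible wait returned after {elapsed:.2f}s instead of 5s")
```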
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2020-10-29 12:27:53 -0400, Tom Lane wrote:\n> > Maybe put a check into vacuum_delay_point, and poll the pipe when we're\n> > about to sleep anyway?\n> \n> Perhaps we should just replace the pg_usleep() with a latch wait?\n\nI'm not sure why, but I had the thought that we already had done that,\nand was a bit surprised that it wasn't that way, so +1 from my part.\n\nI do think it'd be good to find a way to check every once in a while\neven when we aren't going to delay though. Not sure what the best\nanswer there is.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 29 Oct 2020 14:05:55 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "On 2020-Oct-29, Stephen Frost wrote:\n\n> I do think it'd be good to find a way to check every once in a while\n> even when we aren't going to delay though. Not sure what the best\n> answer there is.\n\nMaybe instead of thinking specifically in terms of vacuum, we could\ncount buffer accesses (read from kernel) and check the latch once every\n1000th such, or something like that. Then a very long query doesn't\nhave to wait until it's run to completion. The cost is one integer\naddition per syscall, which should be bearable.\n\n(This doesn't help with a query that's running arbitrarily outside of\nPostgres, or doing something that doesn't access disk -- but it'd help\nwith a majority of problem cases.)\n\n\n",
"msg_date": "Thu, 29 Oct 2020 18:36:31 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
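The counting scheme Alvaro describes amortizes the cost: the hot path pays only an integer increment, and only every Nth buffer access pays for the actual liveness syscall. A toy sketch of that idea (the threshold and helper names are invented for illustration):

```python
CHECK_EVERY = 1000                     # illustrative threshold
counter = 0
checks_performed = []

def on_buffer_access(liveness_check):
    # Hot path: one integer addition. The expensive check runs only on
    # every CHECK_EVERY-th call.
    global counter
    counter += 1
    if counter % CHECK_EVERY == 0:
        liveness_check()

for _ in range(3500):                  # simulate 3500 kernel reads
    on_buffer_access(lambda: checks_performed.append(counter))

assert checks_performed == [1000, 2000, 3000]
print(f"3500 accesses -> {len(checks_performed)} liveness checks")
```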
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Oct-29, Stephen Frost wrote:\n>> I do think it'd be good to find a way to check every once in a while\n>> even when we aren't going to delay though. Not sure what the best\n>> answer there is.\n\n> Maybe instead of thinking specifically in terms of vacuum, we could\n> count buffer accesses (read from kernel) and check the latch once every\n> 1000th such, or something like that. Then a very long query doesn't\n> have to wait until it's run to completion. The cost is one integer\n> addition per syscall, which should be bearable.\n\nI'm kind of unwilling to add any syscalls at all to normal execution\ncode paths for this purpose. People shouldn't be sig-kill'ing the\npostmaster, or if they do, cleaning up the mess is their responsibility.\nI'd also suggest that adding nearly-untestable code paths for this\npurpose is a fine way to add bugs we'll never catch.\n\nThe if-we're-going-to-delay-anyway path in vacuum_delay_point seems\nOK to add a touch more overhead to, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Oct 2020 17:47:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2020-Oct-29, Stephen Frost wrote:\n> >> I do think it'd be good to find a way to check every once in a while\n> >> even when we aren't going to delay though. Not sure what the best\n> >> answer there is.\n> \n> > Maybe instead of thinking specifically in terms of vacuum, we could\n> > count buffer accesses (read from kernel) and check the latch once every\n> > 1000th such, or something like that. Then a very long query doesn't\n> > have to wait until it's run to completion. The cost is one integer\n> > addition per syscall, which should be bearable.\n> \n> I'm kind of unwilling to add any syscalls at all to normal execution\n> code paths for this purpose. People shouldn't be sig-kill'ing the\n> postmaster, or if they do, cleaning up the mess is their responsibility.\n> I'd also suggest that adding nearly-untestable code paths for this\n> purpose is a fine way to add bugs we'll never catch.\n\nNot sure if either is at all viable, but I had a couple of thoughts\nabout other ways to possibly address this.\n\nThe first simplistic idea is this- we have lots of processes that pick\nup pretty quickly on the postmaster going away due to checking if it's\nstill around while waiting for something else to happen anyway (like the\nautovacuum launcher...), and we have CFI's in a lot of places where it's\nreasonable to do a CFI but isn't alright to check for postmaster death.\nWhile it'd be better if there were more platforms where parent death\nwould send a signal to the children, that doesn't seem to be coming any\ntime soon- so why don't we do it ourselves? That is, when we discover\nthat the postmaster has died, scan through the proc array (carefully,\nsince it could be garbage, but all we're looking for are the PIDs of\nanything that might still be around) and try sending a signal to any\nprocesses that are left? 
Those signals would hopefully get delivered\nand the other backends would discover the signal through CFI and exit\nreasonably quickly.\n\nThe other thought I had was around trying to check for postmaster death\nwhen we're about to do some I/O, which would probably catch a large\nnumber of these cases too though technically some process might stick\naround for a while if it's only dealing with things that are already in\nshared buffers, I suppose. Also seems complicated and expensive to do.\n\n> The if-we're-going-to-delay-anyway path in vacuum_delay_point seems\n> OK to add a touch more overhead to, though.\n\nYeah, this certainly seems reasonable to do too and on a well run system\nwould likely be enough 90+% of the time.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 30 Oct 2020 11:07:07 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
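The fan-out idea sketched above — one process notices postmaster death, signals the surviving sibling PIDs, and their next CHECK_FOR_INTERRUPTS() exits — can be mimicked in miniature by signalling our own PID. All names here are hypothetical stand-ins; real backends set flags like ProcDiePending in their handlers:

```python
import os
import signal

interrupt_pending = False              # what a CFI-style check would poll

def die_handler(signum, frame):
    global interrupt_pending
    interrupt_pending = True           # stand-in for ProcDiePending

signal.signal(signal.SIGUSR1, die_handler)

# Stand-in for scanning the (possibly stale) proc array: signal every
# surviving PID, tolerating entries for processes that are already gone.
surviving_pids = [os.getpid()]
for pid in surviving_pids:
    try:
        os.kill(pid, signal.SIGUSR1)
    except ProcessLookupError:
        pass                           # stale slot; nothing to do

assert interrupt_pending               # the next "CFI" would now bail out
print("interrupt flag set; next interrupt check would exit")
```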
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2020-Oct-29, Stephen Frost wrote:\n> >> I do think it'd be good to find a way to check every once in a while\n> >> even when we aren't going to delay though. Not sure what the best\n> >> answer there is.\n> \n> > Maybe instead of thinking specifically in terms of vacuum, we could\n> > count buffer accesses (read from kernel) and check the latch once every\n> > 1000th such, or something like that. Then a very long query doesn't\n> > have to wait until it's run to completion. The cost is one integer\n> > addition per syscall, which should be bearable.\n> \n> I'm kind of unwilling to add any syscalls at all to normal execution\n> code paths for this purpose. People shouldn't be sig-kill'ing the\n> postmaster, or if they do, cleaning up the mess is their responsibility.\n> I'd also suggest that adding nearly-untestable code paths for this\n> purpose is a fine way to add bugs we'll never catch.\n> \n> The if-we're-going-to-delay-anyway path in vacuum_delay_point seems\n> OK to add a touch more overhead to, though.\n\nAlright, for this part at least, seems like it'd be something like the\nattached.\n\nOnly lightly tested, but does seem to address the specific example which\nwas brought up on this thread.\n\nThoughts..?\n\nThanks,\n\nStephen",
"msg_date": "Thu, 10 Dec 2020 13:57:13 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 5:36 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Maybe instead of thinking specifically in terms of vacuum, we could\n> count buffer accesses (read from kernel) and check the latch once every\n> 1000th such, or something like that. Then a very long query doesn't\n> have to wait until it's run to completion. The cost is one integer\n> addition per syscall, which should be bearable.\n\nInteresting idea. One related case is where everything is fine on the\nserver side but the client has disconnected and we don't notice that\nthe socket has changed state until something makes us try to send a\nmessage to the client, which might be a really long time if the\nserver's doing like a lengthy computation before generating any rows.\nIt would be really nice if we could find a cheap way to check for both\npostmaster death and client disconnect every now and then, like if a\nsingle system call could somehow answer both questions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 10 Dec 2020 14:34:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 8:34 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Oct 29, 2020 at 5:36 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Maybe instead of thinking specifically in terms of vacuum, we could\n> > count buffer accesses (read from kernel) and check the latch once every\n> > 1000th such, or something like that. Then a very long query doesn't\n> > have to wait until it's run to completion. The cost is one integer\n> > addition per syscall, which should be bearable.\n>\n> Interesting idea. One related case is where everything is fine on the\n> server side but the client has disconnected and we don't notice that\n> the socket has changed state until something makes us try to send a\n> message to the client, which might be a really long time if the\n> server's doing like a lengthy computation before generating any rows.\n> It would be really nice if we could find a cheap way to check for both\n> postmaster death and client disconnect every now and then, like if a\n> single system call could somehow answer both questions.\n\nFor the record, an alternative approach was proposed[1] that\nperiodically checks for disconnected sockets using a timer, that will\nthen cause the next CFI() to abort.\n\nDoing the check (a syscall) based on elapsed time rather than every\nnth CFI() or buffer access or whatever seems better in some ways,\nconsidering the difficulty of knowing what the frequency will be. One\nof the objections was that it added unacceptable setitimer() calls.\nWe discussed an idea to solve that problem generally, and then later I\nprototyped that idea in another thread[2] about idle session timeouts\n(not sure about that yet, comments welcome).\n\nI've also wondered about checking postmaster_possibly_dead in CFI() on\nplatforms where we have it (and working to increase that set of\nplatforms), instead of just reacting to PM death when sleeping. 
But\nit seems like the real problem in this specific case is the use of\npg_usleep() where WaitLatch() should be used, no?\n\nThe recovery loop is at the opposite end of the spectrum: while vacuum\ndoesn't check for postmaster death often enough, the recovery loop\nchecks potentially hundreds of thousands or millions of times per\nsecond, which sucks on systems that don't have parent-death signals\nand slows down recovery quite measurably. In the course of the\ndiscussion about fixing that[3] we spotted other places that are using\na pg_usleep() where they ought to be using WaitLatch() (which comes\nwith exit-on-PM-death behaviour built-in). By the way, the patch in\nthat thread does almost what Robert described, namely check for PM\ndeath every nth time (which in this case means every nth WAL record),\nexcept it's not in the main CFI(), it's in a special variant used just\nfor recovery.\n\n[1] https://www.postgresql.org/message-id/flat/77def86b27e41f0efcba411460e929ae%40postgrespro.ru\n[2] https://www.postgresql.org/message-id/flat/763A0689-F189-459E-946F-F0EC4458980B@hotmail.com\n[3] https://www.postgresql.org/message-id/flat/CA+hUKGK1607VmtrDUHQXrsooU=ap4g4R2yaoByWOOA3m8xevUQ@mail.gmail.com\n\n\n",
"msg_date": "Fri, 11 Dec 2020 15:02:51 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
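The timer-based alternative Thomas mentions from [1] can be sketched the same way: a periodic timer performs the expensive connectivity check off the hot path and sets a flag, which the query loop tests at its existing interrupt-check points. A conceptual Python analogue, not the actual patch:

```python
import threading
import time

check_fired = threading.Event()

def periodic_liveness_check():
    # Stand-in for "poll the client socket / postmaster pipe" -- here it
    # simply reports a failure so the loop below has something to notice.
    check_fired.set()

timer = threading.Timer(0.05, periodic_liveness_check)
timer.start()

iterations = 0
while not check_fired.is_set():        # cheap flag test, CFI-style
    iterations += 1                    # stand-in for per-tuple work
    time.sleep(0.001)

timer.cancel()
assert check_fired.is_set()
print(f"loop noticed the condition after {iterations} iterations")
```

The attraction is that the per-iteration cost is a flag test rather than a syscall, at the price of arming a timer; the objection raised in [1] was the extra setitimer() traffic that arming implies.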
{
"msg_contents": "On Fri, Dec 11, 2020 at 7:57 AM Stephen Frost <sfrost@snowman.net> wrote:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > The if-we're-going-to-delay-anyway path in vacuum_delay_point seems\n> > OK to add a touch more overhead to, though.\n>\n> Alright, for this part at least, seems like it'd be something like the\n> attached.\n>\n> Only lightly tested, but does seem to address the specific example which\n> was brought up on this thread.\n>\n> Thoughts..?\n\n+1\n\n\n",
"msg_date": "Tue, 23 Feb 2021 11:32:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> On Fri, Dec 11, 2020 at 7:57 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > > The if-we're-going-to-delay-anyway path in vacuum_delay_point seems\n> > > OK to add a touch more overhead to, though.\n> >\n> > Alright, for this part at least, seems like it'd be something like the\n> > attached.\n> >\n> > Only lightly tested, but does seem to address the specific example which\n> > was brought up on this thread.\n> >\n> > Thoughts..?\n> \n> +1\n\nThanks for that. Attached is just a rebased version with a commit\nmessage added. If there aren't any other concerns, I'll commit this in\nthe next few days and back-patch it. When it comes to 12 and older,\ndoes anyone want to opine about the wait event to use? I was thinking\nPG_WAIT_TIMEOUT or WAIT_EVENT_PG_SLEEP ...\n\nOr do folks think this shouldn't be backpatched? That would mean it\nwouldn't help anyone for years, which would be pretty unfortuante, hence\nmy feeling that it's worthwhile to backpatch.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 22 Mar 2021 13:48:26 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "On Mon, Mar 22, 2021 at 1:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Thanks for that. Attached is just a rebased version with a commit\n> message added. If there aren't any other concerns, I'll commit this in\n> the next few days and back-patch it. When it comes to 12 and older,\n> does anyone want to opine about the wait event to use? I was thinking\n> PG_WAIT_TIMEOUT or WAIT_EVENT_PG_SLEEP ...\n\nI'm not sure if we should back-patch this, but I think if you do you\nshould just add a wait event, rather than using a generic one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Mar 2021 16:07:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "On Mon, Mar 22, 2021 at 04:07:12PM -0400, Robert Haas wrote:\n> On Mon, Mar 22, 2021 at 1:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> Thanks for that. Attached is just a rebased version with a commit\n>> message added. If there aren't any other concerns, I'll commit this in\n>> the next few days and back-patch it. When it comes to 12 and older,\n>> does anyone want to opine about the wait event to use? I was thinking\n>> PG_WAIT_TIMEOUT or WAIT_EVENT_PG_SLEEP ...\n> \n> I'm not sure if we should back-patch this, but I think if you do you\n> should just add a wait event, rather than using a generic one.\n\nI would not back-patch that either, as this is an improvement of the\ncurrent state. I agree that this had better introduce a new wait\nevent. Even if this stuff gets backpatched, you won't introduce an\nABI incompatibility with a new event as long as you add the new event\nat the end of the existing enum lists, but let's keep the wait events\nordered on HEAD.\n--\nMichael",
"msg_date": "Wed, 24 Mar 2021 12:25:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
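The ABI point Michael makes about enum ordering is easy to see concretely: wait-event values are assigned positionally, so an extension compiled against the old headers keeps working only if new events are appended at the end. A sketch with invented event names:

```python
from enum import IntEnum

class WaitEventOld(IntEnum):           # what extensions were compiled against
    PG_SLEEP = 0
    RECOVERY_APPLY_DELAY = 1

class WaitEventAppended(IntEnum):      # safe: old values are unchanged
    PG_SLEEP = 0
    RECOVERY_APPLY_DELAY = 1
    VACUUM_DELAY = 2

class WaitEventInserted(IntEnum):      # unsafe: the middle insertion
    PG_SLEEP = 0                       # renumbers everything after it
    VACUUM_DELAY = 1
    RECOVERY_APPLY_DELAY = 2

assert WaitEventAppended.RECOVERY_APPLY_DELAY == WaitEventOld.RECOVERY_APPLY_DELAY
assert WaitEventInserted.RECOVERY_APPLY_DELAY != WaitEventOld.RECOVERY_APPLY_DELAY
print("appending preserves existing numbering; inserting breaks it")
```

Hence the suggestion: back branches may append at the end for ABI safety, while HEAD keeps the list in its usual order.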
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Mon, Mar 22, 2021 at 04:07:12PM -0400, Robert Haas wrote:\n> > On Mon, Mar 22, 2021 at 1:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >> Thanks for that. Attached is just a rebased version with a commit\n> >> message added. If there aren't any other concerns, I'll commit this in\n> >> the next few days and back-patch it. When it comes to 12 and older,\n> >> does anyone want to opine about the wait event to use? I was thinking\n> >> PG_WAIT_TIMEOUT or WAIT_EVENT_PG_SLEEP ...\n> > \n> > I'm not sure if we should back-patch this, but I think if you do you\n> > should just add a wait event, rather than using a generic one.\n> \n> I would not back-patch that either, as this is an improvement of the\n> current state. I agree that this had better introduce a new wait\n> event. Even if this stuff gets backpatched, you won't introduce an\n> ABI incompatibility with a new event as long as you add the new event\n> at the end of the existing enum lists, but let's keep the wait events\n> ordered on HEAD.\n\nAdding CFI's in places that really should have them is something we\ncertainly have back-patched in the past, and that's just 'an improvement\nof the current state' too, so I don't quite follow the argument being\nmade here that this shouldn't be back-patched.\n\nI don't have any problem with adding into the older releases, at the end\nof the existing lists, the same wait event that exists in 13+ for this\nalready.\n\nAny other thoughts on this, particularly about back-patching or not..?\n\nThanks,\n\nStephen",
"msg_date": "Wed, 24 Mar 2021 07:46:26 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Michael Paquier (michael@paquier.xyz) wrote:\n> > On Mon, Mar 22, 2021 at 04:07:12PM -0400, Robert Haas wrote:\n> > > On Mon, Mar 22, 2021 at 1:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > >> Thanks for that. Attached is just a rebased version with a commit\n> > >> message added. If there aren't any other concerns, I'll commit this in\n> > >> the next few days and back-patch it. When it comes to 12 and older,\n> > >> does anyone want to opine about the wait event to use? I was thinking\n> > >> PG_WAIT_TIMEOUT or WAIT_EVENT_PG_SLEEP ...\n> > > \n> > > I'm not sure if we should back-patch this, but I think if you do you\n> > > should just add a wait event, rather than using a generic one.\n> > \n> > I would not back-patch that either, as this is an improvement of the\n> > current state. I agree that this had better introduce a new wait\n> > event. Even if this stuff gets backpatched, you won't introduce an\n> > ABI incompatibility with a new event as long as you add the new event\n> > at the end of the existing enum lists, but let's keep the wait events\n> > ordered on HEAD.\n> \n> Adding CFI's in places that really should have them is something we\n> certainly have back-patched in the past, and that's just 'an improvement\n> of the current state' too, so I don't quite follow the argument being\n> made here that this shouldn't be back-patched.\n> \n> I don't have any problem with adding into the older releases, at the end\n> of the existing lists, the same wait event that exists in 13+ for this\n> already.\n> \n> Any other thoughts on this, particularly about back-patching or not..?\n\nWe seem to be at a bit of an impasse on this regarding back-patching,\nwhich seems unfortunate to me, but without someone else commenting it\nseems like it's stalled.\n\nI'll go ahead and push the change to HEAD soon, as there doesn't seem to\nbe any contention regarding that.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 28 Mar 2021 11:42:27 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Stephen Frost (sfrost@snowman.net) wrote:\n> > * Michael Paquier (michael@paquier.xyz) wrote:\n> > > On Mon, Mar 22, 2021 at 04:07:12PM -0400, Robert Haas wrote:\n> > > > On Mon, Mar 22, 2021 at 1:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > >> Thanks for that. Attached is just a rebased version with a commit\n> > > >> message added. If there aren't any other concerns, I'll commit this in\n> > > >> the next few days and back-patch it. When it comes to 12 and older,\n> > > >> does anyone want to opine about the wait event to use? I was thinking\n> > > >> PG_WAIT_TIMEOUT or WAIT_EVENT_PG_SLEEP ...\n> > > > \n> > > > I'm not sure if we should back-patch this, but I think if you do you\n> > > > should just add a wait event, rather than using a generic one.\n> > > \n> > > I would not back-patch that either, as this is an improvement of the\n> > > current state. I agree that this had better introduce a new wait\n> > > event. 
Even if this stuff gets backpatched, you won't introduce an\n> > > ABI incompatibility with a new event as long as you add the new event\n> > > at the end of the existing enum lists, but let's keep the wait events\n> > > ordered on HEAD.\n> > \n> > Adding CFI's in places that really should have them is something we\n> > certainly have back-patched in the past, and that's just 'an improvement\n> > of the current state' too, so I don't quite follow the argument being\n> > made here that this shouldn't be back-patched.\n> > \n> > I don't have any problem with adding into the older releases, at the end\n> > of the existing lists, the same wait event that exists in 13+ for this\n> > already.\n> > \n> > Any other thoughts on this, particularly about back-patching or not..?\n> \n> We seem to be at a bit of an impasse on this regarding back-patching,\n> which seems unfortunate to me, but without someone else commenting it\n> seems like it's stalled.\n> \n> I'll go ahead and push the change to HEAD soon, as there doesn't seem to\n> be any contention regarding that.\n\nDone.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 30 Mar 2021 12:55:06 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker doesn't immediately exit on postmaster death"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently, ExecInitAgg() performs quite a lot of work, to deduplicate \nidentical Aggrefs, as well as Aggrefs that can share the same transition \nstate. That doesn't really belong in the executor, we should perform \nthat work in the planner. It doesn't change from one invocation of the \nplan to another, and it would be nice to reflect the state-sharing in \nthe plan costs.\n\nAttached is a patch to do that. It adds two new fields to Aggref, \n'aggno' and 'aggtransno', to identify the unique aggregate and \ntransition states. The duplicates are detected, and those filled in, \nearly in the planning. Aside from those fields, the planner doesn't pass \nany other new information to to the executor, so the the executor still \nhas to do syscache lookups to get the transition, combine etc. functions.\n\nI tried a bigger refactoring at first, to pass more information from the \nplanner to the executor, but the patch grew really large before I got \nvery far with it. So as the first step, I think we should apply the \nattached patch, and further refactoring can be done after that, if it \nseems worthwhile.\n\nThere is one known regression failure, in the 'partition_aggregate' \ntest, which is caused by a plan change in one of the queries. The query \ncontains a few aggregates, and the planner now detects that some of them \nare identical, which changed the cost estimates, making a different plan \nlook cheaper. That's easy to fix, although I'm not sure yet if we should \naccept the new plan, or change the query to still get the old plan.\n\n- Heikki",
"msg_date": "Wed, 28 Oct 2020 21:10:41 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Deduplicate aggregates and transition functions in planner"
},
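The deduplication the patch moves into the planner can be modeled abstractly: assign each distinct Aggref a unique aggno, and let Aggrefs whose transition step and inputs match share an aggtransno even when their final functions differ (the classic case being AVG(x) sharing an accumulator with SUM(x)). A toy Python model — the keys and names are invented for illustration; the real logic keys on transition-function OID, input expressions, and so on:

```python
# Each aggregate call is (final-kind, input); its transition state is
# keyed so that sum/avg over the same input share one accumulator.
aggrefs = [("sum", "x"), ("avg", "x"), ("sum", "x"), ("count", "y")]
trans_kind = {"sum": "accum", "avg": "accum", "count": "inc"}

agg_ids = {}       # distinct Aggref      -> aggno
trans_ids = {}     # distinct trans state -> aggtransno
plan = []
for kind, arg in aggrefs:
    aggno = agg_ids.setdefault((kind, arg), len(agg_ids))
    transno = trans_ids.setdefault((trans_kind[kind], arg), len(trans_ids))
    plan.append((aggno, transno))

# sum(x) and avg(x) share transition state 0; the duplicate sum(x)
# collapses entirely onto aggno 0.
assert plan == [(0, 0), (1, 0), (0, 0), (2, 1)]
print(plan)
```

Four Aggrefs thus need only three per-agg structs and two transition states, which is exactly the information the new aggno/aggtransno fields let the planner hand to the executor.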
{
"msg_contents": "Hi,\n\nOn 2020-10-28 21:10:41 +0200, Heikki Linnakangas wrote:\n> Currently, ExecInitAgg() performs quite a lot of work, to deduplicate\n> identical Aggrefs, as well as Aggrefs that can share the same transition\n> state. That doesn't really belong in the executor, we should perform that\n> work in the planner. It doesn't change from one invocation of the plan to\n> another, and it would be nice to reflect the state-sharing in the plan\n> costs.\n\nWoo! Very glad to see this tackled.\n\nIt wouldn't surprise me to see a small execution time speedup here -\nI've seen the load of the aggno show up in profiles.\n\n\n> Attached is a patch to do that. It adds two new fields to Aggref, 'aggno'\n> and 'aggtransno', to identify the unique aggregate and transition states.\n> The duplicates are detected, and those filled in, early in the planning.\n> Aside from those fields, the planner doesn't pass any other new information\n> to to the executor, so the the executor still has to do syscache lookups to\n> get the transition, combine etc. functions.\n\n> I tried a bigger refactoring at first, to pass more information from the\n> planner to the executor, but the patch grew really large before I got very\n> far with it. So as the first step, I think we should apply the attached\n> patch, and further refactoring can be done after that, if it seems\n> worthwhile.\n\nWorking incrementally makes sense.\n\n\n\n> @@ -783,14 +783,13 @@ ExecInitExprRec(Expr *node, ExprState *state,\n> \n> \t\t\t\tscratch.opcode = EEOP_AGGREF;\n> \t\t\t\tscratch.d.aggref.astate = astate;\n> -\t\t\t\tastate->aggref = aggref;\n> +\t\t\t\tastate->aggno = aggref->aggno;\n> \n> \t\t\t\tif (state->parent && IsA(state->parent, AggState))\n> \t\t\t\t{\n> \t\t\t\t\tAggState *aggstate = (AggState *) state->parent;\n> \n> -\t\t\t\t\taggstate->aggs = lappend(aggstate->aggs, astate);\n> -\t\t\t\t\taggstate->numaggs++;\n> +\t\t\t\t\taggstate->aggs = lappend(aggstate->aggs, aggref);\n\nHm. 
Why is aggstate->aggs still built during expression initialization?\nImo that's a pretty huge wart that also introduces more\norder-of-operation brittleness to executor startup.\n\nSince AggRef now knows its aggno, the state for EEOP_AGGREF should be\nchanged to be just an int, instead of a pointer to AggrefExprState..\n\n\n> @@ -3432,8 +3426,18 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n> \t/*\n> \t * We should now have found all Aggrefs in the targetlist and quals.\n> \t */\n> -\tnumaggs = aggstate->numaggs;\n> -\tAssert(numaggs == list_length(aggstate->aggs));\n> +\tnumaggrefs = list_length(aggstate->aggs);\n> +\tmax_aggno = -1;\n> +\tmax_transno = -1;\n> +\tforeach(l, aggstate->aggs)\n> +\t{\n> +\t\tAggref\t *aggref = (Aggref *) lfirst(l);\n> +\n> +\t\tmax_aggno = Max(max_aggno, aggref->aggno);\n> +\t\tmax_transno = Max(max_transno, aggref->aggtransno);\n> +\t}\n> +\tnumaggs = max_aggno + 1;\n> +\tnumtrans = max_transno + 1;\n\nWe must have previously determined this, why don't we stash it in struct\nAgg?\n\n> --- a/src/backend/jit/llvm/llvmjit_expr.c\n> +++ b/src/backend/jit/llvm/llvmjit_expr.c\n> @@ -1850,19 +1850,11 @@ llvm_compile_expr(ExprState *state)\n> \t\t\tcase EEOP_AGGREF:\n> \t\t\t\t{\n> \t\t\t\t\tAggrefExprState *aggref = op->d.aggref.astate;\n> -\t\t\t\t\tLLVMValueRef v_aggnop;\n> \t\t\t\t\tLLVMValueRef v_aggno;\n> \t\t\t\t\tLLVMValueRef value,\n> \t\t\t\t\t\t\t\tisnull;\n> \n> -\t\t\t\t\t/*\n> -\t\t\t\t\t * At this point aggref->aggno is not yet set (it's set up\n> -\t\t\t\t\t * in ExecInitAgg() after initializing the expression). 
So\n> -\t\t\t\t\t * load it from memory each time round.\n> -\t\t\t\t\t */\n> -\t\t\t\t\tv_aggnop = l_ptr_const(&aggref->aggno,\n> -\t\t\t\t\t\t\t\t\t\t l_ptr(LLVMInt32Type()));\n> -\t\t\t\t\tv_aggno = LLVMBuildLoad(b, v_aggnop, \"v_aggno\");\n> +\t\t\t\t\tv_aggno = l_int32_const(aggref->aggno);\n\nYay!\n\n\n> +/*\n> + * get_agg_clause_costs\n> + *\t Recursively find the Aggref nodes in an expression tree, and\n> + *\t accumulate cost information about them.\n\nThink this comment is out of date now.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Oct 2020 12:59:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Deduplicate aggregates and transition functions in planner"
},
{
"msg_contents": "On 28/10/2020 21:59, Andres Freund wrote:\n> On 2020-10-28 21:10:41 +0200, Heikki Linnakangas wrote:\n>> Currently, ExecInitAgg() performs quite a lot of work, to deduplicate\n>> identical Aggrefs, as well as Aggrefs that can share the same transition\n>> state. That doesn't really belong in the executor, we should perform that\n>> work in the planner. It doesn't change from one invocation of the plan to\n>> another, and it would be nice to reflect the state-sharing in the plan\n>> costs.\n> \n> Woo! Very glad to see this tackled.\n> \n> It wouldn't surprise me to see a small execution time speedup here -\n> I've seen the load of the aggno show up in profiles.\n\nI think you'd be hard-pressed to find a real-life query where it \nmatters. But if you don't care about real life:\n\nregression=# do $$\nbegin\n for i in 1..100000 loop\n perform sum(g), sum(g+0), sum(g+1), sum(g+2), sum(g+3), sum(g+4), \nsum(g+5), sum(g+6), sum(g+7), sum(g+8), sum(g+9), sum(g+10) from \ngenerate_series(1,1) g;\n end loop;\nend;\n$$;\nDO\nTime: 1282.701 ms (00:01.283)\n\nvs.\n\nTime: 860.323 ms\n\nwith the patch.\n\n>> @@ -783,14 +783,13 @@ ExecInitExprRec(Expr *node, ExprState *state,\n>> \n>> \t\t\t\tscratch.opcode = EEOP_AGGREF;\n>> \t\t\t\tscratch.d.aggref.astate = astate;\n>> -\t\t\t\tastate->aggref = aggref;\n>> +\t\t\t\tastate->aggno = aggref->aggno;\n>> \n>> \t\t\t\tif (state->parent && IsA(state->parent, AggState))\n>> \t\t\t\t{\n>> \t\t\t\t\tAggState *aggstate = (AggState *) state->parent;\n>> \n>> -\t\t\t\t\taggstate->aggs = lappend(aggstate->aggs, astate);\n>> -\t\t\t\t\taggstate->numaggs++;\n>> +\t\t\t\t\taggstate->aggs = lappend(aggstate->aggs, aggref);\n> \n> Hm. Why is aggstate->aggs still built during expression initialization?\n> Imo that's a pretty huge wart that also introduces more\n> order-of-operation brittleness to executor startup.\n\nThe Agg node itself doesn't include any information about the aggregates \nand transition functions. 
Because of that, ExecInitAgg needs a \n\"representive\" Aggref for each transition state and agg, to initialize \nthe per-trans and per-agg structs. The expression initialization makes \nthose Aggrefs available for ExecInitAgg.\n\nInstead of collecting all the Aggrefs in a list, ExecInitExprRec() could \nset each representative Aggref directly in the right per-trans and \nper-agg struct, based on the 'aggno' and 'aggtransno' fields. That \nrequires initializing the per-trans and per-agg arrays earlier, and for \nthat, we would need to store the # of aggs and transition states in the \nAgg node, like you also suggested. Certainly doable, but on the whole, \nit didn't really seem better to me. Attached is a patch, to demonstrate \nwhat that looks like, on top of the main patch. It's not complete, \nthere's at least one case with hash-DISTINCT for queries like \"SELECT \nDISTINCT aggregate(x) ...\" where the planner creates an Agg for the \nDISTINCT without aggregates, but the code currently passes numAggs=1 to \nthe executor. Some further changes would be needed in the planner, to \nmark the AggPath generated for deduplication differently from the \nAggPaths created for aggregation. Again that's doable, but on the whole \nI prefer the approach to scan the Aggrefs in executor startup, for now.\n\nI'd like to get rid of the \"representative Aggrefs\" altogether, and pass \ninformation about the transition and final functions from planner to \nexecutor in some other form. But that's exactly what got me into the \nrefactoring that was ballooning out of hand that I mentioned.\n\n- Heikki",
"msg_date": "Thu, 29 Oct 2020 10:17:20 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Deduplicate aggregates and transition functions in planner"
},
{
"msg_contents": "Hi,\n\nOn 2020-10-29 10:17:20 +0200, Heikki Linnakangas wrote:\n> On 28/10/2020 21:59, Andres Freund wrote:\n> > It wouldn't surprise me to see a small execution time speedup here -\n> > I've seen the load of the aggno show up in profiles.\n> \n> I think you'd be hard-pressed to find a real-life query where it\n> matters. But if you don't care about real life:\n\nI was actually thinking about a different angle - that the evaluation of\nan Aggref can be faster, because we need less indirection to find the\naggno. As you have already implemented for the JITed code, but removing\nit for the expression code looks easy enough too. You'd need a lot of\ngroups and presumably a fair number of Aggrefs to see it.\n\nAttached is a quick version of what I am thinking wrt AggrefExprState.\n\n\n> > > @@ -783,14 +783,13 @@ ExecInitExprRec(Expr *node, ExprState *state,\n> > > \t\t\t\tscratch.opcode = EEOP_AGGREF;\n> > > \t\t\t\tscratch.d.aggref.astate = astate;\n> > > -\t\t\t\tastate->aggref = aggref;\n> > > +\t\t\t\tastate->aggno = aggref->aggno;\n> > > \t\t\t\tif (state->parent && IsA(state->parent, AggState))\n> > > \t\t\t\t{\n> > > \t\t\t\t\tAggState *aggstate = (AggState *) state->parent;\n> > > -\t\t\t\t\taggstate->aggs = lappend(aggstate->aggs, astate);\n> > > -\t\t\t\t\taggstate->numaggs++;\n> > > +\t\t\t\t\taggstate->aggs = lappend(aggstate->aggs, aggref);\n> > \n> > Hm. Why is aggstate->aggs still built during expression initialization?\n> > Imo that's a pretty huge wart that also introduces more\n> > order-of-operation brittleness to executor startup.\n> \n> The Agg node itself doesn't include any information about the aggregates and\n> transition functions. Because of that, ExecInitAgg needs a \"representive\"\n> Aggref for each transition state and agg, to initialize the per-trans and\n> per-agg structs. 
The expression initialization makes those Aggrefs available\n> for ExecInitAgg.\n\n> Instead of collecting all the Aggrefs in a list, ExecInitExprRec() could set\n> each representative Aggref directly in the right per-trans and per-agg\n> struct, based on the 'aggno' and 'aggtransno' fields.\n\nHold on a second: To me the question is why is it the right design that\nthe Agg node doesn't have the information about \"aggregates and\ntransition functions\"? Agg e.g. already directly contains the group\nkeys...\n\nMy concern wouldn't really be addressed if we replace the lappend() in\nExecInitExprRec() with setting something \"directly in the right\nper-trans...\". I think it's structurally wrong to have to discover\nAggrefs at execution time at all.\n\nPerhaps the easiest incremental step would be to have something like a\nCookedAggref { int aggno; } and then just store the Aggref nodes in\nAgg->aggs, with aggno referencing that...\n\n\n> I'd like to get rid of the \"representative Aggrefs\" altogether, and pass\n> information about the transition and final functions from planner to\n> executor in some other form. But that's exactly what got me into the\n> refactoring that was ballooning out of hand that I mentioned.\n\nFair.\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 29 Oct 2020 10:48:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Deduplicate aggregates and transition functions in planner"
},
{
"msg_contents": "On 29/10/2020 19:48, Andres Freund wrote:\n> On 2020-10-29 10:17:20 +0200, Heikki Linnakangas wrote:\n>> On 28/10/2020 21:59, Andres Freund wrote:\n>>> It wouldn't surprise me to see a small execution time speedup here -\n>>> I've seen the load of the aggno show up in profiles.\n>>\n>> I think you'd be hard-pressed to find a real-life query where it\n>> matters. But if you don't care about real life:\n> \n> I was actually thinking about a different angle - that the evaluation of\n> an Aggref can be faster, because we need less indirection to find the\n> aggno. As you have already implemented for the JITed code, but removing\n> it for the expression code looks easy enough too. You'd need a lot of\n> groups and presumably a fair number of Aggrefs to see it.\n> \n> Attached is a quick version of what I am thinking wrt AggrefExprState.\n\nAh, I see, makes sense.\n\n>> The Agg node itself doesn't include any information about the aggregates and\n>> transition functions. Because of that, ExecInitAgg needs a \"representive\"\n>> Aggref for each transition state and agg, to initialize the per-trans and\n>> per-agg structs. The expression initialization makes those Aggrefs available\n>> for ExecInitAgg.\n> \n>> Instead of collecting all the Aggrefs in a list, ExecInitExprRec() could set\n>> each representative Aggref directly in the right per-trans and per-agg\n>> struct, based on the 'aggno' and 'aggtransno' fields.\n> \n> Hold on a second: To me the question is why is it the right design that\n> the Agg node doesn't have the information about \"aggregates and\n> transition functions\"? Agg e.g. already does directly contains the group\n> keys...\n> \n> My concern wouldn't really be addressed if we replace the lappend() in\n> ExecInitExprRec() with setting something \"directly in the right\n> per-trans...\". 
I think it's structurally wrong to have to discover\n> Aggrefs at execution time at all.\n> \n> Perhaps the easiest incremental step would be to have something like a\n> CookedAggref { int aggno; } and then just store the Aggref nodes in\n> Agg->aggs, with aggno referencing that...\n\nI started hacking on that CookedAggref approach, but it wasn't as simple \nas it seemed. I tried to replace the Aggrefs with CookedAggrefs in \nsetrefs.c, but when set_plan_references() replaces expressions with Vars \nreferring to the output of a subnode, it needs to be able to match an \nAggref at an upper node to a CookedAggref on the node below. \nFurthermore, the deparsing code in ruleutils.c needs to be able to find \nthe original Aggrefs, in order to print them nicely. All of that is \nsolvable, I'm sure, but it's not trivial. And it's new code that mostly \nbuilds on top of attached patch, so I think that can be done separately \nlater, it doesn't need to block this patch.\n\nSo barring objections, I'm going to push the attached updated patch that \nincludes the removal of AggrefExprState, and leave CookedAggrefs or \nother further refactorings for the future.\n\n- Heikki",
"msg_date": "Thu, 19 Nov 2020 12:38:51 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Deduplicate aggregates and transition functions in planner"
},
{
"msg_contents": "On 19/11/2020 12:38, Heikki Linnakangas wrote:\n> So barring objections, I'm going to push the attached updated patch that\n> includes the removal of AggrefExprState, and leave CookedAggrefs or\n> other further refactorings for the future.\n\nDone. Thanks!\n\n- Heikki\n\n\n",
"msg_date": "Tue, 24 Nov 2020 10:54:39 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Deduplicate aggregates and transition functions in planner"
}
] |
[
{
"msg_contents": "Hello,\n\nIf the fast shutdown was initiated before pg_prewarm managed to load\nbuffers from the dump (and start the main loop), the pg_prewarm\nbgworker process never exits on SIGTERM, effectively preventing the\nclean shutdown of the cluster.\n\nThis problem bit me a few times, but yesterday I managed to attach to\nthe pg_prewarm process and got a stacktrace:\n(gdb) bt\n#0  0x00007f394d788d27 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6\n#1  0x000056059d6412f9 in WaitEventSetWaitBlock (nevents=1, occurred_events=0x7ffc598f2b00, cur_timeout=-1, set=0x56059f5757d8) at ./build/../src/backend/storage/ipc/latch.c:1048\n#2  WaitEventSetWait (set=set@entry=0x56059f5757d8, timeout=timeout@entry=-1, occurred_events=occurred_events@entry=0x7ffc598f2b00, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=134217728) at ./build/../src/backend/storage/ipc/latch.c:1000\n#3  0x000056059d641748 in WaitLatchOrSocket (latch=0x7f393ec32164, wakeEvents=wakeEvents@entry=17, sock=sock@entry=-1, timeout=-1, timeout@entry=0, wait_event_info=wait_event_info@entry=134217728) at ./build/../src/backend/storage/ipc/latch.c:385\n#4  0x000056059d641805 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=17, timeout=timeout@entry=0, wait_event_info=wait_event_info@entry=134217728) at ./build/../src/backend/storage/ipc/latch.c:339\n#5  0x000056059d5e1d40 in WaitForBackgroundWorkerShutdown (handle=0x56059f57e9b0) at ./build/../src/backend/postmaster/bgworker.c:1153\n#6  0x00007f3944e1a180 in apw_start_database_worker () at ./build/../contrib/pg_prewarm/autoprewarm.c:866\n#7  0x00007f3944e1a739 in apw_load_buffers () at ./build/../contrib/pg_prewarm/autoprewarm.c:404\n#8  autoprewarm_main (main_arg=<optimized out>) at ./build/../contrib/pg_prewarm/autoprewarm.c:203\n#9  0x000056059d5e16ee in StartBackgroundWorker () at ./build/../src/backend/postmaster/bgworker.c:834\n#10 0x000056059d5ed58c in do_start_bgworker (rw=0x56059f56cd10) at ./build/../src/backend/postmaster/postmaster.c:5713\n#11 maybe_start_bgworkers () at ./build/../src/backend/postmaster/postmaster.c:5939\n#12 0x000056059d5ee02d in sigusr1_handler (postgres_signal_arg=<optimized out>) at ./build/../src/backend/postmaster/postmaster.c:5086\n#13 <signal handler called>\n#14 0x00007f394d77e0f7 in select () from /lib/x86_64-linux-gnu/libc.so.6\n#15 0x000056059d5ee58b in ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1671\n#16 0x000056059d5f038d in PostmasterMain (argc=17, argv=0x56059f51a080) at ./build/../src/backend/postmaster/postmaster.c:1380\n#17 0x000056059d37a992 in main (argc=17, argv=0x56059f51a080) at ./build/../src/backend/main/main.c:228\n\nIt has happened on 11.9, but after looking at HEAD I think the problem\nstill exists.\n\nRegards,\n--\nAlexander Kukushkin\n\n\n",
"msg_date": "Wed, 28 Oct 2020 20:43:48 +0100",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_prewarm bgworker could break fast shutdown"
},
{
"msg_contents": "Alexander Kukushkin <cyberdemn@gmail.com> writes:\n> I the fast shutdown was initiated before pg_prewarm managed to load\n> buffers from the dump (and start the main loop), the pg_prewarm\n> bgworker process never exits on SIGTERM and effectively preventing the\n> clean shutdown of the cluster.\n\nI might be wrong about this, but I suspect what you've got here is that\nthe postmaster never launched the child bgworker (and now never will\nlaunch it), so GetBackgroundWorkerPid returns BGWH_NOT_YET_STARTED\nand then WaitForBackgroundWorkerShutdown keeps on waiting. If that\ninterpretation is accurate then the same problem could occur with\nparallel query. (And I believe it's been sufficiently demonstrated\nthat parallel query falls over very easily in such corner cases,\nso this isn't an astonishing conclusion.)\n\nI'm inclined to think what we need to do about this is to have\nthe postmaster transition all pending worker-start requests into\nSTOPPED state, or some new FAILED state, when it starts trying to\nshut stuff down. I sure don't see any such logic there now ---\nit just sends out a bunch of SIGTERMs and that's it. Also,\nit looks like bgworker_should_start_now() doesn't distinguish\npostmaster states that don't allow starting a bgworker right\nthis moment, but probably will allow it later, from states in\nwhich it never will be allowed and we need to fail the request\nnot just postpone it indefinitely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Oct 2020 20:20:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_prewarm bgworker could break fast shutdown"
}
] |
[
{
"msg_contents": "Forking this thread:\nhttps://www.postgresql.org/message-id/fd93f1c5-7818-a02c-01e5-1075ac0d4def@iki.fi\n\nI think these are old-fashioned since 9.6 (?), so remove them for v14.\n\nI found it confusing when re-familiarizing myself with modern streaming\nreplication that there are extensions which only help do things the \"old way\".",
"msg_date": "Wed, 28 Oct 2020 21:44:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] remove pg_archivecleanup and pg_standby"
},
{
"msg_contents": "Hi,\n\nAm Mittwoch, den 28.10.2020, 21:44 -0500 schrieb Justin Pryzby:\n> Forking this thread:\n> https://www.postgresql.org/message-id/fd93f1c5-7818-a02c-01e5-1075ac0d4def@iki.fi\n\nGlancing over this in the context of pg_standby/pg_archivecleanup, I am\nnot sure Heikki's \"Ditto\" is about \"remove pg_archivecleanup just like\npg_standby\" or rather \"keep the note until we get around to doing something\nwith it\". Probably the former, but see below.\n\n> I think these are old-fashioned since 9.6 (?), so remove them for v14.\n\nWhy 9.6?\n\n> I found it confusing when re-familiarizing myself with modern streaming\n> replication that there are extensions which only help do things the \"old way\".\n\nI guess not many will complain about pg_standby going away, but I am\nunder the impression that pg_archivecleanup is still used a lot in PITR\nbackup environments as a handy tool to expire WAL related to expired\nbase backups. I certainly saw hand-assembled shell code fail with \"too\nmany files\" and things when it tried to act on large amounts of WAL.\n\nSo I think the part about it being used in archive_cleanup_command can\nprobably be removed, but the part about it being useful as a stand-\nalone tool should stay, in particular this part:\n\n|In this mode, if you specify a .partial or .backup file name, then only\n|the file prefix will be used as the oldestkeptwalfile. 
This treatment\n|of .backup file name allows you to remove all WAL files archived prior\n|to a specific base backup without error.\n\nAt the very least, the commit message should give a rationale on why\npg_archivecleanup is retired, and what it should be replaced with, in\ncase valid use-cases for it are still present.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n\n",
"msg_date": "Thu, 29 Oct 2020 20:40:31 +0100",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_archivecleanup and pg_standby"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 08:40:31PM +0100, Michael Banck wrote:\n> Am Mittwoch, den 28.10.2020, 21:44 -0500 schrieb Justin Pryzby:\n> > Forking this thread:\n> > https://www.postgresql.org/message-id/fd93f1c5-7818-a02c-01e5-1075ac0d4def@iki.fi\n\n> > I think these are old-fashioned since 9.6 (?), so remove them for v14.\n> \n> Why 9.6?\n\nMy work doesn't currently bring me in contact with replication, so I've had to\ndig through release notes. I think streaming replication was new in 9.0, and\nincreasingly mature throughout 9.x. Maybe someone else will say a different\nrelease was when streaming replication became the norm and wal shipping old.\n\n> > I found it confusing when re-familiarizing myself with modern streaming\n> > replication that there are extensions which only help do things the \"old way\".\n> \n> I guess not many will complain about pg_standby going away, but I am\n> under the impression that pg_archivecleanup is still used a lot in PITR\n> backup environments as a handy tool to expire WAL related to expired\n> base backups. I certainly saw hand-assembled shell code fail with \"too\n> many files\" and things when it tried to act on large amount of WAL.\n\nI anticipate you're right, and I'll withdraw 0002.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 2 Nov 2020 12:26:21 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] remove pg_archivecleanup and pg_standby"
},
{
"msg_contents": "On 02/11/2020 20:26, Justin Pryzby wrote:\n> On Thu, Oct 29, 2020 at 08:40:31PM +0100, Michael Banck wrote:\n>> Am Mittwoch, den 28.10.2020, 21:44 -0500 schrieb Justin Pryzby:\n>>> Forking this thread:\n>>> https://www.postgresql.org/message-id/fd93f1c5-7818-a02c-01e5-1075ac0d4def@iki.fi\n> \n>>> I think these are old-fashioned since 9.6 (?), so remove them for v14.\n>>\n>> Why 9.6?\n> \n> My work doesn't currently bring me in contact with replication, so I've had to\n> dig through release notes. I think streaming replication was new in 9.0, and\n> increasingly mature throughout 9.x. Maybe someone else will say a different\n> release was when streaming replication became the norm and wal shipping old.\n\nRemoving pg_standby has been proposed a couple of times in the past. See \nhttps://www.postgresql.org/message-id/20170913064824.rqflkadxwpboabgw@alap3.anarazel.de \nfor the latest attempt.\n\nMasao-san, back in 2014 you mentioned \"fast failover\" as a feature that \nwas missing from the built-in standby mode \n(https://www.postgresql.org/message-id/CAHGQGwEE_8vvpQk0ex6Qa_aXt-OSJ7OdZjX4uM_FtqKfxq5SbQ%40mail.gmail.com). \nI think that's been implemented since, with the recovery_target \nsettings. Would you agree?\n\nI'm pretty sure we can remove pg_standby by now. But if there's \nsomething crucial missing from the built-in facilities, we need to talk \nabout implementing them.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 3 Nov 2020 17:28:46 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_archivecleanup and pg_standby"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 3:40 PM Michael Banck <michael.banck@credativ.de> wrote:\n> I guess not many will complain about pg_standby going away, but I am\n> under the impression that pg_archivecleanup is still used a lot in PITR\n> backup environments as a handy tool to expire WAL related to expired\n> base backups. I certainly saw hand-assembled shell code fail with \"too\n> many files\" and things when it tried to act on large amount of WAL.\n\nYeah, I see pg_archivecleanup used in customer environments all the\ntime. Like just this morning, for example.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 3 Nov 2020 11:49:02 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_archivecleanup and pg_standby"
},
{
"msg_contents": "On Tue, Nov 03, 2020 at 05:28:46PM +0200, Heikki Linnakangas wrote:\n> Removing pg_standby has been proposed a couple of times in the past. See https://www.postgresql.org/message-id/20170913064824.rqflkadxwpboabgw@alap3.anarazel.de\n> for the latest attempt.\n> \n> Masao-san, back in 2014 you mentioned \"fast failover\" as a feature that was\n> missing from the built-in standby mode (https://www.postgresql.org/message-id/CAHGQGwEE_8vvpQk0ex6Qa_aXt-OSJ7OdZjX4uM_FtqKfxq5SbQ%40mail.gmail.com).\n> I think that's been implemented since, with the recovery_target settings.\n> Would you agree?\n> \n> I'm pretty sure we can remove pg_standby by now. But if there's something\n> crucial missing from the built-in facilities, we need to talk about\n> implementing them.\n\nReading the thread you are mentioning, it seems to me that the\nstatus quo is the same, but I find it rather scary that this tool is used\nin exactly zero tests.\n\nEchoing Robert, I think that pg_archivecleanup is still useful in\nmany cases, so that's not something we should remove.\n--\nMichael",
"msg_date": "Mon, 16 Nov 2020 17:10:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_archivecleanup and pg_standby"
},
{
"msg_contents": "On 2020-10-29 03:44, Justin Pryzby wrote:\n> diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml\n> index 4e833d79ef..be4292ec33 100644\n> --- a/doc/src/sgml/contrib.sgml\n> +++ b/doc/src/sgml/contrib.sgml\n> @@ -199,6 +199,5 @@ pages.\n> part of the core <productname>PostgreSQL</productname> distribution.\n> </para>\n> \n> - &pgstandby;\n> </sect1>\n> </appendix>\n\nWith this removal, that section becomes empty. So you probably want to \nclean up or reorganize this a bit.\n\nSee https://www.postgresql.org/docs/devel/contrib-prog.html for the context.\n\n\n",
"msg_date": "Fri, 20 Nov 2020 17:26:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_archivecleanup and pg_standby"
},
{
"msg_contents": "On Fri, Nov 20, 2020 at 05:26:54PM +0100, Peter Eisentraut wrote:\n> On 2020-10-29 03:44, Justin Pryzby wrote:\n> > diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml\n> > index 4e833d79ef..be4292ec33 100644\n> > --- a/doc/src/sgml/contrib.sgml\n> > +++ b/doc/src/sgml/contrib.sgml\n> > @@ -199,6 +199,5 @@ pages.\n> > part of the core <productname>PostgreSQL</productname> distribution.\n> > </para>\n> > - &pgstandby;\n> > </sect1>\n> > </appendix>\n> \n> With this removal, that section becomes empty. So you probably want to\n> clean up or reorganize this a bit.\n> \n> See https://www.postgresql.org/docs/devel/contrib-prog.html for the context.\n\nOops. I guess I'd write something like this. If we just remove it, then\nthere'd be no place to add a new server application, and \"client applications\"\nwould be the only subsection.\n\n-- \nJustin",
"msg_date": "Sat, 21 Nov 2020 13:41:05 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] remove pg_standby"
},
{
"msg_contents": "On 2020-11-21 20:41, Justin Pryzby wrote:\n> On Fri, Nov 20, 2020 at 05:26:54PM +0100, Peter Eisentraut wrote:\n>> On 2020-10-29 03:44, Justin Pryzby wrote:\n>>> diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml\n>>> index 4e833d79ef..be4292ec33 100644\n>>> --- a/doc/src/sgml/contrib.sgml\n>>> +++ b/doc/src/sgml/contrib.sgml\n>>> @@ -199,6 +199,5 @@ pages.\n>>> part of the core <productname>PostgreSQL</productname> distribution.\n>>> </para>\n>>> - &pgstandby;\n>>> </sect1>\n>>> </appendix>\n>>\n>> With this removal, that section becomes empty. So you probably want to\n>> clean up or reorganize this a bit.\n>>\n>> See https://www.postgresql.org/docs/devel/contrib-prog.html for the context.\n> \n> Oops. I guess I'd write something like this. If we just remove it, then\n> there'd no place to add a new server application, and \"client applications\"\n> would be the only subsection.\n\nI have committed the typo fix. I don't have a well-formed opinion yet \nabout whether all the reservations about removing pg_standby have been \naddressed.\n\n\n",
"msg_date": "Wed, 25 Nov 2020 10:04:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_standby"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 10:04 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 2020-11-21 20:41, Justin Pryzby wrote:\n> > Oops. I guess I'd write something like this. If we just remove it, then\n> > there'd no place to add a new server application, and \"client applications\"\n> > would be the only subsection.\n>\n> I have committed the typo fix. I don't have a well-formed opinion yet\n> about whether all the reservations about removing pg_standby have been\n> addressed.\n\nI would like to commit this, because \"waiting restore commands\" have\nconfusing interactions with my proposed prefetching-during-recovery\npatch[1]. Here's a version that fixes an error when building the docs\n(there was a stray remaining <xref linkend=\"pgstandby\"/>), and adds a\ncommit message. Any objections?\n\nFurthermore, I think we should also remove the section of the manual\nthat describes how to write your own \"waiting restore command\".\nThoughts?\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGKFeYPL9K%2BSRixcsx1%2B6HsHhqK%2BPOZyrnnZjw1jERpGcQ%40mail.gmail.com",
"msg_date": "Wed, 27 Jan 2021 16:13:24 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_standby"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 04:13:24PM +1300, Thomas Munro wrote:\n> I would like to commit this, because \"waiting restore commands\" have\n> confusing interactions with my proposed prefetching-during-recovery\n> patch[1]. Here's a version that fixes an error when building the docs\n> (there was a stray remaining <xref linkend=\"pgstandby\"/>), and adds a\n> commit message. Any objections?\n\nIt looks like you are missing two references in your patch set:\n$ git grep pg_standby\ndoc/src/sgml/high-availability.sgml: Do not use pg_standby or\nsimilar tools with the built-in standby mode\nsrc/backend/access/transam/xlog.c: * segment. Only recycle normal\nfiles, pg_standby for example can create\n\nThe logic assumed in RemoveXlogFile() is actually a bit scary. I have\nnot checked in details but it could be possible to clean up more code\nin this area?\n\n> Furthermore, I think we should also remove the section of the manual\n> that describes how to write your own \"waiting restore command\".\n> Thoughts?\n\nAgreed. No objections to that.\n--\nMichael",
"msg_date": "Wed, 27 Jan 2021 14:06:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_standby"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 6:06 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jan 27, 2021 at 04:13:24PM +1300, Thomas Munro wrote:\n> > I would like to commit this, because \"waiting restore commands\" have\n> > confusing interactions with my proposed prefetching-during-recovery\n> > patch[1]. Here's a version that fixes an error when building the docs\n> > (there was a stray remaining <xref linkend=\"pgstandby\"/>), and adds a\n> > commit message. Any objections?\n>\n> It looks like you are missing two references in your patch set:\n> $ git grep pg_standby\n> doc/src/sgml/high-availability.sgml: Do not use pg_standby or\n> similar tools with the built-in standby mode\n> src/backend/access/transam/xlog.c: * segment. Only recycle normal\n> files, pg_standby for example can create\n\nThanks, fixed.\n\n> The logic assumed in RemoveXlogFile() is actually a bit scary. I have\n> not checked in details but it could be possible to clean up more code\n> in this area?\n\nI think the check that it's a regular file is a good idea anyway, but\nI removed the offending comment.\n\n> > Furthermore, I think we should also remove the section of the manual\n> > that describes how to write your own \"waiting restore command\".\n> > Thoughts?\n>\n> Agreed. No objections to that.\n\nThanks!",
"msg_date": "Wed, 27 Jan 2021 18:32:35 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_standby"
},
{
"msg_contents": "\n\nOn 2021/01/27 14:32, Thomas Munro wrote:\n> On Wed, Jan 27, 2021 at 6:06 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Wed, Jan 27, 2021 at 04:13:24PM +1300, Thomas Munro wrote:\n>>> I would like to commit this, because \"waiting restore commands\" have\n>>> confusing interactions with my proposed prefetching-during-recovery\n>>> patch[1]. Here's a version that fixes an error when building the docs\n>>> (there was a stray remaining <xref linkend=\"pgstandby\"/>), and adds a\n>>> commit message. Any objections?\n\nI agree with this direction (i.e., removing pg_standby). BTW, last month when I gave a talk about the possible retirement of pg_standby at PostgreSQL Unconference Tokyo, no one in the audience complained about that retirement.\n\nBut one question is; shouldn't we follow \"usual\" way to retire the feature instead of dropping that immediately? That is, mark pg_standby as obsolete, announce that pg_standby will be dropped after several releases, and then drop pg_standby. This seems safe because there might be some users. While it's been marked as obsolete, maybe WAL prefetch feature doesn't work with pg_standby, but we can live with that because it's obsolete.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 27 Jan 2021 17:08:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_standby"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 05:08:56PM +0900, Fujii Masao wrote:\n> But one question is; shouldn't we follow \"usual\" way to retire the\n> feature instead of dropping that immediately? That is, mark\n> pg_standby as obsolete, announce that pg_standby will be dropped\n> after several releases, and then drop pg_standby. This seems safe\n> because there might be some users. While it's been marked as\n> obsolete, maybe WAL prefetch feature doesn't work with pg_standby,\n> but we can live with that because it's obsolete.\n\nThanks. FWIW, at this stage, my take is just to move on and remove\nit. If we mark that as obsolete, it will stay around forever while\nannoying future development.\n--\nMichael",
"msg_date": "Thu, 28 Jan 2021 16:36:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_standby"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 8:36 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jan 27, 2021 at 05:08:56PM +0900, Fujii Masao wrote:\n> > But one question is; shouldn't we follow \"usual\" way to retire the\n> > feature instead of dropping that immediately? That is, mark\n> > pg_standby as obsolete, announce that pg_standby will be dropped\n> > after several releases, and then drop pg_standby. This seems safe\n> > because there might be some users. While it's been marked as\n> > obsolete, maybe WAL prefetch feature doesn't work with pg_standby,\n> > but we can live with that because it's obsolete.\n>\n> Thanks. FWIW, at this stage, my take is just to move on and remove\n> it. If we mark that as obsolete, it will stay around forever while\n> annoying future development.\n\nI agree. Also, this thing is entirely separate from the server, so a\nhypothetical user who really wants to upgrade to 14 but keep using\npg_standby a bit longer could always use the version that shipped with\n13.\n\n\n",
"msg_date": "Fri, 29 Jan 2021 11:13:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_standby"
},
{
"msg_contents": "On Fri, Jan 29, 2021 at 11:13 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jan 28, 2021 at 8:36 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Wed, Jan 27, 2021 at 05:08:56PM +0900, Fujii Masao wrote:\n> > > But one question is; shouldn't we follow \"usual\" way to retire the\n> > > feature instead of dropping that immediately? That is, mark\n> > > pg_standby as obsolete, announce that pg_standby will be dropped\n> > > after several releases, and then drop pg_standby. This seems safe\n> > > because there might be some users. While it's been marked as\n> > > obsolete, maybe WAL prefetch feature doesn't work with pg_standby,\n> > > but we can live with that because it's obsolete.\n> >\n> > Thanks. FWIW, at this stage, my take is just to move on and remove\n> > it. If we mark that as obsolete, it will stay around forever while\n> > annoying future development.\n>\n> I agree. Also, this thing is entirely separate from the server, so a\n> hypothetical user who really wants to upgrade to 14 but keep using\n> pg_standby a bit longer could always use the version that shipped with\n> 13.\n\nAnd, pushed.\n\n\n",
"msg_date": "Fri, 29 Jan 2021 14:19:22 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] remove pg_standby"
}
] |
[
{
"msg_contents": "postgres --help:\n -o OPTIONS pass \"OPTIONS\" to each server process (obsolete)\n\nThis was marked obsolete in 2006 (86c23a6eb28).\n\nIs it perhaps time to get rid of it?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 29 Oct 2020 15:41:48 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "-O switch"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> postgres --help:\n> -o OPTIONS pass \"OPTIONS\" to each server process (obsolete)\n\n> This was marked obsolete in 2006 (86c23a6eb28).\n\nI don't think it's really obsolete ... don't we use that to pass\nPGOPTIONS through from the client?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Oct 2020 11:45:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: -O switch"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 4:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > postgres --help:\n> > -o OPTIONS pass \"OPTIONS\" to each server process (obsolete)\n>\n> > This was marked obsolete in 2006 (86c23a6eb28).\n>\n> I don't think it's really obsolete ... don't we use that to pass\n> PGOPTIONS through from the client?\n\nThen it probably shouldn't be labeled as obsolete :)\n\nThat said, I don't think we do, or I'm misunderstanding what you mean.\nThe startup packet which holds the client options is not read until\nwe're already in the child process, so there is no further exec to be\ndone?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 29 Oct 2020 17:03:01 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: -O switch"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Thu, Oct 29, 2020 at 4:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't think it's really obsolete ... don't we use that to pass\n>> PGOPTIONS through from the client?\n\n> That said, I don't think we do, or I'm misunderstanding what you mean.\n> The startup packet which holds the client options is not read until\n> we're already in the child process, so there is no further exec to be\n> done?\n\n[ pokes around... ] Ah, you're right, that stuff goes through\nport->cmdline_options now. It looks like the mechanism for -o\nis the postmaster's ExtraOptions variable, which we could get\nrid of this way. Seems like a reasonable thing, especially since\nwe unified all the other postmaster/postgres options already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Oct 2020 12:25:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: -O switch"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 5:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Thu, Oct 29, 2020 at 4:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I don't think it's really obsolete ... don't we use that to pass\n> >> PGOPTIONS through from the client?\n>\n> > That said, I don't think we do, or I'm misunderstanding what you mean.\n> > The startup packet which holds the client options is not read until\n> > we're already in the child process, so there is no further exec to be\n> > done?\n>\n> [ pokes around... ] Ah, you're right, that stuff goes through\n> port->cmdline_options now. It looks like the mechanism for -o\n> is the postmaster's ExtraOptions variable, which we could get\n> rid of this way. Seems like a reasonable thing, especially since\n> we unified all the other postmaster/postgres options already.\n\nPFA a patch to do this.\n\nInitially I kept the dynamic argv/argc in even though it's now\nhardcoded, in case we wanted to add something back. But given the way\nit looks now, perhaps we should just get rid of BackendRun()\ncompletely and directly call PostgresMain()? Or keep BackendRun() with\njust setting the TopMemoryContext, but removing the dynamic parts?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 2 Nov 2020 14:46:02 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: -O switch"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> PFA a patch to do this.\n\nOne thing you missed is that the getopt() calls in both postmaster.c\nand postgres.c have 'o:' entries that should be removed. Also IIRC\nthere is a \"case 'o'\" in postgres.c to go along with that.\n\n> Initially I kept the dynamic argv/argc in even though it's now\n> hardcoded, in case we wanted to add something back. But given the way\n> it looks now, perhaps we should just get rid of BackendRun()\n> completely and directly call PostgresMain()? Or keep BackendRun() with\n> just setting the TopMemoryContext, but removing the dynamic parts?\n\nI'd be inclined to keep it as-is for now. It's not adding any significant\namount of cycles compared to the process fork, so we might as well\npreserve flexibility.\n\nIs it really possible to not include miscadmin.h in postmaster.c?\nI find that a bit surprising.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Nov 2020 12:58:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: -O switch"
},
{
"msg_contents": "On Mon, Nov 2, 2020 at 6:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > PFA a patch to do this.\n>\n> One thing you missed is that the getopt() calls in both postmaster.c\n> and postgres.c have 'o:' entries that should be removed. Also IIRC\n> there is a \"case 'o'\" in postgres.c to go along with that.\n\nHa. Of course. Oops.\n\nPFA updated.\n\n\n> > Initially I kept the dynamic argv/argc in even though it's now\n> > hardcoded, in case we wanted to add something back. But given the way\n> > it looks now, perhaps we should just get rid of BackendRun()\n> > completely and directly call PostgresMain()? Or keep BackendRun() with\n> > just setting the TopMemoryContext, but removing the dynamic parts?\n>\n> I'd be inclined to keep it as-is for now. It's not adding any significant\n> amount of cycles compared to the process fork, so we might as well\n> preserve flexibility.\n>\n> Is it really possible to not include miscadmin.h in postmaster.c?\n> I find that a bit surprising.\n\nI did too, but having removed it postmaster.c still compiles fine\nwithout warnings for me. It did also pass the cfbot build step, but it\nmight be that it'll eventually break down on some more different\nbuildfarm animal.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 3 Nov 2020 10:38:39 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: -O switch"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> [ remove_option_o_2.patch ]\n\nThis seems committable to me now, although ...\n\n> On Mon, Nov 2, 2020 at 6:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Magnus Hagander <magnus@hagander.net> writes:\n>>> Initially I kept the dynamic argv/argc in even though it's now\n>>> hardcoded, in case we wanted to add something back. But given the way\n>>> it looks now, perhaps we should just get rid of BackendRun()\n>>> completely and directly call PostgresMain()? Or keep BackendRun() with\n>>> just setting the TopMemoryContext, but removing the dynamic parts?\n\n>> I'd be inclined to keep it as-is for now. It's not adding any significant\n>> amount of cycles compared to the process fork, so we might as well\n>> preserve flexibility.\n\n... looking at this again, BackendRun certainly looks ridiculously\nover-engineered for what it still does. If we keep it like this, we\nshould at least add a comment along the lines of \"We once had the\nability to pass additional arguments to PostgresMain, and may someday\nwant to do that again\". But I wouldn't object to getting rid of the\ndynamic construction of the arg array, and the debug output too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Nov 2020 20:10:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: -O switch"
},
{
"msg_contents": "On Wed, Nov 4, 2020 at 2:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > [ remove_option_o_2.patch ]\n>\n> This seems committable to me now, although ...\n>\n> > On Mon, Nov 2, 2020 at 6:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Magnus Hagander <magnus@hagander.net> writes:\n> >>> Initially I kept the dynamic argv/argc in even though it's now\n> >>> hardcoded, in case we wanted to add something back. But given the way\n> >>> it looks now, perhaps we should just get rid of BackendRun()\n> >>> completely and directly call PostgresMain()? Or keep BackendRun() with\n> >>> just setting the TopMemoryContext, but removing the dynamic parts?\n>\n> >> I'd be inclined to keep it as-is for now. It's not adding any significant\n> >> amount of cycles compared to the process fork, so we might as well\n> >> preserve flexibility.\n>\n> ... looking at this again, BackendRun certainly looks ridiculously\n> over-engineered for what it still does. If we keep it like this, we\n> should at least add a comment along the lines of \"We once had the\n> ability to pass additional arguments to PostgresMain, and may someday\n> want to do that again\". But I wouldn't object to getting rid of the\n> dynamic construction of the arg array, and the debug output too.\n\nYeah, looking at it again, I agree. PFA an updated patch, which I'll\ngo ahead and push shortly.\n\nI do noticed when looking through this -- the comment before the function says:\n\n * returns:\n * Shouldn't return at all.\n * If PostgresMain() fails, return status.\n\nI'm pretty sure that's incorrect in the current branches as well,\nsince it's a void function it will never return anything. Pretty sure\nit should just have the first point and not the second one there, or\nis this trying to convey some meaning I'm just not getting?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 9 Nov 2020 11:42:02 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: -O switch"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Wed, Nov 4, 2020 at 2:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... looking at this again, BackendRun certainly looks ridiculously\n>> over-engineered for what it still does.\n\n> Yeah, looking at it again, I agree. PFA an updated patch, which I'll\n> go ahead and push shortly.\n\nLGTM.\n\n> I do noticed when looking through this -- the comment before the function says:\n\n> * returns:\n> * Shouldn't return at all.\n> * If PostgresMain() fails, return status.\n\n> I'm pretty sure that's incorrect in the current branches as well,\n> since it's a void function it will never return anything. Pretty sure\n> it should just have the first point and not the second one there, or\n> is this trying to convey some meaning I'm just not getting?\n\nLooking at old versions, BackendRun and PostgresMain used to be\ndeclared to return int. Whoever changed that to void evidently\nmissed updating this comment.\n\nI'd reduce the whole thing to \"Doesn't return.\" If you were feeling\nreally ambitious you could start plastering pg_attribute_noreturn() on\nthese functions ... but since that would be placed on the declarations,\na comment here would still be in order probably.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Nov 2020 10:58:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: -O switch"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 4:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Wed, Nov 4, 2020 at 2:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> ... looking at this again, BackendRun certainly looks ridiculously\n> >> over-engineered for what it still does.\n>\n> > Yeah, looking at it again, I agree. PFA an updated patch, which I'll\n> > go ahead and push shortly.\n>\n> LGTM.\n\nPushed.\n\n\n> > I do noticed when looking through this -- the comment before the function says:\n>\n> > * returns:\n> > * Shouldn't return at all.\n> > * If PostgresMain() fails, return status.\n>\n> > I'm pretty sure that's incorrect in the current branches as well,\n> > since it's a void function it will never return anything. Pretty sure\n> > it should just have the first point and not the second one there, or\n> > is this trying to convey some meaning I'm just not getting?\n>\n> Looking at old versions, BackendRun and PostgresMain used to be\n> declared to return int. Whoever changed that to void evidently\n> missed updating this comment.\n>\n> I'd reduce the whole thing to \"Doesn't return.\" If you were feeling\n> really ambitious you could start plastering pg_attribute_noreturn() on\n> these functions ... but since that would be placed on the declarations,\n> a comment here would still be in order probably.\n\nThey're already marked pg_attribute_noreturn() in the declarations.\nIt's just the comment that was a bit out of date.\n\nI'll go fix that one.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 10 Nov 2020 13:12:49 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: -O switch"
}
] |
[
{
"msg_contents": "Commit b918bf86c65 added the errorcode PXE_DECRYPT_FAILED to the existing set\nof PXE_ error codes. When pgcrypto was changed to the EVP APIs in 5ff4a67f63,\nno new error codes were added in favour of existing ones. This results in\nencryption failures returning PXE_ERR_GENERIC, which seems a bit inconsistent.\n\nThe attached introduce PXE_ENCRYPT_FAILED and use that for EVP_EncryptUpdate to\nideally be slightly clearer in case of errors. Any reason not to do that\ninstead of using ERR_GENERIC?\n\ncheers ./daniel",
"msg_date": "Thu, 29 Oct 2020 22:26:54 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Consistent error reporting for encryption/decryption in pgcrypto"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 10:26:54PM +0100, Daniel Gustafsson wrote:\n> The attached introduce PXE_ENCRYPT_FAILED and use that for EVP_EncryptUpdate to\n> ideally be slightly clearer in case of errors. Any reason not to do that\n> instead of using ERR_GENERIC?\n\n+1. While looking at that, I was wondering of the potential need of\nthis error code for other encryption code paths, but it happens that\nthis is only specific to OpenSSL. Rijndael or Blowfish don't need\nit.\n--\nMichael",
"msg_date": "Fri, 30 Oct 2020 09:31:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Consistent error reporting for encryption/decryption in pgcrypto"
},
{
"msg_contents": "Hi,\r\n\r\nthank you for your contribution.\r\n\r\nI did notice that the cfbot [1] is not failing for this patch.\r\n\r\nCheers,\r\n//Georgios\r\n\r\n[1] http://cfbot.cputube.org/daniel-gustafsson.html",
"msg_date": "Fri, 30 Oct 2020 15:54:54 +0000",
"msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consistent error reporting for encryption/decryption in pgcrypto"
},
{
"msg_contents": "> On 30 Oct 2020, at 16:54, Georgios Kokolatos <gkokolatos@protonmail.com> wrote:\n\n> I did notice that the cfbot [1] is not failing for this patch.\n\nI assume you mean s/failing/passing/? I noticed the red Travis and Appveyor\nruns, will fix over the weekend. Thanks for the heads-up.\n\ncheers ./daniel\n\n",
"msg_date": "Fri, 30 Oct 2020 23:23:27 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Consistent error reporting for encryption/decryption in pgcrypto"
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 11:23:27PM +0100, Daniel Gustafsson wrote:\n> > On 30 Oct 2020, at 16:54, Georgios Kokolatos <gkokolatos@protonmail.com> wrote:\n> \n> > I did notice that the cfbot [1] is not failing for this patch.\n> \n> I assume you mean s/failing/passing/? I noticed the red Travis and Appveyor\n> runs, will fix over the weekend. Thanks for the heads-up.\n\nIt seems to me that you are just missing to declare a new error number\nin px.h, so I would suggest to just use -19.\n--\nMichael",
"msg_date": "Sat, 31 Oct 2020 10:03:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Consistent error reporting for encryption/decryption in pgcrypto"
},
{
"msg_contents": "> On 31 Oct 2020, at 02:03, Michael Paquier <michael@paquier.xyz> wrote:\n\n> It seems to me that you are just missing to declare a new error number\n> in px.h, so I would suggest to just use -19.\n\nAh yes, I accidentally fat-fingered the git add -p when splitting up the NSS\npatch into bite size pieces. Sorry about that. The attached v2 has the error\ndeclaration.\n\ncheers ./daniel",
"msg_date": "Sat, 31 Oct 2020 21:40:12 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Consistent error reporting for encryption/decryption in pgcrypto"
},
{
"msg_contents": "On Sat, Oct 31, 2020 at 09:40:12PM +0100, Daniel Gustafsson wrote:\n> Ah yes, I accidentally fat-fingered the git add -p when splitting up the NSS\n> patch into bite size pieces. Sorry about that. The attached v2 has the error\n> declaration.\n\nThanks for updatng the patch. Applied.\n--\nMichael",
"msg_date": "Sun, 1 Nov 2020 19:23:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Consistent error reporting for encryption/decryption in pgcrypto"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThere're probably mostly harmless, being mostly error and debug\nmessages and the like, and considering that eg OID parsing tolerates\nnegative numbers when reading them back in, but for what it's worth:\nGCC complains about many %d vs %u type mixups if you build with\n$SUBJECT.\n\n\n",
"msg_date": "Fri, 30 Oct 2020 10:37:07 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "-Wformat-signedness"
},
{
"msg_contents": "On 2020-10-29 22:37, Thomas Munro wrote:\n> There're probably mostly harmless, being mostly error and debug\n> messages and the like, and considering that eg OID parsing tolerates\n> negative numbers when reading them back in, but for what it's worth:\n> GCC complains about many %d vs %u type mixups if you build with\n> $SUBJECT.\n\nI had looked into this some time ago. I have dusted off my patch again. \nThe attached version fixes all warnings for me.\n\nThe following are the main categories of issues:\n\n1. enums are unsigned by default in gcc, so all those internal error \nmessages \"unrecognized blah kind: %d\" need to be changed to %u.\n\nI have split that into its own patch since it's easily separable. All \nthe remaining issues are in one patch.\n\n2. Various trickery at the boundary of internal counters that are \nunsigned and external functions or views using signed types. These need \nanother look.\n\n3. Various messages print signed values using %x formats, which need to \nbe unsigned. These might also need another look.\n\n4. Issues with constants being signed by default. For example, things \nlike elog(ERROR, \"foo is %u but should be %u\", somevar, 55) warns \nbecause of the constant. Should be changed to something like 55U for \nsymmetry, or change the %u to %d. This also reaches into genbki \nterritory with all the OID constants being generated.\n\n5. Some \"surprising\" but correct C behavior. For example, unsigned \nshort is promoted to int (not unsigned int) in variable arguments, so \nneeds a %d format.\n\n6. Finally, a bunch of uses were just plain wrong and should be corrected.\n\nI haven't found anything that is a really serious bug, but I imagine you \ncould run into trouble in various ways when you exceed the INT_MAX \nvalue. But then again, if you use up INT_MAX WAL timelines, you \nprobably have other problems. ;-)",
"msg_date": "Mon, 9 Nov 2020 16:02:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: -Wformat-signedness"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> 1. enums are unsigned by default in gcc, so all those internal error \n> messages \"unrecognized blah kind: %d\" need to be changed to %u.\n\nDo we have reason to think that that is true in every C compiler?\nMy own preference for this would be to leave the messages as-is\nand add explicit \"(int)\" casts to the arguments. There are some\nfraction of these that are like that already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Nov 2020 10:25:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: -Wformat-signedness"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 4:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > 1. enums are unsigned by default in gcc, so all those internal error\n> > messages \"unrecognized blah kind: %d\" need to be changed to %u.\n>\n> Do we have reason to think that that is true in every C compiler?\n> My own preference for this would be to leave the messages as-is\n> and add explicit \"(int)\" casts to the arguments. There are some\n> fraction of these that are like that already.\n\n From experimentation, it seems that GCC enumerator constants are int,\nbut enum variables are int or signed int depending on whether any\nnegative values were defined. Valid values have to be representable\nas int anyway regardless of what size and signedness a compiler\nchooses to use, so yeah, +1 for casting to int.\n\n\n",
"msg_date": "Tue, 10 Nov 2020 11:43:26 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: -Wformat-signedness"
}
] |
[
{
"msg_contents": "While hacking on the NSS patch I realized that sslinfo was passing the ->ssl\nPort member directly to OpenSSL in order to extract information regarding the\nconnection. This breaks the API provided by the backend, as well as duplicates\ncode for no real benefit. The attached 0001 patch rewrites sslinfo to use the\nbe_tls_* API where possible to reduce duplication and keep the codebase TLS\ndependency (mostly) tucked away behind a nice API. 0001 also contains a small\nsslinfo doc update to cover that TLSv1.3 is a supported protocol.\n\n0002 ports OpenSSL errorhandling introduced in d94c36a45ab which was performed\nfor sslinfo but not the backend. I agree with the commit message that the risk\nis small (but not non-existing), but if the checks were important enough for\nsslinfo I'd argue they make sense for the backend too.\n\nThis patchset was pulled from the NSS patch, but it is entirely independent\nfrom NSS.\n\ncheers ./daniel",
"msg_date": "Thu, 29 Oct 2020 23:48:57 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "contrib/sslinfo cleanup and OpenSSL errorhandling"
},
{
"msg_contents": "Hi,\n\nThanks for extracting these.\n\nOn 2020-10-29 23:48:57 +0100, Daniel Gustafsson wrote:> \n> /*\n> @@ -54,9 +53,16 @@ PG_FUNCTION_INFO_V1(ssl_version);\n> Datum\n> ssl_version(PG_FUNCTION_ARGS)\n> {\n> -\tif (MyProcPort->ssl == NULL)\n> +\tconst char *version;\n> +\n> +\tif (!MyProcPort->ssl_in_use)\n> +\t\tPG_RETURN_NULL();\n> +\n> +\tversion = be_tls_get_version(MyProcPort);\n> +\tif (version == NULL)\n> \t\tPG_RETURN_NULL();\n> -\tPG_RETURN_TEXT_P(cstring_to_text(SSL_get_version(MyProcPort->ssl)));\n> +\n> +\tPG_RETURN_TEXT_P(cstring_to_text(version));\n> }\n\nThere's quite a few copies of this code that look exactly the same,\nexcept for the be_tls_get_* call. Do you see a way to have fewer copies\nof the same code?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Oct 2020 16:40:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: contrib/sslinfo cleanup and OpenSSL errorhandling"
},
{
"msg_contents": "On Thu, Oct 29, 2020 at 04:40:32PM -0700, Andres Freund wrote:\n> There's quite a few copies of this code that look exactly the same,\n> except for the be_tls_get_* call. Do you see a way to have fewer copies\n> of the same code?\n\nEach one of those code paths is working on a different sub-API aiming\nat fetching a specific piece of TLS information, and the way each\nsub-API does its lookup at MyProcPort is different. One possible way\nwould be to move the checks on ssl_in_use into a macro for the\nbeginning part. The end part could be improved by making\nX509_NAME_field_to_text() and such return a text and not a Datum, and\nmove the null check + text-to-datum conversion into a separate macro.\nI am not sure if this would be an improvement in terms of readability\nthough.\n--\nMichael",
"msg_date": "Fri, 30 Oct 2020 11:47:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: contrib/sslinfo cleanup and OpenSSL errorhandling"
},
{
"msg_contents": "> On 30 Oct 2020, at 00:40, Andres Freund <andres@anarazel.de> wrote:\n\n> There's quite a few copies of this code that look exactly the same,\n> except for the be_tls_get_* call. Do you see a way to have fewer copies\n> of the same code?\n\nThere's really only two of the same, and two sets of those. I tried some\nvariations but didn't really achieve anything that would strike the right\nbalance on the codegolf-to-readability scale. Maybe others have a richer\nimagination than me.\n\ncheers ./daniel\n\n\n",
"msg_date": "Fri, 30 Oct 2020 23:20:16 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: contrib/sslinfo cleanup and OpenSSL errorhandling"
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 11:20 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 30 Oct 2020, at 00:40, Andres Freund <andres@anarazel.de> wrote:\n>\n> > There's quite a few copies of this code that look exactly the same,\n> > except for the be_tls_get_* call. Do you see a way to have fewer copies\n> > of the same code?\n>\n> There's really only two of the same, and two sets of those. I tried some\n> variations but didn't really achieve anything that would strike the right\n> balance on the codegolf-to-readability scale. Maybe others have a richer\n> imagination than me.\n\nYeah, since it's only 2 of each, moving it to a macro wouldn't really\nsave a lot -- and it would make things less readable overall I think.\n\nSo I'd say the current version is OK.\n\nOne thing I noted was in the docs part of the patch there is a missing\ncomma -- but that one is missing previously as well. I'll go apply\nthat fix to the back branches while waiting to see if somebody comes\nup with a more creative way to avoid the repeated code :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 2 Nov 2020 15:19:55 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: contrib/sslinfo cleanup and OpenSSL errorhandling"
},
{
"msg_contents": "On Mon, Nov 2, 2020 at 3:19 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Fri, Oct 30, 2020 at 11:20 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 30 Oct 2020, at 00:40, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > There's quite a few copies of this code that look exactly the same,\n> > > except for the be_tls_get_* call. Do you see a way to have fewer copies\n> > > of the same code?\n> >\n> > There's really only two of the same, and two sets of those. I tried some\n> > variations but didn't really achieve anything that would strike the right\n> > balance on the codegolf-to-readability scale. Maybe others have a richer\n> > imagination than me.\n>\n> Yeah, since it's only 2 of each, moving it to a macro wouldn't really\n> save a lot -- and it would make things less readable overall I think.\n>\n> So I'd say the current version is OK.\n>\n> One thing I noted was in the docs part of the patch there is a missing\n> comma -- but that one is missing previously as well. I'll go apply\n> that fix to the back branches while waiting to see if somebody comes\n> up with a more creative way to avoid the repeated code :)\n\nApplied, with the small adjustment of the comma in the docs.\n\nI wonder if we should perhaps backpatch 0002? The changes to sslinfo\nthat were ported go all the way back to 9.6, so it should be a safe\none I think?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 3 Nov 2020 10:05:59 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: contrib/sslinfo cleanup and OpenSSL errorhandling"
},
{
"msg_contents": "> On 3 Nov 2020, at 10:05, Magnus Hagander <magnus@hagander.net> wrote:\n\n> Applied, with the small adjustment of the comma in the docs.\n\nThanks!\n\n> I wonder if we should perhaps backpatch 0002? The changes to sslinfo\n> that were ported go all the way back to 9.6, so it should be a safe\n> one I think?\n\nIt should be safe given that the code has been in production for a long time.\nThat being said, sslinfo doesn't have tests (yet) and probably isn't used that\nmuch; perhaps it's best to let this mature in HEAD for a few buildfarm cycles\nfirst?\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 3 Nov 2020 10:22:03 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: contrib/sslinfo cleanup and OpenSSL errorhandling"
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 10:22 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 3 Nov 2020, at 10:05, Magnus Hagander <magnus@hagander.net> wrote:\n>\n> > Applied, with the small adjustment of the comma in the docs.\n>\n> Thanks!\n>\n> > I wonder if we should perhaps backpatch 0002? The changes to sslinfo\n> > that were ported go all the way back to 9.6, so it should be a safe\n> > one I think?\n>\n> It should be safe given that the code has been in production for a long time.\n> That being said, sslinfo doesn't have tests (yet) and probably isn't used that\n> much; perhaps it's best to let this mature in HEAD for a few buildfarm cycles\n> first?\n\nYeah, that's a good point. I didn't realize we had no tests for that\none, so then it's a bit less safe in that regard. I agree with leaving\nit for at least one complete buildfarm run before backporting\nanything.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 3 Nov 2020 12:11:54 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: contrib/sslinfo cleanup and OpenSSL errorhandling"
}
] |
[
{
"msg_contents": "Hi\n\nI run my development instances with track_io_timing=on, as I've found\nthat to be really useful. Unfortunately that causes tests to fail\nwhenever I forget to turn that off to run installcheck.\n\nThe diffs are caused by the additional data shown in the explain tests:\n...\n- \"Temp Written Blocks\": N +\n+ \"Temp Written Blocks\": N, +\n+ \"I/O Read Time\": N.N, +\n+ \"I/O Write Time\": N.N +\n...\n\n\nFirst, why is the output of these fields conditional when using a\nnon-text format? Seems we instead should output -1 or null. The latter\nseems a bit clearer, but is a bit json specific. I guess we could add a\nExplainPropertyNull() or such?\n\nSecond, as long as it is conditional, would anybody mind if I put a\ntrack_io_timing=false into explain.sql? We don't try to make the tests\npass with every possible option set, but track_io_timing seems common\nenough?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Oct 2020 16:10:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "EXPLAIN vs track_io_timing=on vs tests"
},
{
"msg_contents": "Hi,\n\nOn 2020-10-29 16:10:37 -0700, Andres Freund wrote:\n> I run my development instances with track_io_timing=on, as I've found\n> that to be really useful. Unfortunately that causes tests to fail\n> whenever I forget to turn that off to run installcheck.\n> \n> The diffs are caused by the additional data shown in the explain tests:\n> ...\n> - \"Temp Written Blocks\": N +\n> + \"Temp Written Blocks\": N, +\n> + \"I/O Read Time\": N.N, +\n> + \"I/O Write Time\": N.N +\n> ...\n> \n> \n> First, why is the output of these fields conditional when using a\n> non-text format? Seems we instead should output -1 or null. The latter\n> seems a bit clearer, but is a bit json specific. I guess we could add a\n> ExplainPropertyNull() or such?\n\nNot addressed so far.\n\n\n> Second, as long as it is conditional, would anybody mind if I put a\n> track_io_timing=false into explain.sql? We don't try to make the tests\n> pass with every possible option set, but track_io_timing seems common\n> enough?\n\nDone that now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 12 Mar 2022 14:24:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: EXPLAIN vs track_io_timing=on vs tests"
}
] |
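The failure mode in the thread above is structural: with track_io_timing=on, EXPLAIN (FORMAT JSON) emits extra "I/O Read Time" / "I/O Write Time" keys, so the expected and actual regression output differ in their key sets, not just in values. A minimal sketch of that key-level diff, using made-up plan fragments rather than real PostgreSQL output:

```python
import json

# Made-up EXPLAIN (FORMAT JSON) buffer-usage fragments; the second mimics
# what the quoted diff shows when track_io_timing=on adds timing keys.
plan_off = json.loads('{"Temp Read Blocks": 0, "Temp Written Blocks": 12}')
plan_on = json.loads('{"Temp Read Blocks": 0, "Temp Written Blocks": 12, '
                     '"I/O Read Time": 0.123, "I/O Write Time": 0.456}')

def conditional_keys(a, b):
    """Keys present in only one of the two fragments."""
    return sorted(set(a).symmetric_difference(set(b)))

# These are the keys whose conditional presence breaks installcheck;
# always emitting them (as null or -1) would keep the key set stable.
print(conditional_keys(plan_off, plan_on))
```

This also shows why the null/-1 proposal fixes the problem: a key that is always present with a placeholder value still produces a stable structure for text comparison (after the usual value masking).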
[
{
"msg_contents": "Hi All,\n\nToday while working on some other task related to database encoding, I\nnoticed that the MINUS SIGN (with byte sequence a1-dd) in EUC-JP is\nmapped to FULLWIDTH HYPHEN-MINUS (with byte sequence ef-bc-8d) in\nUTF-8. See below:\n\npostgres=# select convert('\\xa1dd', 'euc_jp', 'utf8');\n convert\n----------\n \\xefbc8d\n(1 row)\n\nIsn't this a bug? Shouldn't this have been converted to the MINUS SIGN\n(with byte sequence e2-88-92) in UTF-8 instead of FULLWIDTH\nHYPHEN-MINUS SIGN.\n\nWhen the MINUS SIGN (with byte sequence e2-88-92) in UTF-8 is\nconverted to EUC-JP, the convert functions fails with an error saying:\n\"character with byte sequence 0xe2 0x88 0x92 in encoding UTF8 has no\nequivalent in encoding EUC_JP\". See below:\n\npostgres=# select convert('\\xe28892', 'utf-8', 'euc_jp');\nERROR: character with byte sequence 0xe2 0x88 0x92 in encoding \"UTF8\"\nhas no equivalent in encoding \"EUC_JP\"\n\nHowever, when the same MINUS SIGN in UTF-8 is converted to SJIS\nencoding, the convert function returns the correct result. See below:\n\npostgres=# select convert('\\xe28892', 'utf-8', 'sjis');\n convert\n---------\n \\x817c\n(1 row)\n\nPlease note that the byte sequence (81-7c) in SJIS represents MINUS\nSIGN in SJIS which means the MINUS SIGN in UTF8 got converted to the\nMINUS SIGN in SJIS and that is what we expect. Isn't it?\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Oct 2020 06:13:53 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 9:44 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi All,\n>\n> Today while working on some other task related to database encoding, I\n> noticed that the MINUS SIGN (with byte sequence a1-dd) in EUC-JP is\n> mapped to FULLWIDTH HYPHEN-MINUS (with byte sequence ef-bc-8d) in\n> UTF-8. See below:\n>\n> postgres=# select convert('\\xa1dd', 'euc_jp', 'utf8');\n> convert\n> ----------\n> \\xefbc8d\n> (1 row)\n>\n> Isn't this a bug? Shouldn't this have been converted to the MINUS SIGN\n> (with byte sequence e2-88-92) in UTF-8 instead of FULLWIDTH\n> HYPHEN-MINUS SIGN.\n>\n> When the MINUS SIGN (with byte sequence e2-88-92) in UTF-8 is\n> converted to EUC-JP, the convert functions fails with an error saying:\n> \"character with byte sequence 0xe2 0x88 0x92 in encoding UTF8 has no\n> equivalent in encoding EUC_JP\". See below:\n>\n> postgres=# select convert('\\xe28892', 'utf-8', 'euc_jp');\n> ERROR: character with byte sequence 0xe2 0x88 0x92 in encoding \"UTF8\"\n> has no equivalent in encoding \"EUC_JP\"\n>\n> However, when the same MINUS SIGN in UTF-8 is converted to SJIS\n> encoding, the convert function returns the correct result. See below:\n>\n> postgres=# select convert('\\xe28892', 'utf-8', 'sjis');\n> convert\n> ---------\n> \\x817c\n> (1 row)\n>\n> Please note that the byte sequence (81-7c) in SJIS represents MINUS\n> SIGN in SJIS which means the MINUS SIGN in UTF8 got converted to the\n> MINUS SIGN in SJIS and that is what we expect. 
Isn't it?\n\nSo we have\n\na1dd in euc_jp,\n817c in sjis,\nefbc8d in utf-8\n\nthat convert between each other just fine.\n\nBut when it comes to\n\ne28892 in utf-8\n\nit currently only converts to sjis and that too just one way:\n\nselect convert('\\xe28892', 'utf-8', 'sjis');\n convert\n---------\n \\x817c\n(1 row)\n\nselect convert('\\x817c', 'sjis', 'utf-8');\n convert\n----------\n \\xefbc8d\n(1 row)\n\nI noticed that the commit a8bd7e1c6e02 from ages ago removed\nconversions from and to utf-8's e28892, in favor of efbc8d, and that\nchange has stuck. (Note though that these maps looked pretty\ndifferent back then.)\n\n--- a/src/backend/utils/mb/Unicode/euc_jp_to_utf8.map\n+++ b/src/backend/utils/mb/Unicode/euc_jp_to_utf8.map\n- {0xa1dd, 0xe28892},\n+ {0xa1dd, 0xefbc8d},\n\n--- a/src/backend/utils/mb/Unicode/utf8_to_euc_jp.map\n+++ b/src/backend/utils/mb/Unicode/utf8_to_euc_jp.map\n- {0xe28892, 0xa1dd},\n+ {0xefbc8d, 0xa1dd},\n\nCan't tell what reason there was to do that, but there must have been\nsome. Maybe the Japanese character sets prefer full-width hyphen\nminus (unicode U+FF0D) over mathematical minus sign (U+2212)?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Oct 2020 12:08:51 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "Hello.\n\nAt Fri, 30 Oct 2020 06:13:53 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in \n> Hi All,\n> \n> Today while working on some other task related to database encoding, I\n> noticed that the MINUS SIGN (with byte sequence a1-dd) in EUC-JP is\n> mapped to FULLWIDTH HYPHEN-MINUS (with byte sequence ef-bc-8d) in\n> UTF-8. See below:\n> \n> postgres=# select convert('\\xa1dd', 'euc_jp', 'utf8');\n> convert\n> ----------\n> \\xefbc8d\n> (1 row)\n> \n> Isn't this a bug? Shouldn't this have been converted to the MINUS SIGN\n> (with byte sequence e2-88-92) in UTF-8 instead of FULLWIDTH\n> HYPHEN-MINUS SIGN.\n\nNo it's not a bug, but a well-known \"design\":(\n\nThe mapping is generated from CP932.TXT and JIS0212.TXT by\nUCS_to_UEC_JP.pl.\n\nCP932.TXT used here is here.\n\nhttps://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/WINDOWS/CP932.TXT\n\nCP932.TXT maps 0x817C(SJIS) = 0xA1DD(EUC-JP) as follows.\n\n0x817C\t0xFF0D\t#FULLWIDTH HYPHEN-MINUS\n\n> When the MINUS SIGN (with byte sequence e2-88-92) in UTF-8 is\n> converted to EUC-JP, the convert functions fails with an error saying:\n> \"character with byte sequence 0xe2 0x88 0x92 in encoding UTF8 has no\n> equivalent in encoding EUC_JP\". See below:\n>\n> postgres=# select convert('\\xe28892', 'utf-8', 'euc_jp');\n> ERROR: character with byte sequence 0xe2 0x88 0x92 in encoding \"UTF8\"\n> has no equivalent in encoding \"EUC_JP\"\n\nU+FF0D(ef bc 8d) is mapped to 0xa1dd@euc-jp\nU+2212(e2 88 92) doesn't have a mapping between euc-jp.\n\n> However, when the same MINUS SIGN in UTF-8 is converted to SJIS\n> encoding, the convert function returns the correct result. See below:\n> \n> postgres=# select convert('\\xe28892', 'utf-8', 'sjis');\n> convert\n> ---------\n> \\x817c\n> (1 row)\n\nIt is manually added by UCS_to_SJIS.pl. 
I'm not sure about the reason\nbut maybe because it was used widely.\n\nSo ping-pong between Unicode and SJIS behaves like this:\n\nU+2212 => 0x817c@sjis => U+ff0d => 0x817c@sjis ...\n\n> Please note that the byte sequence (81-7c) in SJIS represents MINUS\n> SIGN in SJIS which means the MINUS SIGN in UTF8 got converted to the\n> MINUS SIGN in SJIS and that is what we expect. Isn't it?\n\nI think we don't change authoritative mappings, but maybe can add some\none-way conversions for the convenience.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 30 Oct 2020 12:19:50 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Oct 30, 2020 at 9:44 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>> Today while working on some other task related to database encoding, I\n>> noticed that the MINUS SIGN (with byte sequence a1-dd) in EUC-JP is\n>> mapped to FULLWIDTH HYPHEN-MINUS (with byte sequence ef-bc-8d) in\n>> UTF-8. See below:\n>> ...\n>> Isn't this a bug?\n\n> Can't tell what reason there was to do that, but there must have been\n> some. Maybe the Japanese character sets prefer full-width hyphen\n> minus (unicode U+FF0D) over mathematical minus sign (U+2212)?\n\nThe way it's been explained to me in the past is that the conversion\nbetween Unicode and the various Japanese encodings is not as well\ndefined as one could wish, because there are multiple quasi-standard\nversions of the Japanese encodings. So we shouldn't move too hastily\non changing this. Maybe it's really a bug, but maybe there are good\nreasons.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Oct 2020 23:24:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "At Fri, 30 Oct 2020 12:08:51 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> I noticed that the commit a8bd7e1c6e02 from ages ago removed\n> conversions from and to utf-8's e28892, in favor of efbc8d, and that\n> change has stuck. (Note though that these maps looked pretty\n> different back then.)\n> \n> --- a/src/backend/utils/mb/Unicode/euc_jp_to_utf8.map\n> +++ b/src/backend/utils/mb/Unicode/euc_jp_to_utf8.map\n> - {0xa1dd, 0xe28892},\n> + {0xa1dd, 0xefbc8d},\n> \n> --- a/src/backend/utils/mb/Unicode/utf8_to_euc_jp.map\n> +++ b/src/backend/utils/mb/Unicode/utf8_to_euc_jp.map\n> - {0xe28892, 0xa1dd},\n> + {0xefbc8d, 0xa1dd},\n> \n> Can't tell what reason there was to do that, but there must have been\n> some. Maybe the Japanese character sets prefer full-width hyphen\n> minus (unicode U+FF0D) over mathematical minus sign (U+2212)?\n\nIt's a decision made by Microsoft. Several other characters are in\nsimilar issues. I remember many people complained but in the end that\nwasn't \"fixed\" and led to the well-known conversion messes of Japanese\ncharacter conversion involving Unicode in Java.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 30 Oct 2020 12:28:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "> Hi All,\n> \n> Today while working on some other task related to database encoding, I\n> noticed that the MINUS SIGN (with byte sequence a1-dd) in EUC-JP is\n> mapped to FULLWIDTH HYPHEN-MINUS (with byte sequence ef-bc-8d) in\n> UTF-8. See below:\n> \n> postgres=# select convert('\\xa1dd', 'euc_jp', 'utf8');\n> convert\n> ----------\n> \\xefbc8d\n> (1 row)\n> \n> Isn't this a bug? Shouldn't this have been converted to the MINUS SIGN\n> (with byte sequence e2-88-92) in UTF-8 instead of FULLWIDTH\n> HYPHEN-MINUS SIGN.\n\nYeah. Originally EUC_JP 0xa1dd was converted to UTF8 0xe28892. At some\npoint, someone changed the mapping and now you see it.\n\n> When the MINUS SIGN (with byte sequence e2-88-92) in UTF-8 is\n> converted to EUC-JP, the convert functions fails with an error saying:\n> \"character with byte sequence 0xe2 0x88 0x92 in encoding UTF8 has no\n> equivalent in encoding EUC_JP\". See below:\n> \n> postgres=# select convert('\\xe28892', 'utf-8', 'euc_jp');\n> ERROR: character with byte sequence 0xe2 0x88 0x92 in encoding \"UTF8\"\n> has no equivalent in encoding \"EUC_JP\"\n\nAgain, originally UTF8 0xe28892 was converted to EUC_JP 0xa1dd . At\nsome point, someone changed the mapping.\n\n> However, when the same MINUS SIGN in UTF-8 is converted to SJIS\n> encoding, the convert function returns the correct result. See below:\n> \n> postgres=# select convert('\\xe28892', 'utf-8', 'sjis');\n> convert\n> ---------\n> \\x817c\n> (1 row)\n> \n> Please note that the byte sequence (81-7c) in SJIS represents MINUS\n> SIGN in SJIS which means the MINUS SIGN in UTF8 got converted to the\n> MINUS SIGN in SJIS and that is what we expect. Isn't it?\n\nAgreed.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 30 Oct 2020 13:06:26 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "> The mapping is generated from CP932.TXT and JIS0212.TXT by\n> UCS_to_EUC_JP.pl.\n\nI still don't understand why this change has been made. Originally the\nconversion was based on JIS0208.txt, JIS0212.txt and JIS0201.txt,\nwhich is the exact definition of EUC-JP. CP932.txt is defined by\nMicrosoft for their products.\n\nProbably we should call our \"EUC-JP\" something like \"EUC-JP-MS\" or\nwhatever to differentiate from true EUC-JP.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 30 Oct 2020 13:17:08 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 8:49 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> At Fri, 30 Oct 2020 06:13:53 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in\n> > Hi All,\n> >\n> > Today while working on some other task related to database encoding, I\n> > noticed that the MINUS SIGN (with byte sequence a1-dd) in EUC-JP is\n> > mapped to FULLWIDTH HYPHEN-MINUS (with byte sequence ef-bc-8d) in\n> > UTF-8. See below:\n> >\n> > postgres=# select convert('\\xa1dd', 'euc_jp', 'utf8');\n> > convert\n> > ----------\n> > \\xefbc8d\n> > (1 row)\n> >\n> > Isn't this a bug? Shouldn't this have been converted to the MINUS SIGN\n> > (with byte sequence e2-88-92) in UTF-8 instead of FULLWIDTH\n> > HYPHEN-MINUS SIGN.\n>\n> No it's not a bug, but a well-known \"design\":(\n>\n> The mapping is generated from CP932.TXT and JIS0212.TXT by\n> UCS_to_UEC_JP.pl.\n>\n> CP932.TXT used here is here.\n>\n> https://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/WINDOWS/CP932.TXT\n>\n> CP932.TXT maps 0x817C(SJIS) = 0xA1DD(EUC-JP) as follows.\n>\n> 0x817C 0xFF0D #FULLWIDTH HYPHEN-MINUS\n>\n\nWe do have MINUS SIGN (U+2212) defined in both UTF-8 and EUC-JP\nencoding. So, not sure why converting MINUS SIGN from UTF-8 to EUC-JP\nshould throw an error saying: \"... in encoding UTF8 has *no*\nequivalent in EUC_JP\". I mean this information looks misleading and\nthat's I reason I feel its a bug.\n\n> > When the MINUS SIGN (with byte sequence e2-88-92) in UTF-8 is\n> > converted to EUC-JP, the convert functions fails with an error saying:\n> > \"character with byte sequence 0xe2 0x88 0x92 in encoding UTF8 has no\n> > equivalent in encoding EUC_JP\". 
See below:\n> >\n> > postgres=# select convert('\\xe28892', 'utf-8', 'euc_jp');\n> > ERROR: character with byte sequence 0xe2 0x88 0x92 in encoding \"UTF8\"\n> > has no equivalent in encoding \"EUC_JP\"\n>\n> U+FF0D(ef bc 8d) is mapped to 0xa1dd@euc-jp\n> U+2212(e2 88 92) doesn't have a mapping between euc-jp.\n>\n> > However, when the same MINUS SIGN in UTF-8 is converted to SJIS\n> > encoding, the convert function returns the correct result. See below:\n> >\n> > postgres=# select convert('\\xe28892', 'utf-8', 'sjis');\n> > convert\n> > ---------\n> > \\x817c\n> > (1 row)\n>\n> It is manually added by UCS_to_SJIS.pl. I'm not sure about the reason\n> but maybe because it was used widely.\n>\n> So ping-pong between Unicode and SJIS behaves like this:\n>\n> U+2212 => 0x817c@sjis => U+ff0d => 0x817c@sjis ...\n>\n> > Please note that the byte sequence (81-7c) in SJIS represents MINUS\n> > SIGN in SJIS which means the MINUS SIGN in UTF8 got converted to the\n> > MINUS SIGN in SJIS and that is what we expect. Isn't it?\n>\n> I think we don't change authoritative mappings, but maybe can add some\n> one-way conversions for the convenience.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 30 Oct 2020 10:04:22 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "At Fri, 30 Oct 2020 13:17:08 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n> > The mapping is generated from CP932.TXT and JIS0212.TXT by\n> > UCS_to_UEC_JP.pl.\n> \n> I still don't understand why this change has been made. Originally the\n> conversion was based on JIS0208.txt, JIS0212.txt and JIS0201.txt,\n> which is the exact definition of EUC-JP. CP932.txt is defined by\n> Microsoft for their products.\n> \n> Probably we should call our \"EUC-JP\" something like \"EUC-JP-MS\" or\n> whatever to differentiate from true EUC-JP.\n\nSeems valid. Things are already so at the time aeed17d is introduced\n(I believe it didn't make any difference in conversions.) and the\nchange was made by a8bd7e1c6e in 2002.\n\n\nI'm not sure the point of the change, though..\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 30 Oct 2020 13:47:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 12:20 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Fri, 30 Oct 2020 06:13:53 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in\n> > However, when the same MINUS SIGN in UTF-8 is converted to SJIS\n> > encoding, the convert function returns the correct result. See below:\n> >\n> > postgres=# select convert('\\xe28892', 'utf-8', 'sjis');\n> > convert\n> > ---------\n> > \\x817c\n> > (1 row)\n>\n> It is manually added by UCS_to_SJIS.pl. I'm not sure about the reason\n> but maybe because it was used widely.\n>\n> So ping-pong between Unicode and SJIS behaves like this:\n>\n> U+2212 => 0x817c@sjis => U+ff0d => 0x817c@sjis ...\n\nIs it the following piece of code in UCS_TO_SJIS.pl that manually adds\nthe mapping?\n\n# Add these UTF8->SJIS pairs to the table.\npush @$mapping,\n...\n {\n direction => FROM_UNICODE,\n ucs => 0x2212,\n code => 0x817c,\n comment => '# MINUS SIGN',\n f => $this_script,\n l => __LINE__\n },\n\nGiven that U+2212 is encoded by e28892 in utf8, I assume that's how\nutf8_to_sjis.map ends up with the following mapping into sjis for that\nbyte sequence:\n\n /*** Three byte table, leaf: e288xx - offset 0x004ee ***/\n\n /* 80 */ 0x81cd, 0x0000, 0x81dd, 0x81ce, 0x0000, 0x0000, 0x0000, 0x81de,\n /* 88 */ 0x81b8, 0x0000, 0x0000, 0x81b9, 0x0000, 0x0000, 0x0000, 0x0000,\n /* 90 */ 0x0000, 0x8794, \"0x817c\", ...\n\n> > Please note that the byte sequence (81-7c) in SJIS represents MINUS\n> > SIGN in SJIS which means the MINUS SIGN in UTF8 got converted to the\n> > MINUS SIGN in SJIS and that is what we expect. Isn't it?\n>\n> I think we don't change authoritative mappings, but maybe can add some\n> one-way conversions for the convenience.\n\nMaybe UCS_TO_EUC_JP.pl could do something like the above.\n\nAre there other cases that were fixed like this in the past, either\nfor euc_jp or sjis?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Oct 2020 14:38:30 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "At Fri, 30 Oct 2020 14:38:30 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> On Fri, Oct 30, 2020 at 12:20 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > So ping-pong between Unicode and SJIS behaves like this:\n> >\n> > U+2212 => 0x817c@sjis => U+ff0d => 0x817c@sjis ...\n> \n> Is it the following piece of code in UCS_TO_SJIS.pl that manually adds\n> the mapping?\n\nYes.\n\n> # Add these UTF8->SJIS pairs to the table.\n> push @$mapping,\n> ...\n> {\n> direction => FROM_UNICODE,\n> ucs => 0x2212,\n> code => 0x817c,\n> comment => '# MINUS SIGN',\n> f => $this_script,\n> l => __LINE__\n> },\n> \n> Given that U+2212 is encoded by e28892 in utf8, I assume that's how\n> utf8_to_sjis.map ends up with the following mapping into sjis for that\n> byte sequence:\n> \n> /*** Three byte table, leaf: e288xx - offset 0x004ee ***/\n> \n> /* 80 */ 0x81cd, 0x0000, 0x81dd, 0x81ce, 0x0000, 0x0000, 0x0000, 0x81de,\n> /* 88 */ 0x81b8, 0x0000, 0x0000, 0x81b9, 0x0000, 0x0000, 0x0000, 0x0000,\n> /* 90 */ 0x0000, 0x8794, \"0x817c\", ...\n\nI'm not sure how we should construct our won mapping, but the\ndifference made by we simply moved to JIS0208.TXT based as Ishii-san\nsuggested the differences in the mapping would be as the follows.\n\n1. The following codes (regions) are not defined in JIS0208.\n\n 8ea1 - 8edf (up to 64 characters (I didn't actually counted them.))\n ada1 - adfc (up to 92 characters (ditto))\n 8ff3f3 - 8ff4a8 (up to 182 characters (ditto))\n\n a1c0 ff3c: (ff3c: FULLWIDTH REVERSE SOLIDUS)\n 8ff4aa ff07: (ff07: FULLWIDTH APOSTROPHE)\n\n2. 
some individual differences\n\n EUC 0208 932\n a1c1 301c ff5e: (301c:WAVE DASH)\n a1c2 2016 2225: (2016:DOUBLE_VERTICAL LINE) : (2225:PARALLEL TO)\n* a1dd 2212 ff0d: (2212: MINUS_SIGN) : (ff0d: FULLWIDTH HYPHEN-MINUS)\n d1f1 a2 ffe0: (00a2: CENT SIGN) : (ffe0: FULLWIDTH CENT SIGN)\n d1f2 a3 ffe1: (00a3: POUND SIGN) : (ffe1: FULLWIDTH POUND SIGN)\n a2cc ac ffe2: (00ac: NOT SIGN) : (ffe2: FULLWIDTH NOT SIGN)\n\n\n*1: https://www.unicode.org/Public/MAPPINGS/OBSOLETE/EASTASIA/JIS/JIS0208.TXT\n\n> > > Please note that the byte sequence (81-7c) in SJIS represents MINUS\n> > > SIGN in SJIS which means the MINUS SIGN in UTF8 got converted to the\n> > > MINUS SIGN in SJIS and that is what we expect. Isn't it?\n> >\n> > I think we don't change authoritative mappings, but maybe can add some\n> > one-way conversions for the convenience.\n> \n> Maybe UCS_TO_EUC_JP.pl could do something like the above.\n> \n> Are there other cases that were fixed like this in the past, either\n> for euc_jp or sjis?\n\nHonestly, I don't know how the mapping was decided in 2002, but\nremoving the regions in 1 would cause confusion. So what we can do in\nthis area would be changing some of 2 to 0208 mapping. But arbitrary\nmixture of different mappings would cause new problems..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 30 Oct 2020 16:33:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
},
{
"msg_contents": "At Fri, 30 Oct 2020 16:33:01 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 30 Oct 2020 14:38:30 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> I'm not sure how we should construct our won mapping, but the\n> difference made by we simply moved to JIS0208.TXT based as Ishii-san\n> suggested the differences in the mapping would be as the follows.\n\nMmm..\n\nI'm not sure how we should construct our own mapping, but the\ndifference made by simply moving to JIS0208.TXT-based as Ishii-san\nsuggested, the following differences would be seen in the mappings.\n\n> 1. The following codes (regions) are not defined in JIS0208.\n> \n> 8ea1 - 8edf (up to 64 characters (I didn't actually counted them.))\n> ada1 - adfc (up to 92 characters (ditto))\n> 8ff3f3 - 8ff4a8 (up to 182 characters (ditto))\n\n 8ea1 - 8edf (64 chars. U+ff61 - U+ff9f) (hankaku-kana)\n ada1 - adfc (83 chars, U+2460 - U+33a1) (circled numbers)\n 8ff3f3 - 8ff4a8 (20 chars, U+2160 - U+2179) (roman numerals)\n\n> a1c0 ff3c: (ff3c: FULLWIDTH REVERSE SOLIDUS)\n> 8ff4aa ff07: (ff07: FULLWIDTH APOSTROPHE)\n> \n> 2. some individual differences\n> \n> EUC 0208 932\n> a1c1 301c ff5e: (301c:WAVE DASH)\n> a1c2 2016 2225: (2016:DOUBLE_VERTICAL LINE) : (2225:PARALLEL TO)\n> * a1dd 2212 ff0d: (2212: MINUS_SIGN) : (ff0d: FULLWIDTH HYPHEN-MINUS)\n> d1f1 a2 ffe0: (00a2: CENT SIGN) : (ffe0: FULLWIDTH CENT SIGN)\n> d1f2 a3 ffe1: (00a3: PUND SIGN) : (ffe1: FULLWIDTH POUND SIGN)\n> a2cc ac ffe2: (00ac: NOT SIGN) : (ffe2: FULLWIDTH NOT SIGN)\n> \n> \n> *1: https://www.unicode.org/Public/MAPPINGS/OBSOLETE/EASTASIA/JIS/JIS0208.TXT\n> \n> > > > Please note that the byte sequence (81-7c) in SJIS represents MINUS\n> > > > SIGN in SJIS which means the MINUS SIGN in UTF8 got converted to the\n> > > > MINUS SIGN in SJIS and that is what we expect. 
Isn't it?\n> > >\n> > > I think we don't change authoritative mappings, but maybe can add some\n> > > one-way conversions for the convenience.\n> > \n> > Maybe UCS_TO_EUC_JP.pl could do something like the above.\n> > \n> > Are there other cases that were fixed like this in the past, either\n> > for euc_jp or sjis?\n> \n> Honestly, I don't know how the mapping was decided in 2002, but\n> removing the regions in 1 would cause confusion. So what we can do in\n> this area would be chaning some of 2 to 0208 mapping. But arbitrary\n> mixture of different mapings would cause new problem..\n\n Forgot about adding one-way mappings. I think we can add several\n such mappings, say.\n\n U+3031->: EUC:a1c1 <-> U+ff5e\n U+2016->: EUC:a1c2 <-> U+2225\n U+2212->: EUC:a1dd <-> U+ff0d\n U+00a2->: EUC:d1f1 <-> U+ffe0\n U+00a3->: EUC:d1f2 <-> U+ffe1\n U+00ac->: EUC:a2cc <-> U+ffe2\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 30 Oct 2020 16:56:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MINUS SIGN (U+2212) in EUC-JP encoding is mapped to FULLWIDTH\n HYPHEN-MINUS (U+FF0D) in UTF-8"
}
] |
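For readers without a PostgreSQL instance handy, the JIS-versus-CP932 mapping split discussed in this thread can also be observed in Python's codecs: its euc_jp and shift_jis codecs follow the JIS X 0208 mapping, while cp932 follows Microsoft's CP932.TXT, the file PostgreSQL's EUC_JP map is generated from. This is only an analogy (that Python's codecs match these tables for this character is the assumption here; it is not PostgreSQL's conversion code):

```python
# 0xA1DD in EUC-JP and 0x817C in SJIS are the same JIS X 0208 character
# (kuten 01-61). The JIS-based codecs decode it to U+2212 MINUS SIGN,
# while the CP932.TXT-based codec decodes it to U+FF0D FULLWIDTH
# HYPHEN-MINUS, the same divergence behind the convert() behavior above.
print("U+%04X" % ord(b"\xa1\xdd".decode("euc_jp")))     # JIS mapping
print("U+%04X" % ord(b"\x81\x7c".decode("shift_jis")))  # JIS mapping
print("U+%04X" % ord(b"\x81\x7c".decode("cp932")))      # CP932.TXT mapping
```

The first two decodes give U+2212 and the third gives U+FF0D, mirroring the a1dd/817c/efbc8d triangle laid out earlier in the thread.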
[
{
"msg_contents": "Hi\r\n\r\nI am checking one customer query where there are some empty tables in a\r\nnontrivial query. The fixed estimation on empty tables in Postgres are\r\nworking very well, but I found another issue.\r\n\r\ncreate table test_a(id int);\r\ncreate table test_b(id int);\r\ninsert into test_a select generate_series(1,100000);\r\nanalyze test_a, test_b;\r\n\r\nwith zero row in test_b postgres optimizer uses nested loop\r\n\r\npostgres=# explain analyze select * from test_a where not exists(select *\r\nfrom test_b where test_a.id=test_b.id);\r\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Nested Loop Anti Join (cost=0.00..2693.00 rows=99999 width=4) (actual\r\ntime=0.024..90.530 rows=100000 loops=1) │\r\n│ Join Filter: (test_a.id = test_b.id)\r\n │\r\n│ -> Seq Scan on test_a (cost=0.00..1443.00 rows=100000 width=4)\r\n(actual time=0.017..8.171 rows=100000 loops=1) │\r\n│ -> Seq Scan on test_b (cost=0.00..0.00 rows=1 width=4) (actual\r\ntime=0.000..0.000 rows=0 loops=100000) │\r\n│ Planning Time: 0.153 ms\r\n │\r\n│ Execution Time: 94.331 ms\r\n │\r\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(6 rows)\r\n\r\nbut if I add one fake row to test_b, I got hash antijoin\r\n\r\ninsert into test_b values(-1);\r\nanalyze test_b;\r\n\r\npostgres=# explain analyze select * from test_a where not exists(select *\r\nfrom test_b where test_a.id=test_b.id);\r\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Hash Anti Join (cost=1.02..2706.51 rows=99999 
width=4) (actual\r\ntime=0.026..24.474 rows=100000 loops=1) │\r\n│ Hash Cond: (test_a.id = test_b.id)\r\n │\r\n│ -> Seq Scan on test_a (cost=0.00..1443.00 rows=100000 width=4)\r\n(actual time=0.010..8.522 rows=100000 loops=1) │\r\n│ -> Hash (cost=1.01..1.01 rows=1 width=4) (actual time=0.008..0.010\r\nrows=1 loops=1) │\r\n│ Buckets: 1024 Batches: 1 Memory Usage: 9kB\r\n │\r\n│ -> Seq Scan on test_b (cost=0.00..1.01 rows=1 width=4) (actual\r\ntime=0.003..0.004 rows=1 loops=1) │\r\n│ Planning Time: 0.186 ms\r\n │\r\n│ Execution Time: 28.334 ms\r\n │\r\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(8 rows)\r\n\r\nNow the query is almost 3 times faster. Probably this is a cost issue,\r\nbecause cost is very similar. With fake row I got better plan. But when I\r\ndisable hashjoin I got more expensive but better plan too\r\n\r\npostgres=# explain analyze select * from test_a where not exists(select *\r\nfrom test_b where test_a.id=test_b.id);\r\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Nested Loop Anti Join (cost=0.00..2944.01 rows=99999 width=4) (actual\r\ntime=0.100..47.360 rows=100000 loops=1) │\r\n│ Join Filter: (test_a.id = test_b.id)\r\n │\r\n│ Rows Removed by Join Filter: 100000\r\n │\r\n│ -> Seq Scan on test_a (cost=0.00..1443.00 rows=100000 width=4)\r\n(actual time=0.019..8.586 rows=100000 loops=1) │\r\n│ -> Materialize (cost=0.00..1.01 rows=1 width=4) (actual\r\ntime=0.000..0.000 rows=1 loops=100000) │\r\n│ -> Seq Scan on test_b (cost=0.00..1.01 rows=1 width=4) (actual\r\ntime=0.006..0.008 rows=1 loops=1) │\r\n│ Planning Time: 0.176 ms\r\n │\r\n│ Execution Time: 51.248 ms\r\n 
│\r\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(8 rows)\r\n\r\nOn empty table the Materialize node helps 50%",
"msg_date": "Fri, 30 Oct 2020 05:19:01 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "empty table blocks hash antijoin"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nPlease find attached my first patch (the tiniest of doc patches) :\nI found the wording around convalidated in pg_catalog documentation a bit\nconfusing/ambiguous.\n\nIf I'm doing something wrong, please let me know :)\n\nBest regards,\nJimmy\n\nJimmy Angelakos\nSenior PostgreSQL Architect\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/",
"msg_date": "Fri, 30 Oct 2020 13:47:04 +0000",
"msg_from": "Jimmy Angelakos <jimmy@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Clarify wording for convalidated in pg_constraint."
},
{
"msg_contents": "Jimmy Angelakos <jimmy@2ndquadrant.com> writes:\n> Please find attached my first patch (the tiniest of doc patches) :\n> I found the wording around convalidated in pg_catalog documentation a bit\n> confusing/ambiguous.\n\nSeems reasonable; pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Oct 2020 10:39:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Clarify wording for convalidated in pg_constraint."
}
] |
[
{
"msg_contents": "This is the other doc fix as suggested in\nhttps://www.postgresql.org/message-id/20201027220555.GS4951%40momjian.us\n\nThere is already a compatibility section, so put there.\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 30 Oct 2020 14:03:48 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "document deviation from standard on REVOKE ROLE"
},
{
"msg_contents": "On Fri, Oct 30, 2020 at 02:03:48PM -0400, John Naylor wrote:\n> This is the other doc fix as suggested in \n> https://www.postgresql.org/message-id/20201027220555.GS4951%40momjian.us\n> \n> There is already a compatibility section, so put there.\n> -- \n> John Naylor\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company \n\n> diff --git a/doc/src/sgml/ref/revoke.sgml b/doc/src/sgml/ref/revoke.sgml\n> index 35ff87a4f5..c8d5be92db 100644\n> --- a/doc/src/sgml/ref/revoke.sgml\n> +++ b/doc/src/sgml/ref/revoke.sgml\n> @@ -298,6 +298,12 @@ REVOKE admins FROM joe;\n> is required according to the standard, but <productname>PostgreSQL</productname>\n> assumes <literal>RESTRICT</literal> by default.\n> </para>\n> +\n> + <para>\n> + In the SQL standard, <command>REVOKE</command> only revokes the privilege\n> + as granted by the invoking role. In <productname>PostgreSQL</productname>,\n> + this will also revoke privileges granted by other roles.\n> + </para>\n> </refsect1>\n> \n> <refsect1>\n\nJohn, should this 2020 patch still be applied?\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 1 Nov 2023 19:40:17 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: document deviation from standard on REVOKE ROLE"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Fri, Oct 30, 2020 at 02:03:48PM -0400, John Naylor wrote:\n>> + In the SQL standard, <command>REVOKE</command> only revokes the privilege\n>> + as granted by the invoking role. In <productname>PostgreSQL</productname>,\n>> + this will also revoke privileges granted by other roles.\n\n> John, should this 2020 patch still be applied?\n\n[ raised eyebrow... ] I do not think that was ever true as written,\nand it's demonstrably not true now.\n\nregression=# create user alice;\nCREATE ROLE\nregression=# create user bob;\nCREATE ROLE\nregression=# create table subject (id int);\nCREATE TABLE\nregression=# grant select on table subject to alice with grant option;\nGRANT\nregression=# grant select on table subject to bob with grant option;\nGRANT\nregression=# \\c - alice\nYou are now connected to database \"regression\" as user \"alice\".\nregression=> grant select on table subject to public;\nGRANT\nregression=> \\c - bob\nYou are now connected to database \"regression\" as user \"bob\".\nregression=> grant select on table subject to public;\nGRANT\nregression=> \\dp subject\n Access privileges\n Schema | Name | Type | Access privileges | Column privileges | Policies \n--------+---------+-------+---------------------------+-------------------+----------\n public | subject | table | postgres=arwdDxt/postgres+| | \n | | | alice=r*/postgres +| | \n | | | bob=r*/postgres +| | \n | | | =r/alice +| | \n | | | =r/bob | | \n(1 row)\n\nregression=> revoke select on table subject from public;\nREVOKE\nregression=> \\dp subject\n Access privileges\n Schema | Name | Type | Access privileges | Column privileges | Policies \n--------+---------+-------+---------------------------+-------------------+----------\n public | subject | table | postgres=arwdDxt/postgres+| | \n | | | alice=r*/postgres +| | \n | | | bob=r*/postgres +| | \n | | | =r/alice | | \n(1 row)\n\nMaybe there's some related point that needs to be made,\nbut 
not that one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Nov 2023 19:49:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: document deviation from standard on REVOKE ROLE"
},
{
"msg_contents": "On Wed, Nov 1, 2023 at 07:49:25PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Fri, Oct 30, 2020 at 02:03:48PM -0400, John Naylor wrote:\n> >> + In the SQL standard, <command>REVOKE</command> only revokes the privilege\n> >> + as granted by the invoking role. In <productname>PostgreSQL</productname>,\n> >> + this will also revoke privileges granted by other roles.\n> \n> > John, should this 2020 patch still be applied?\n> \n> [ raised eyebrow... ] I do not think that was ever true as written,\n> and it's demonstrably not true now.\n..\n> Maybe there's some related point that needs to be made,\n> but not that one.\n\nCool, thanks, closed!\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 1 Nov 2023 19:50:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: document deviation from standard on REVOKE ROLE"
}
] |
[
{
"msg_contents": "Forking this thread, since the existing CFs have been closed.\nhttps://www.postgresql.org/message-id/flat/20200914143102.GX18552%40telsasoft.com#58b1056488451f8594b0f0ba40996afd\n\nThe strategy is to create catalog entries for all tables with indisvalid=false,\nand then process them like REINDEX CONCURRENTLY. If it's interrupted, it\nleaves INVALID indexes, which can be cleaned up with DROP or REINDEX, same as\nCIC on a plain table.\n\nOn Sat, Aug 08, 2020 at 01:37:44AM -0500, Justin Pryzby wrote:\n> On Mon, Jun 15, 2020 at 09:37:42PM +0900, Michael Paquier wrote:\n> > On Mon, Jun 15, 2020 at 08:15:05PM +0800, 李杰(慎追) wrote:\n> > > As shown above, an error occurred while creating an index in the second partition. \n> > > It can be clearly seen that the index of the partitioned table is invalid \n> > > and the index of the first partition is normal, the second partition is invalid, \n> > > and the Third Partition index does not exist at all.\n> > \n> > That's a problem. I really think that we should make the steps of the\n> > concurrent operation consistent across all relations, meaning that all\n> > the indexes should be created as invalid for all the parts of the\n> > partition tree, including partitioned tables as well as their\n> > partitions, in the same transaction. Then a second new transaction\n> > gets used for the index build, followed by a third one for the\n> > validation that switches the indexes to become valid.\n> \n> Note that the mentioned problem wasn't serious: there was missing index on\n> child table, therefor the parent index was invalid, as intended. 
However I\n> agree that it's not nice that the command can fail so easily and leave behind\n> some indexes created successfully and some failed some not created at all.\n> \n> But I took your advice initially creating invalid inds.\n...\n> That gave me the idea to layer CIC on top of Reindex, since I think it does\n> exactly what's needed.\n\nOn Sat, Sep 26, 2020 at 02:56:55PM -0500, Justin Pryzby wrote:\n> On Thu, Sep 24, 2020 at 05:11:03PM +0900, Michael Paquier wrote:\n> > It would be good also to check if\n> > we have a partition index tree that maps partially with a partition\n> > table tree (aka no all table partitions have a partition index), where\n> > these don't get clustered because there is no index to work on.\n> \n> This should not happen, since a incomplete partitioned index is \"invalid\".\n\n-- \nJustin",
"msg_date": "Sat, 31 Oct 2020 01:31:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Sat, Oct 31, 2020 at 01:31:17AM -0500, Justin Pryzby wrote:\n> Forking this thread, since the existing CFs have been closed.\n> https://www.postgresql.org/message-id/flat/20200914143102.GX18552%40telsasoft.com#58b1056488451f8594b0f0ba40996afd\n> \n> The strategy is to create catalog entries for all tables with indisvalid=false,\n> and then process them like REINDEX CONCURRENTLY. If it's interrupted, it\n> leaves INVALID indexes, which can be cleaned up with DROP or REINDEX, same as\n> CIC on a plain table.\n> \n> On Sat, Aug 08, 2020 at 01:37:44AM -0500, Justin Pryzby wrote:\n> > On Mon, Jun 15, 2020 at 09:37:42PM +0900, Michael Paquier wrote:\n> > > On Mon, Jun 15, 2020 at 08:15:05PM +0800, 李杰(慎追) wrote:\n> > > > As shown above, an error occurred while creating an index in the second partition. \n> > > > It can be clearly seen that the index of the partitioned table is invalid \n> > > > and the index of the first partition is normal, the second partition is invalid, \n> > > > and the Third Partition index does not exist at all.\n> > > \n> > > That's a problem. I really think that we should make the steps of the\n> > > concurrent operation consistent across all relations, meaning that all\n> > > the indexes should be created as invalid for all the parts of the\n> > > partition tree, including partitioned tables as well as their\n> > > partitions, in the same transaction. Then a second new transaction\n> > > gets used for the index build, followed by a third one for the\n> > > validation that switches the indexes to become valid.\n> > \n> > Note that the mentioned problem wasn't serious: there was missing index on\n> > child table, therefor the parent index was invalid, as intended. 
However I\n> > agree that it's not nice that the command can fail so easily and leave behind\n> > some indexes created successfully and some failed some not created at all.\n> > \n> > But I took your advice initially creating invalid inds.\n> ...\n> > That gave me the idea to layer CIC on top of Reindex, since I think it does\n> > exactly what's needed.\n> \n> On Sat, Sep 26, 2020 at 02:56:55PM -0500, Justin Pryzby wrote:\n> > On Thu, Sep 24, 2020 at 05:11:03PM +0900, Michael Paquier wrote:\n> > > It would be good also to check if\n> > > we have a partition index tree that maps partially with a partition\n> > > table tree (aka no all table partitions have a partition index), where\n> > > these don't get clustered because there is no index to work on.\n> > \n> > This should not happen, since a incomplete partitioned index is \"invalid\".\n\n@cfbot: rebased over recent changes to indexcmds.c\n\n-- \nJustin",
"msg_date": "Sun, 29 Nov 2020 14:22:21 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 5:22 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sat, Oct 31, 2020 at 01:31:17AM -0500, Justin Pryzby wrote:\n> > Forking this thread, since the existing CFs have been closed.\n> > https://www.postgresql.org/message-id/flat/20200914143102.GX18552%40telsasoft.com#58b1056488451f8594b0f0ba40996afd\n> >\n> > The strategy is to create catalog entries for all tables with indisvalid=false,\n> > and then process them like REINDEX CONCURRENTLY. If it's interrupted, it\n> > leaves INVALID indexes, which can be cleaned up with DROP or REINDEX, same as\n> > CIC on a plain table.\n> >\n> > On Sat, Aug 08, 2020 at 01:37:44AM -0500, Justin Pryzby wrote:\n> > > On Mon, Jun 15, 2020 at 09:37:42PM +0900, Michael Paquier wrote:\n> > > > On Mon, Jun 15, 2020 at 08:15:05PM +0800, 李杰(慎追) wrote:\n> > > > > As shown above, an error occurred while creating an index in the second partition.\n> > > > > It can be clearly seen that the index of the partitioned table is invalid\n> > > > > and the index of the first partition is normal, the second partition is invalid,\n> > > > > and the Third Partition index does not exist at all.\n> > > >\n> > > > That's a problem. I really think that we should make the steps of the\n> > > > concurrent operation consistent across all relations, meaning that all\n> > > > the indexes should be created as invalid for all the parts of the\n> > > > partition tree, including partitioned tables as well as their\n> > > > partitions, in the same transaction. Then a second new transaction\n> > > > gets used for the index build, followed by a third one for the\n> > > > validation that switches the indexes to become valid.\n> > >\n> > > Note that the mentioned problem wasn't serious: there was missing index on\n> > > child table, therefor the parent index was invalid, as intended. 
However I\n> > > agree that it's not nice that the command can fail so easily and leave behind\n> > > some indexes created successfully and some failed some not created at all.\n> > >\n> > > But I took your advice initially creating invalid inds.\n> > ...\n> > > That gave me the idea to layer CIC on top of Reindex, since I think it does\n> > > exactly what's needed.\n> >\n> > On Sat, Sep 26, 2020 at 02:56:55PM -0500, Justin Pryzby wrote:\n> > > On Thu, Sep 24, 2020 at 05:11:03PM +0900, Michael Paquier wrote:\n> > > > It would be good also to check if\n> > > > we have a partition index tree that maps partially with a partition\n> > > > table tree (aka no all table partitions have a partition index), where\n> > > > these don't get clustered because there is no index to work on.\n> > >\n> > > This should not happen, since a incomplete partitioned index is \"invalid\".\n>\n> @cfbot: rebased over recent changes to indexcmds.c\n\nStatus update for a commitfest entry.\n\nThis patch has not been updated and \"Waiting on Author\" status since\nNov 30. Are you still planning to work on this, Justin? If no, I'm\ngoing to set this entry to \"Returned with Feedback\" barring\nobjections.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 28 Jan 2021 21:51:51 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Thu, Jan 28, 2021 at 09:51:51PM +0900, Masahiko Sawada wrote:\n> On Mon, Nov 30, 2020 at 5:22 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Sat, Oct 31, 2020 at 01:31:17AM -0500, Justin Pryzby wrote:\n> > > Forking this thread, since the existing CFs have been closed.\n> > > https://www.postgresql.org/message-id/flat/20200914143102.GX18552%40telsasoft.com#58b1056488451f8594b0f0ba40996afd\n> > >\n> > > The strategy is to create catalog entries for all tables with indisvalid=false,\n> > > and then process them like REINDEX CONCURRENTLY. If it's interrupted, it\n> > > leaves INVALID indexes, which can be cleaned up with DROP or REINDEX, same as\n> > > CIC on a plain table.\n> > >\n> > > On Sat, Aug 08, 2020 at 01:37:44AM -0500, Justin Pryzby wrote:\n> > > > On Mon, Jun 15, 2020 at 09:37:42PM +0900, Michael Paquier wrote:\n> > > > Note that the mentioned problem wasn't serious: there was missing index on\n> > > > child table, therefor the parent index was invalid, as intended. 
However I\n> > > > agree that it's not nice that the command can fail so easily and leave behind\n> > > > some indexes created successfully and some failed some not created at all.\n> > > >\n> > > > But I took your advice initially creating invalid inds.\n> > > ...\n> > > > That gave me the idea to layer CIC on top of Reindex, since I think it does\n> > > > exactly what's needed.\n> > >\n> > > On Sat, Sep 26, 2020 at 02:56:55PM -0500, Justin Pryzby wrote:\n> > > > On Thu, Sep 24, 2020 at 05:11:03PM +0900, Michael Paquier wrote:\n> > > > > It would be good also to check if\n> > > > > we have a partition index tree that maps partially with a partition\n> > > > > table tree (aka no all table partitions have a partition index), where\n> > > > > these don't get clustered because there is no index to work on.\n> > > >\n> > > > This should not happen, since a incomplete partitioned index is \"invalid\".\n> >\n> > @cfbot: rebased over recent changes to indexcmds.c\n> \n> Status update for a commitfest entry.\n> \n> This patch has not been updated and \"Waiting on Author\" status since\n> Nov 30. Are you still planning to work on this, Justin? If no, I'm\n> going to set this entry to \"Returned with Feedback\" barring\n> objections.\n\nI had been waiting to rebase since there hasn't been any review comments and I\nexpected additional, future conflicts.\n\n-- \nJustin",
"msg_date": "Thu, 28 Jan 2021 08:30:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On 28.01.2021 17:30, Justin Pryzby wrote:\n> On Thu, Jan 28, 2021 at 09:51:51PM +0900, Masahiko Sawada wrote:\n>> On Mon, Nov 30, 2020 at 5:22 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>> On Sat, Oct 31, 2020 at 01:31:17AM -0500, Justin Pryzby wrote:\n>>>> Forking this thread, since the existing CFs have been closed.\n>>>> https://www.postgresql.org/message-id/flat/20200914143102.GX18552%40telsasoft.com#58b1056488451f8594b0f0ba40996afd\n>>>>\n>>>> The strategy is to create catalog entries for all tables with indisvalid=false,\n>>>> and then process them like REINDEX CONCURRENTLY. If it's interrupted, it\n>>>> leaves INVALID indexes, which can be cleaned up with DROP or REINDEX, same as\n>>>> CIC on a plain table.\n>>>>\n>>>> On Sat, Aug 08, 2020 at 01:37:44AM -0500, Justin Pryzby wrote:\n>>>>> On Mon, Jun 15, 2020 at 09:37:42PM +0900, Michael Paquier wrote:\n>>>>> Note that the mentioned problem wasn't serious: there was missing index on\n>>>>> child table, therefor the parent index was invalid, as intended. 
However I\n>>>>> agree that it's not nice that the command can fail so easily and leave behind\n>>>>> some indexes created successfully and some failed some not created at all.\n>>>>>\n>>>>> But I took your advice initially creating invalid inds.\n>>>> ...\n>>>>> That gave me the idea to layer CIC on top of Reindex, since I think it does\n>>>>> exactly what's needed.\n>>>> On Sat, Sep 26, 2020 at 02:56:55PM -0500, Justin Pryzby wrote:\n>>>>> On Thu, Sep 24, 2020 at 05:11:03PM +0900, Michael Paquier wrote:\n>>>>>> It would be good also to check if\n>>>>>> we have a partition index tree that maps partially with a partition\n>>>>>> table tree (aka no all table partitions have a partition index), where\n>>>>>> these don't get clustered because there is no index to work on.\n>>>>> This should not happen, since a incomplete partitioned index is \"invalid\".\n\n>>>>> I had been waiting to rebase since there hasn't been any review comments and I\n>>>>> expected additional, future conflicts.\n>>>>>\n\nI attempted to review this feature, but the last patch conflicts with \nthe recent refactoring, so I wasn't able to test it properly.\nCould you please send a new version?\n\nMeanwhile, here are my questions about the patch:\n\n1) I don't see a reason to change the logic here. We don't skip counting \nexisting indexes when create parent index. Why should we skip them in \nCONCURRENTLY mode?\n\n            // If concurrent, maybe this should be done after excluding \nindexes which already exist ?\npgstat_progress_update_param(PROGRESS_CREATEIDX_PARTITIONS_TOTAL,\n                                         nparts);\n\n2) Here we access relation field after closing the relation. Is it safe?\n\n    /* save lockrelid and locktag for below */\n    heaprelid = rel->rd_lockInfo.lockRelId;\n\n3) leaf_partitions() function only handles indexes, so I suggest to name \nit more specifically and add a comment about meaning of 'options' parameter.\n\n4) I don't quite understand the idea of the regression test. Why do we \nexpect to see invalid indexes there?\n+    \"idxpart_a_idx1\" UNIQUE, btree (a) INVALID\n\n5) Speaking of documentation, I think we need to add a paragraph about \nCIC on partitioned indexes which will explain that invalid indexes may \nappear and what user should do to fix them.\n\n6) ReindexIndexesConcurrently() needs some code cleanup.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 15 Feb 2021 22:06:47 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Mon, Feb 15, 2021 at 10:06:47PM +0300, Anastasia Lubennikova wrote:\n> On 28.01.2021 17:30, Justin Pryzby wrote:\n> > On Thu, Jan 28, 2021 at 09:51:51PM +0900, Masahiko Sawada wrote:\n> > > On Mon, Nov 30, 2020 at 5:22 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > On Sat, Oct 31, 2020 at 01:31:17AM -0500, Justin Pryzby wrote:\n> > > > > Forking this thread, since the existing CFs have been closed.\n> > > > > https://www.postgresql.org/message-id/flat/20200914143102.GX18552%40telsasoft.com#58b1056488451f8594b0f0ba40996afd\n> > > > > \n> > > > > The strategy is to create catalog entries for all tables with indisvalid=false,\n> > > > > and then process them like REINDEX CONCURRENTLY. If it's interrupted, it\n> > > > > leaves INVALID indexes, which can be cleaned up with DROP or REINDEX, same as\n> > > > > CIC on a plain table.\n> > > > > \n> > > > > On Sat, Aug 08, 2020 at 01:37:44AM -0500, Justin Pryzby wrote:\n> > > > > > On Mon, Jun 15, 2020 at 09:37:42PM +0900, Michael Paquier wrote:\n> > > > > > Note that the mentioned problem wasn't serious: there was missing index on\n> > > > > > child table, therefor the parent index was invalid, as intended. 
However I\n> > > > > > agree that it's not nice that the command can fail so easily and leave behind\n> > > > > > some indexes created successfully and some failed some not created at all.\n> > > > > > \n> > > > > > But I took your advice initially creating invalid inds.\n> > > > > ...\n> > > > > > That gave me the idea to layer CIC on top of Reindex, since I think it does\n> > > > > > exactly what's needed.\n> > > > > On Sat, Sep 26, 2020 at 02:56:55PM -0500, Justin Pryzby wrote:\n> > > > > > On Thu, Sep 24, 2020 at 05:11:03PM +0900, Michael Paquier wrote:\n> > > > > > > It would be good also to check if\n> > > > > > > we have a partition index tree that maps partially with a partition\n> > > > > > > table tree (aka no all table partitions have a partition index), where\n> > > > > > > these don't get clustered because there is no index to work on.\n> > > > > > This should not happen, since a incomplete partitioned index is \"invalid\".\n> \n> > > > > > I had been waiting to rebase since there hasn't been any review comments and I\n> > > > > > expected additional, future conflicts.\n> > > > > > \n> \n> I attempted to review this feature, but the last patch conflicts with the\n> recent refactoring, so I wasn't able to test it properly.\n> Could you please send a new version?\n\nI rebased this yesterday, so here's my latest.\n\n> 2) Here we access relation field after closing the relation. Is it safe?\n>     /* save lockrelid and locktag for below */\n>     heaprelid = rel->rd_lockInfo.lockRelId;\n\nThanks, fixed this just now.\n\n> 3) leaf_partitions() function only handles indexes, so I suggest to name it\n> more specifically and add a comment about meaning of 'options' parameter.\n> \n> 4) I don't quite understand the idea of the regression test. Why do we\n> expect to see invalid indexes there?\n> +    \"idxpart_a_idx1\" UNIQUE, btree (a) INVALID\n\nBecause of the unique failure:\n+create unique index concurrently on idxpart (a); -- partitioned, unique failure\n+ERROR: could not create unique index \"idxpart2_a_idx2_ccnew\"\n+DETAIL: Key (a)=(10) is duplicated.\n+\\d idxpart\n\nThis shows that CIC first creates catalog-only INVALID indexes, and then\nreindexes them to \"validate\".\n\n-- \nJustin",
"msg_date": "Mon, 15 Feb 2021 13:33:41 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Hi,\nFor v13-0006-More-refactoring.patch :\n\n+ /* It's not a shared catalog, so refuse to move it to shared tablespace\n*/\n+ if (params->tablespaceOid == GLOBALTABLESPACE_OID && false)\n+ ereport(ERROR,\n\nDo you intend to remove the ineffective check ?\n\n+ else\n+ heapRelation = table_open(heapId,\n+ ShareUpdateExclusiveLock);\n+ table_close(heapRelation, NoLock);\n\nThe table_open() seems to be unnecessary since there is no check after the\nopen.\n\n+ // heapRelationIds = list_make1_oid(heapId);\nIf the code is not needed, you can remove the above.\n\nFor v13-0005-Refactor-to-allow-reindexing-all-index-partition.patch :\n\n+ /* Skip invalid indexes, if requested */\n+ if ((options & REINDEXOPT_SKIPVALID) != 0 &&\n+ get_index_isvalid(partoid))\n\nThe comment seems to diverge from the name of the flag (which says skip\nvalid index).\n\nCheers\n\nOn Mon, Feb 15, 2021 at 11:34 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Feb 15, 2021 at 10:06:47PM +0300, Anastasia Lubennikova wrote:\n> > On 28.01.2021 17:30, Justin Pryzby wrote:\n> > > On Thu, Jan 28, 2021 at 09:51:51PM +0900, Masahiko Sawada wrote:\n> > > > On Mon, Nov 30, 2020 at 5:22 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > > > > On Sat, Oct 31, 2020 at 01:31:17AM -0500, Justin Pryzby wrote:\n> > > > > > Forking this thread, since the existing CFs have been closed.\n> > > > > >\n> https://www.postgresql.org/message-id/flat/20200914143102.GX18552%40telsasoft.com#58b1056488451f8594b0f0ba40996afd\n> > > > > >\n> > > > > > The strategy is to create catalog entries for all tables with\n> indisvalid=false,\n> > > > > > and then process them like REINDEX CONCURRENTLY. 
If it's\n> interrupted, it\n> > > > > > leaves INVALID indexes, which can be cleaned up with DROP or\n> REINDEX, same as\n> > > > > > CIC on a plain table.\n> > > > > >\n> > > > > > On Sat, Aug 08, 2020 at 01:37:44AM -0500, Justin Pryzby wrote:\n> > > > > > > On Mon, Jun 15, 2020 at 09:37:42PM +0900, Michael Paquier\n> wrote:\n> > > > > > > Note that the mentioned problem wasn't serious: there was\n> missing index on\n> > > > > > > child table, therefor the parent index was invalid, as\n> intended. However I\n> > > > > > > agree that it's not nice that the command can fail so easily\n> and leave behind\n> > > > > > > some indexes created successfully and some failed some not\n> created at all.\n> > > > > > >\n> > > > > > > But I took your advice initially creating invalid inds.\n> > > > > > ...\n> > > > > > > That gave me the idea to layer CIC on top of Reindex, since I\n> think it does\n> > > > > > > exactly what's needed.\n> > > > > > On Sat, Sep 26, 2020 at 02:56:55PM -0500, Justin Pryzby wrote:\n> > > > > > > On Thu, Sep 24, 2020 at 05:11:03PM +0900, Michael Paquier\n> wrote:\n> > > > > > > > It would be good also to check if\n> > > > > > > > we have a partition index tree that maps partially with a\n> partition\n> > > > > > > > table tree (aka no all table partitions have a partition\n> index), where\n> > > > > > > > these don't get clustered because there is no index to work\n> on.\n> > > > > > > This should not happen, since a incomplete partitioned index\n> is \"invalid\".\n> >\n> > > > > > > I had been waiting to rebase since there hasn't been any\n> review comments and I\n> > > > > > > expected additional, future conflicts.\n> > > > > > >\n> >\n> > I attempted to review this feature, but the last patch conflicts with the\n> > recent refactoring, so I wasn't able to test it properly.\n> > Could you please send a new version?\n>\n> I rebased this yesterday, so here's my latest.\n>\n> > 2) Here we access relation field after closing the relation. 
Is it safe?\n> > /* save lockrelid and locktag for below */\n> > heaprelid = rel->rd_lockInfo.lockRelId;\n>\n> Thanks, fixed this just now.\n>\n> > 3) leaf_partitions() function only handles indexes, so I suggest to name\n> it\n> > more specifically and add a comment about meaning of 'options' parameter.\n>\n> > 4) I don't quite understand the idea of the regression test. Why do we\n> > expect to see invalid indexes there?\n> > + \"idxpart_a_idx1\" UNIQUE, btree (a) INVALID\n>\n> Because of the unique failure:\n> +create unique index concurrently on idxpart (a); -- partitioned, unique\n> failure\n> +ERROR: could not create unique index \"idxpart2_a_idx2_ccnew\"\n> +DETAIL: Key (a)=(10) is duplicated.\n> +\\d idxpart\n>\n> This shows that CIC first creates catalog-only INVALID indexes, and then\n> reindexes them to \"validate\".\n>\n> --\n> Justin\n>",
"msg_date": "Mon, 15 Feb 2021 13:37:06 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Mon, Feb 15, 2021 at 10:07:05PM +0300, Anastasia Lubennikova wrote:\n> 5) Speaking of documentation, I think we need to add a paragraph about CIC\n> on partitioned indexes which will explain that invalid indexes may appear\n> and what user should do to fix them.\n\nI'm not sure about that - it's already documented in general, for\nnonpartitioned indexes.\n\n-- \nJustin",
"msg_date": "Fri, 26 Feb 2021 12:20:19 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Justin Pryzby wrote 2021-02-26 21:20:\n> On Mon, Feb 15, 2021 at 10:07:05PM +0300, Anastasia Lubennikova wrote:\n>> 5) Speaking of documentation, I think we need to add a paragraph about \n>> CIC\n>> on partitioned indexes which will explain that invalid indexes may \n>> appear\n>> and what user should do to fix them.\n> \n> I'm not sure about that - it's already documented in general, for\n> nonpartitioned indexes.\n\nHi.\n\nI've rebased the patches and tried to fix the issues I've seen. I've fixed a \nreference after table_close() in the first patch (can be seen while \nbuilding with CPPFLAGS='-DRELCACHE_FORCE_RELEASE'). It seems childidxs \nshouldn't live in ind_context, so I moved it out of it. Updated the \ndocumentation to state that CIC can leave invalid or valid indexes on \npartitions if it does not succeed. Also merged the old \n0002-f-progress-reporting.patch and \n0003-WIP-Add-SKIPVALID-flag-for-more-integration.patch. It seems the \nfirst one didn't really fix the issue with progress reporting (as \nReindexRelationConcurrently() uses pgstat_progress_start_command(), \nwhich seems to mess up the effect of this command in DefineIndex()). \nNote that the third patch completely removes attempts to report create \nindex progress correctly (reindex reports about individual commands, not \nthe whole CREATE INDEX).\n\nSo I've added 0003-Try-to-fix-create-index-progress-report.patch, which \ntries to fix the mess with create index progress reporting. It introduces a \nnew flag REINDEXOPT_REPORT_CREATE_PART to ReindexParams->options. Given \nthis flag, ReindexRelationConcurrently() will not report about \nindividual operations start/stop, but ReindexMultipleInternal() will \nreport about reindexed partitions. To make the issue worse, some \npartitions can be handled in ReindexPartitions() and \nReindexMultipleInternal() should know how many to correctly update the \nPROGRESS_CREATEIDX_PARTITIONS_DONE counter. 
Also it needs IndexOid to \ncorrectly generate pg_stat_progress_create_index record, so we pass \nthese parameters to it.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Thu, 10 Feb 2022 18:07:08 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Hi.\n\nI've added 0005-Mark-intermediate-partitioned-indexes-as-valid.patch, \nwhich fixes the following issue - when a partitioned index is created, \nindexes on intermediate partitioned tables were preserved in an invalid \nstate. Also added some more tests.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Wed, 02 Mar 2022 10:24:59 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "This patch is marked \"waiting on author\" in the CF. However the most\nrecent emails have patches and it's not clear to me what's left from\nprevious reviews that might not be addressed yet. Should this patch be\nmarked \"Needs Review\"?\n\nAnastasia and Alexander are marked as reviewers. Are you still able to\nreview it or are there still pending issues that need to be resolved\nfrom previous reviews?\n\n\n",
"msg_date": "Fri, 25 Mar 2022 01:05:49 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 01:05:49AM -0400, Greg Stark wrote:\n> This patch is marked \"waiting on author\" in the CF. However the most\n> recent emails have patches and it's not clear to me what's left from\n> previous reviews that might not be addressed yet. Should this patch be\n> marked \"Needs Review\"?\n> \n> Anastasia and Alexander are marked as reviewers. Are you still able to\n> review it or are there still pending issues that need to be resolved\n> from previous reviews?\n\nI still haven't responded to Alexander's feedback, so I need to do that.\n(Sorry).\n\nHowever, the patch attracted no attention for some 50 weeks last year, so\nnow is a weird time to shift attention to it. As such, I will move it to the\nnext CF.\n\nhttps://www.postgresql.org/message-id/flat/20210226182019.GU20769@telsasoft.com#da169a0a518bf8121604437d9ab053b3\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 25 Mar 2022 08:13:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Hi,\n\nOn Thu, Feb 10, 2022 at 06:07:08PM +0300, Alexander Pyhalov wrote:\n> I've rebased patches and tried to fix issues I've seen. I've fixed reference\n> after table_close() in the first patch (can be seen while building with\n> CPPFLAGS='-DRELCACHE_FORCE_RELEASE').\n\nThanks for finding that.\n\nThe patches other than 0001 are more experimental, and need someone to check if\nit's even a good approach to use, so I kept them separate from the essential\npatch.\n\nYour latest 0005 patch (mark intermediate partitioned indexes as valid) is\nprobably fixing a bug in my SKIPVALID patch, right ? I'm not sure whether the\nSKIPVALID patch should be merged into 0001, and I've been awaiting feedback on\nthe main patch before handling progress reporting.\n\nSorry for not responding sooner. The patch saw no activity for ~11 months so I\nwasn't prepared to pick it up in March, at least not without guidance from a\ncommitter.\n\nWould you want to take over this patch ? I wrote it following someone's\nquestion, but don't expect that I'd use the feature myself. I can help review\nit or try to clarify the organization of my existing patches (but still haven't\nmanaged to work my way through your amendments to my patches).\n\nThanks for caring about partitioned DDL ;)\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 28 Jun 2022 13:33:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Justin Pryzby wrote 2022-06-28 21:33:\n> Hi,\n> \n> On Thu, Feb 10, 2022 at 06:07:08PM +0300, Alexander Pyhalov wrote:\n>> I've rebased patches and tried to fix issues I've seen. I've fixed \n>> reference\n>> after table_close() in the first patch (can be seen while building \n>> with\n>> CPPFLAGS='-DRELCACHE_FORCE_RELEASE').\n> \n> Thanks for finding that.\n> \n> The patches other than 0001 are more experimental, and need someone to \n> check if\n> it's even a good approach to use, so I kept them separate from the \n> essential\n> patch.\n> \n> Your latest 0005 patch (mark intermediate partitioned indexes as valid) \n> is\n> probably fixing a bug in my SKIPVALID patch, right ? I'm not sure \n> whether the\n> SKIPVALID patch should be merged into 0001, and I've been awaiting \n> feedback on\n> the main patch before handling progress reporting.\n\nHi. I think it's more about fixing the ReindexPartitions-to-set-indisvalid \npatch, as\nwe also should mark intermediate indexes as valid when reindex succeeds.\n\n\n> Sorry for not responding sooner. The patch saw no activity for ~11 \n> months so I\n> wasn't prepared to pick it up in March, at least not without guidance \n> from a\n> committer.\n> \n> Would you want to take over this patch ? I wrote it following \n> someone's\n> question, but don't expect that I'd use the feature myself. I can help \n> review\n> it or try to clarify the organization of my existing patches (but still \n> haven't\n> managed to work my way through your amendments to my patches).\n> \n\nYes, I'm glad to work on the patches, as for us this is a very \nimportant feature.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Wed, 29 Jun 2022 09:45:49 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Justin Pryzby wrote 2022-06-28 21:33:\n> Hi,\n> \n> On Thu, Feb 10, 2022 at 06:07:08PM +0300, Alexander Pyhalov wrote:\n>> I've rebased patches and tried to fix issues I've seen. I've fixed \n>> reference\n>> after table_close() in the first patch (can be seen while building \n>> with\n>> CPPFLAGS='-DRELCACHE_FORCE_RELEASE').\n> \n\nRebased patches on the current master.\nThey still require proper review.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Wed, 29 Jun 2022 11:01:37 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "I finally found time to digest and integrate your changes into my local\nbranch. This fixes the three issues you reported: FORCE_RELEASE, the issue\nwith INVALID partitions (for which I adapted your patch into an\nearlier patch in my series), and progress reporting. And rebased.\n\n-- \nJustin",
"msg_date": "Sun, 20 Nov 2022 21:00:11 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Justin Pryzby wrote 2022-11-21 06:00:\n> I finally found time to digest and integrate your changes into my local\n> branch. This fixes the three issues you reported: FORCE_RELEASE, the issue\n> with INVALID partitions (for which I adapted your patch into an\n> earlier patch in my series), and progress reporting. And rebased.\n\nHi.\n\nThank you for the effort.\nI've looked through and tested the new patch a bit. Overall it looks good to \nme.\nThe question I have is whether we should update \npg_stat_progress_create_index in reindex_invalid_child_indexes(), when \nwe skip valid indexes?\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Mon, 21 Nov 2022 17:44:22 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Hi,\n\nThank you Justin and Alexander for working on this, I have reviewed and\ntested the latest patch, it works well, the problems mentioned\npreviously are all fixed. I like the idea of sharing code of reindex\nand index, but I have noticed some peculiarities as a user. \n\nThe reporting is somewhat confusing as it switches to reporting for\nreindex concurrently while building child indexes, this should be fixed\nwith the simple patch I have attached. Another thing that I have\nnoticed is that REINDEX, which is used under the hood, creates new\nindexes with suffix _ccnew, and if the index building fails, the\nindexes that could not be built will have the name with _ccnew suffix.\nThis can actually be seen in your test:\n\nERROR: could not create unique index \"idxpart2_a_idx2_ccnew\"\n\nI find it quite confusing and I don't think that this is the expected\nbehavior (if it is, I think it should be documented, like it is for\nREINDEX). As an example of problems that it might entail, DROP INDEX\nwill not drop all the invalid indexes in the inheritance tree, because\nit will leave _ccnew indexes in place, which is ok for reindex\nconcurrently, but that's not how C-I-C works now. I think that fixing\nthis problem requires some heavy code rewrite and I'm not quite sure\nhow to go about it, if you have any ideas, I will be happy to try them\nout.\n\n\nThanks,\nIlya",
"msg_date": "Sat, 03 Dec 2022 19:13:30 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Sat, Dec 03, 2022 at 07:13:30PM +0400, Ilya Gladyshev wrote:\n> Hi,\n> \n> Thank you Justin and Alexander for working on this, I have reviewed and\n> tested the latest patch, it works well, the problems mentioned\n> previously are all fixed. I like the idea of sharing code of reindex\n> and index, but I have noticed some peculiarities as a user.\n> \n> The reporting is somewhat confusing as it switches to reporting for\n> reindex concurrently while building child indexes, this should be fixed\n> with the simple patch I have attached. Another thing that I have\n> noticed is that REINDEX, which is used under the hood, creates new\n> indexes with suffix _ccnew, and if the index building fails, the\n> indexes that could not be built will have the name with _ccnew suffix.\n> This can actually be seen in your test:\n> \n> ERROR: could not create unique index \"idxpart2_a_idx2_ccnew\"\n\n> I find it quite confusing and I don't think that this is the expected\n> behavior (if it is, I think it should be documented, like it is for\n> REINDEX). As an example of problems that it might entail, DROP INDEX\n> will not drop all the invalid indexes in the inheritance tree, because\n> it will leave _ccnew indexes in place, which is ok for reindex\n> concurrently, but that's not how C-I-C works now. I think that fixing\n> this problem requires some heavy code rewrite and I'm not quite sure\n\nThis behavior is fixed. I re-factored and re-implemented it to use\nDefineIndex() for building indexes concurrently rather than reindexing.\nThat makes the patch smaller, actually, and has the added benefit of\nsplitting off the \"Concurrently\" part of DefineIndex() into a separate\nfunction.\n\nThis currently handles partitions with a loop around the whole CIC\nimplementation, which means that things like WaitForLockers() happen\nonce for each index, the same as REINDEX CONCURRENTLY on a partitioned\ntable. 
Contrast that with ReindexRelationConcurrently(), which handles\nall the indexes on a table in one pass by looping around indexes within\neach phase.\n\nBTW, it causes the patch to fail to apply in cfbot when you send an\nadditional (002) supplementary patch without including the original\n(001) patch. You can name it *.txt to avoid the issue.\nhttps://wiki.postgresql.org/wiki/Cfbot#Which_attachments_are_considered_to_be_patches.3F\n\nThanks for looking.\n\n-- \nJustin",
"msg_date": "Sun, 4 Dec 2022 13:09:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Sun, 2022-12-04 at 13:09 -0600, Justin Pryzby wrote:\n> \n> This behavior is fixed. I re-factored and re-implemented it to use\n> DefineIndex() for building indexes concurrently rather than\n> reindexing.\n> That makes the patch smaller, actually, and has the added benefit of\n> splitting off the \"Concurrently\" part of DefineIndex() into a\n> separate\n> function.\n\nNice, I think it turned out pretty concise. I played around with the\npatch quite a bit, didn't find any major problems, the only minor thing\nthat I can note is that we should skip the top parent index itself in\nthe loop not to increment the pg_stat counter, something like this:\n\ndiff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c\nindex cfab45b999..9049540b5b 100644\n--- a/src/backend/commands/indexcmds.c\n+++ b/src/backend/commands/indexcmds.c\n@@ -1515,6 +1515,9 @@ DefineIndex(Oid relationId,\n \t\t\tOid\t\t\tindrelid = lfirst_oid(lc);\n \t\t\tOid\t\t\ttabrelid = IndexGetRelation(indrelid, false);\n \n+\t\t\tif (indrelid == indexRelationId)\n+\t\t\t\tcontinue;\n+\n \t\t\tif (RELKIND_HAS_STORAGE(get_rel_relkind(indrelid)) &&\n \t\t\t\t!get_index_isvalid(indrelid))\n \t\t\t{\n> \n> BTW, it causes the patch to fail to apply in cfbot when you send an\n> additional (002) supplementary patch without including the original\n> (001) patch. You can name it *.txt to avoid the issue.\n> \n> https://wiki.postgresql.org/wiki/Cfbot#Which_attachments_are_considered_to_be_patches.3F\n> \n> Thanks for looking.\n> \nMy bad, didn't know about this, thanks for the link.\n\nOn a side note, I noticed that reindex behaviour is strange on\npartitioned tables, it doesn't mark partitioned tables as valid after\nreindexing children, as I could understand from the code and mailing\nlists, this is the intended behaviour, but I can't quite understand the\nrationale for it, do you know why it is done this way?\n\n\n",
"msg_date": "Fri, 09 Dec 2022 17:53:10 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Sun, Dec 04, 2022 at 01:09:35PM -0600, Justin Pryzby wrote:\n> This currently handles partitions with a loop around the whole CIC\n> implementation, which means that things like WaitForLockers() happen\n> once for each index, the same as REINDEX CONCURRENTLY on a partitioned\n> table. Contrast that with ReindexRelationConcurrently(), which handles\n> all the indexes on a table in one pass by looping around indexes within\n> each phase.\n\nRebased over the progress reporting fix (27f5c712b).\n\nI added a list of (intermediate) partitioned tables, rather than looping\nover the list of inheritors again, to save calling rel_get_relkind().\n\nI think this patch is done.\n\n-- \nJustin",
"msg_date": "Sun, 26 Mar 2023 09:51:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Justin Pryzby wrote 2023-03-26 17:51:\n> On Sun, Dec 04, 2022 at 01:09:35PM -0600, Justin Pryzby wrote:\n>> This currently handles partitions with a loop around the whole CIC\n>> implementation, which means that things like WaitForLockers() happen\n>> once for each index, the same as REINDEX CONCURRENTLY on a partitioned\n>> table. Contrast that with ReindexRelationConcurrently(), which \n>> handles\n>> all the indexes on a table in one pass by looping around indexes \n>> within\n>> each phase.\n> \n> Rebased over the progress reporting fix (27f5c712b).\n> \n> I added a list of (intermediate) partitioned tables, rather than \n> looping\n> over the list of inheritors again, to save calling rel_get_relkind().\n> \n> I think this patch is done.\n\nHi.\n\nOverall looks good to me. However, I think that using 'partitioned' as \nlist of partitioned index oids in DefineIndex() is a bit misleading - \nwe've just used it as boolean, specifying if we are dealing with a \npartitioned relation.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Mon, 27 Mar 2023 13:28:24 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 01:28:24PM +0300, Alexander Pyhalov wrote:\n> Justin Pryzby писал 2023-03-26 17:51:\n> > On Sun, Dec 04, 2022 at 01:09:35PM -0600, Justin Pryzby wrote:\n> > > This currently handles partitions with a loop around the whole CIC\n> > > implementation, which means that things like WaitForLockers() happen\n> > > once for each index, the same as REINDEX CONCURRENTLY on a partitioned\n> > > table. Contrast that with ReindexRelationConcurrently(), which handles\n> > > all the indexes on a table in one pass by looping around indexes within\n> > > each phase.\n> > \n> > Rebased over the progress reporting fix (27f5c712b).\n> > \n> > I added a list of (intermediate) partitioned tables, rather than looping\n> > over the list of inheritors again, to save calling rel_get_relkind().\n> > \n> > I think this patch is done.\n> \n> Overall looks good to me. However, I think that using 'partitioned' as list\n> of partitioned index oids in DefineIndex() is a bit misleading - we've just\n> used it as boolean, specifying if we are dealing with a partitioned\n> relation.\n\nRight. This is also rebased on 8c852ba9a4 (Allow some exclusion\nconstraints on partitions).\n\n-- \nJustin",
"msg_date": "Wed, 12 Jul 2023 21:27:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Justin Pryzby wrote 2023-07-13 05:27:\n> On Mon, Mar 27, 2023 at 01:28:24PM +0300, Alexander Pyhalov wrote:\n>> Justin Pryzby wrote 2023-03-26 17:51:\n>> > On Sun, Dec 04, 2022 at 01:09:35PM -0600, Justin Pryzby wrote:\n>> > > This currently handles partitions with a loop around the whole CIC\n>> > > implementation, which means that things like WaitForLockers() happen\n>> > > once for each index, the same as REINDEX CONCURRENTLY on a partitioned\n>> > > table. Contrast that with ReindexRelationConcurrently(), which handles\n>> > > all the indexes on a table in one pass by looping around indexes within\n>> > > each phase.\n>> >\n>> > Rebased over the progress reporting fix (27f5c712b).\n>> >\n>> > I added a list of (intermediate) partitioned tables, rather than looping\n>> > over the list of inheritors again, to save calling rel_get_relkind().\n>> >\n>> > I think this patch is done.\n>> \n>> Overall looks good to me. However, I think that using 'partitioned' as \n>> list\n>> of partitioned index oids in DefineIndex() is a bit misleading - we've \n>> just\n>> used it as boolean, specifying if we are dealing with a partitioned\n>> relation.\n> \n> Right. 
This is also rebased on 8c852ba9a4 (Allow some exclusion\n> constraints on partitions).\n\nHi.\nI have some more questions.\nIn the following code (indexcmds.c:1640 and later)\n\n1640    rel = table_open(relationId, ShareUpdateExclusiveLock);\n1641    heaprelid = rel->rd_lockInfo.lockRelId;\n1642    table_close(rel, ShareUpdateExclusiveLock);\n1643    SET_LOCKTAG_RELATION(heaplocktag, heaprelid.dbId, heaprelid.relId);\n\nshould we release ShareUpdateExclusiveLock before getting the session lock \nin DefineIndexConcurrentInternal()?\nAlso we unlock the parent table there between reindexing children at the end \nof DefineIndexConcurrentInternal():\n\n1875    /*\n1876     * Last thing to do is release the session-level lock on the parent table.\n1877     */\n1878    UnlockRelationIdForSession(&heaprelid, ShareUpdateExclusiveLock);\n1879    }\n\n\nIs it safe? Shouldn't we hold the session lock on the parent table while \nrebuilding child indexes?\n\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Thu, 13 Jul 2023 09:49:42 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Hi.\n\nI've rebased the patch on master and it seems to me there's one more issue -\n\nwhen we call DefineIndexConcurrentInternal() in partitioned case, it \nwaits for transactions, locking tableId, not tabrelid - heaprelid \nLockRelId is constructed for parent index relation, not for child index \nrelation.\n\nAttaching fixed version.\n\nAlso I'm not sure what to do with locking of child relations. If we \ndon't do anything, you can drop one of the partitioned table children \nwhile CIC is in progress, and get error\n\nERROR: cache lookup failed for index 16399\n\nIf you try to lock all child tables in CIC session, you'll get \ndeadlocks.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Mon, 29 Jan 2024 15:43:34 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Hi,\n\nI think it's well worth the effort to revive the patch, so I rebased it \non master, updated it and will return it back to the commitfest. \nAlexander, Justin feel free to add yourselves as authors.\n\nOn 29.01.2024 12:43, Alexander Pyhalov wrote:\n> Hi.\n>\n> I've rebased the patch on master and it seems to me there's one more issue -\n>\n> when we call DefineIndexConcurrentInternal() in partitioned case, it \n> waits for transactions, locking tableId, not tabrelid - heaprelid \n> LockRelId is constructed for parent index relation, not for child \n> index relation.\n>\n> Attaching fixed version.\n>\n> Also I'm not sure what to do with locking of child relations. If we \n> don't do anything, you can drop one of the partitioned table children \n> while CIC is in progress, and get error\n>\n> ERROR: cache lookup failed for index 16399\nI agree that we need to do something about it, in particular, I think we \nshould lock all the partitions inside the transaction that builds the \ncatalog entries. Fixed this in the new version.\n> If you try to lock all child tables in CIC session, you'll get deadlocks.\n\nDo you mean the deadlock between the transaction that drops a partition \nand the transaction doing CIC? I think this is unavoidable and can be \nreproduced even without partitioning.\n\nAlso, I'm not sure why the list of child relations was obtained with \nShareLock, which CIC is supposed to avoid so as not to block writes; \nchanged that to ShareUpdateExclusive.\n\nRegards,\n\nIlya",
"msg_date": "Thu, 23 May 2024 22:14:57 +0100",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Ilya Gladyshev писал(а) 2024-05-24 00:14:\n> Hi,\n\nHi.\n\n> \n> I think it's well worth the effort to revive the patch, so I rebased it \n> on master, updated it and will return it back to the commitfest. \n> Alexander, Justin feel free to add yourselves as authors\n> \n> On 29.01.2024 12:43, Alexander Pyhalov wrote:\n>> Hi.\n>> \n>> I've rebased patch on master and it'seems to me there's one more issue \n>> -\n>> \n>> when we call DefineIndexConcurrentInternal() in partitioned case, it \n>> waits for transactions, locking tableId, not tabrelid - heaprelid \n>> LockRelId is constructed for parent index relation, not for child \n>> index relation.\n>> \n>> Attaching fixed version.\n>> \n>> Also I'm not sure what to do with locking of child relations. If we \n>> don't do anything, you can drop one of the partitioned table childs \n>> while CIC is in progress, and get error\n>> \n>> ERROR: cache lookup failed for index 16399\n> I agree that we need to do something about it, in particular, I think \n> we should lock all the partitions inside the transaction that builds \n> the catalog entries. Fixed this in the new version.\n>> If you try to lock all child tables in CIC session, you'll get \n>> deadlocks.\n> \n> Do you mean the deadlock between the transaction that drops a partition \n> and the transaction doing CIC? I think this is unavoidable and can be \n> reproduced even without partitioning.\n\nYes, it seems we trade this error for possible deadlock between \ntransaction, dropping a partition, and CIC.\n\n> \n> Also not sure why a list of children relation was obtained with \n> ShareLock that CIC is supposed to avoid not to block writes, changed \n> that to ShareUpdateExclusive.\n> \n\nI expect that it wasn't an issue due to the fact that it's held for a \nbrief period until DefineIndexConcurrentInternal() commits for the first \ntime. 
But it seems, it's more correct to use ShareUpdateExclusive lock \nhere.\n\n\nAlso I'd like to note that in new patch version there's a strange \nwording in documentation:\n\n\"This can be very convenient as not only will all existing partitions be\n indexed, but any future partitions will be as well.\n <command>CREATE INDEX ... CONCURRENTLY</command> can incur long lock \ntimes\n on huge partitioned tables, to avoid that you can\n use <command>CREATE INDEX ON ONLY</command> the partitioned table, \nwhich\n creates the new index marked as invalid, preventing automatic \napplication\n to existing partitions.\"\n\nAll the point of CIC is to avoid long lock times. So it seems this \nparagraph should be rewritten in the following way:\n\n\"To avoid long lock times, you can use CREATE INDEX CONCURRENTLY or \nCREATE INDEX ON ONLY</command> the partitioned table...\"\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Fri, 24 May 2024 12:04:33 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On 24.05.2024 10:04, Alexander Pyhalov wrote:\n> Ilya Gladyshev писал(а) 2024-05-24 00:14:\n>> Hi,\n>\n> Hi.\n>\n>>\n>> I think it's well worth the effort to revive the patch, so I rebased \n>> it on master, updated it and will return it back to the commitfest. \n>> Alexander, Justin feel free to add yourselves as authors\n>>\n>> On 29.01.2024 12:43, Alexander Pyhalov wrote:\n>>> Hi.\n>>>\n>>> I've rebased patch on master and it'seems to me there's one more \n>>> issue -\n>>>\n>>> when we call DefineIndexConcurrentInternal() in partitioned case, it \n>>> waits for transactions, locking tableId, not tabrelid - heaprelid \n>>> LockRelId is constructed for parent index relation, not for child \n>>> index relation.\n>>>\n>>> Attaching fixed version.\n>>>\n>>> Also I'm not sure what to do with locking of child relations. If we \n>>> don't do anything, you can drop one of the partitioned table childs \n>>> while CIC is in progress, and get error\n>>>\n>>> ERROR: cache lookup failed for index 16399\n>> I agree that we need to do something about it, in particular, I think \n>> we should lock all the partitions inside the transaction that builds \n>> the catalog entries. Fixed this in the new version.\n>>> If you try to lock all child tables in CIC session, you'll get \n>>> deadlocks.\n>>\n>> Do you mean the deadlock between the transaction that drops a \n>> partition and the transaction doing CIC? I think this is unavoidable \n>> and can be reproduced even without partitioning.\n>\n> Yes, it seems we trade this error for possible deadlock between \n> transaction, dropping a partition, and CIC.\n>\n>>\n>> Also not sure why a list of children relation was obtained with \n>> ShareLock that CIC is supposed to avoid not to block writes, changed \n>> that to ShareUpdateExclusive.\n>>\n>\n> I expect that it wasn't an issue due to the fact that it's held for a \n> brief period until DefineIndexConcurrentInternal() commits for the \n> first time. 
But it seems, it's more correct to use \n> ShareUpdateExclusive lock here.\n>\n>\n> Also I'd like to note that in new patch version there's a strange \n> wording in documentation:\n>\n> \"This can be very convenient as not only will all existing partitions be\n> indexed, but any future partitions will be as well.\n> <command>CREATE INDEX ... CONCURRENTLY</command> can incur long lock \n> times\n> on huge partitioned tables, to avoid that you can\n> use <command>CREATE INDEX ON ONLY</command> the partitioned table, which\n> creates the new index marked as invalid, preventing automatic \n> application\n> to existing partitions.\"\n>\n> All the point of CIC is to avoid long lock times. So it seems this \n> paragraph should be rewritten in the following way:\n>\n> \"To avoid long lock times, you can use CREATE INDEX CONCURRENTLY or \n> CREATE INDEX ON ONLY</command> the partitioned table...\"\n\n\nTrue, the current wording doesn't look right. Right now CREATE INDEX ON \nONLY is described as a workaround for the missing CIC. I think it rather \nmakes sense to say that it gives more fine-grained control of partition \nlocking than both CIC and ordinary CREATE INDEX. See the updated patch.",
"msg_date": "Tue, 28 May 2024 00:52:15 +0100",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "Ilya Gladyshev писал(а) 2024-05-28 02:52:\n\n>> Also I'd like to note that in new patch version there's a strange \n>> wording in documentation:\n>> \n>> \"This can be very convenient as not only will all existing partitions \n>> be\n>> indexed, but any future partitions will be as well.\n>> <command>CREATE INDEX ... CONCURRENTLY</command> can incur long lock \n>> times\n>> on huge partitioned tables, to avoid that you can\n>> use <command>CREATE INDEX ON ONLY</command> the partitioned table, \n>> which\n>> creates the new index marked as invalid, preventing automatic \n>> application\n>> to existing partitions.\"\n>> \n>> All the point of CIC is to avoid long lock times. So it seems this \n>> paragraph should be rewritten in the following way:\n>> \n>> \"To avoid long lock times, you can use CREATE INDEX CONCURRENTLY or \n>> CREATE INDEX ON ONLY</command> the partitioned table...\"\n> \n> \n> True, the current wording doesn't look right. Right now CREATE INDEX ON \n> ONLY is described as a workaround for the missing CIC. I think it \n> rather makes sense to say that it gives more fine-grained control of \n> partition locking than both CIC and ordinary CREATE INDEX. See the \n> updated patch.\n\nHi.\n\nNot sure if it's worth removing mentioning of CIC in\n\n creates the new index marked as invalid, preventing automatic \napplication\n to existing partitions. Instead, indexes can then be created \nindividually\n- on each partition using <literal>CONCURRENTLY</literal> and\n+ on each partition and\n <firstterm>attached</firstterm> to the partitioned index on the \nparent\n using <command>ALTER INDEX ... ATTACH PARTITION</command>. Once \nindexes for\n all the partitions are attached to the parent index, the parent \nindex will\n\nbut at least now it looks better.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Tue, 28 May 2024 09:05:02 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On 28.05.2024 07:05, Alexander Pyhalov wrote:\n> Ilya Gladyshev писал(а) 2024-05-28 02:52:\n>\n>>> Also I'd like to note that in new patch version there's a strange \n>>> wording in documentation:\n>>>\n>>> \"This can be very convenient as not only will all existing \n>>> partitions be\n>>> indexed, but any future partitions will be as well.\n>>> <command>CREATE INDEX ... CONCURRENTLY</command> can incur long \n>>> lock times\n>>> on huge partitioned tables, to avoid that you can\n>>> use <command>CREATE INDEX ON ONLY</command> the partitioned table, \n>>> which\n>>> creates the new index marked as invalid, preventing automatic \n>>> application\n>>> to existing partitions.\"\n>>>\n>>> All the point of CIC is to avoid long lock times. So it seems this \n>>> paragraph should be rewritten in the following way:\n>>>\n>>> \"To avoid long lock times, you can use CREATE INDEX CONCURRENTLY or \n>>> CREATE INDEX ON ONLY</command> the partitioned table...\"\n>>\n>>\n>> True, the current wording doesn't look right. Right now CREATE INDEX \n>> ON ONLY is described as a workaround for the missing CIC. I think it \n>> rather makes sense to say that it gives more fine-grained control of \n>> partition locking than both CIC and ordinary CREATE INDEX. See the \n>> updated patch.\n>\n> Hi.\n>\n> Not sure if it's worth removing mentioning of CIC in\n>\n> creates the new index marked as invalid, preventing automatic \n> application\n> to existing partitions. Instead, indexes can then be created \n> individually\n> - on each partition using <literal>CONCURRENTLY</literal> and\n> + on each partition and\n> <firstterm>attached</firstterm> to the partitioned index on the \n> parent\n> using <command>ALTER INDEX ... ATTACH PARTITION</command>. 
Once \n> indexes for\n> all the partitions are attached to the parent index, the parent \n> index will\n>\n> but at least now it looks better.\n\nThe current patch version locks all the partitions in the first \ntransaction up until each of them is built, which makes for long lock \ntimes for partitions that are built last. Having looked at the \nimplementation of REINDEX CONCURRENTLY for partitioned tables, I think \nwe can improve this by using the same approach of just skipping the \nrelations that we find out are dropped when trying to lock them. \nIncidentally, this implementation in the new patch version is also simpler.\n\nIn addition, I noticed that progress tracking is once again broken for \npartitioned tables, while looking at REINDEX implementation, attaching \nthe second patch to fix it.",
"msg_date": "Sat, 15 Jun 2024 19:56:38 +0100",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Thu, May 23, 2024 at 10:14:57PM +0100, Ilya Gladyshev wrote:\n> Hi,\n> \n> I think it's well worth the effort to revive the patch, so I rebased it on\n> master, updated it and will return it back to the commitfest. Alexander,\n> Justin feel free to add yourselves as authors\n\nThanks -- I was intending to write about this.\n\nI realized that the patch will need some isolation tests to exercise its\nconcurrent behavior.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 15 Jun 2024 14:40:00 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On 15.06.2024 20:40, Justin Pryzby wrote:\n> On Thu, May 23, 2024 at 10:14:57PM +0100, Ilya Gladyshev wrote:\n>> Hi,\n>>\n>> I think it's well worth the effort to revive the patch, so I rebased it on\n>> master, updated it and will return it back to the commitfest. Alexander,\n>> Justin feel free to add yourselves as authors\n> Thanks -- I was intending to write about this.\n>\n> I realized that the patch will need some isolation tests to exercise its\n> concurrent behavior.\n\n\nThanks for the suggestion, added an isolation test that verifies \nbehaviour of partitioned CIC with simultaneous partition drop/detach \ngoing on. Also fixed some issues in the new patch that I found while \nwriting the test.",
"msg_date": "Thu, 20 Jun 2024 20:20:58 +0100",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Sat, Jun 15, 2024 at 07:56:38PM +0100, Ilya Gladyshev wrote:\n> In addition, I noticed that progress tracking is once again broken for\n> partitioned tables, while looking at REINDEX implementation, attaching the\n> second patch to fix it.\n\nThanks for the fixes, I started reviewing them but need some more time\nto digest.\n\nDo you mean that progress reporting is broken in master, for REINDEX, or\njust with this patch ?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 11 Jul 2024 07:31:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "It is broken in master, I just didn’t want to create a separate thread, but it can be fixed independently. As I remember, the problem is that progress is tracked for each table in the hierarchy as if the table is processed separately, without ever setting partitions_total and partitions_done counters.\n\n> 11 июля 2024 г., в 13:31, Justin Pryzby <pryzby@telsasoft.com> написал(а):\n> \n> On Sat, Jun 15, 2024 at 07:56:38PM +0100, Ilya Gladyshev wrote:\n>> In addition, I noticed that progress tracking is once again broken for\n>> partitioned tables, while looking at REINDEX implementation, attaching the\n>> second patch to fix it.\n> \n> Thanks for the fixes, I started reviewing them but need some more time\n> to digest.\n> \n> Do you mean that progress reporting is broken in master, for REINDEX, or\n> just with this patch ?\n> \n> -- \n> Justin\n\n\n\n",
"msg_date": "Thu, 11 Jul 2024 21:35:24 +0100",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Thu, Jul 11, 2024 at 09:35:24PM +0100, Ilya Gladyshev wrote:\n> It is broken in master, I just didn’t want to create a separate\n> thread, but it can be fixed independently. As I remember, the\n> problem is that progress is tracked for each table in the hierarchy\n> as if the table is processed separately, without ever setting\n> partitions_total and partitions_done counters.\n\nPlease let's move this point to its own thread and deal with it with\nan independent patch. Hiding that in a thread that's already quite\nlong is not a good idea. This needs proper review, and a separate\nthread with a good subject to describe the problem will attract a\nbetter audience to deal with the problem you are seeing.\n\nI was not paying much attention, until you've mentioned that this was\nan issue with HEAD.\n--\nMichael",
"msg_date": "Fri, 12 Jul 2024 09:01:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On 12.07.2024 01:01, Michael Paquier wrote:\n> Please let's move this point to its own thread and deal with it with\n> an independent patch. Hiding that in a thread that's already quite\n> long is not a good idea. This needs proper review, and a separate\n> thread with a good subject to describe the problem will attract a\n> better audience to deal with the problem you are seeing.\n>\n> I was not paying much attention, until you've mentioned that this was\n> an issue with HEAD.\n> --\n> Michael\n\nSure, created a separate thread [1]. Please disregard the second patch \nin this thread. Duplicating the last version of the relevant patch here \nto avoid any confusion.\n\n[1] \nhttps://www.postgresql.org/message-id/b72f2d89-820a-4fa2-9058-b155cf646f4f%40gmail.com",
"msg_date": "Fri, 12 Jul 2024 23:17:25 +0100",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
},
{
"msg_contents": "On Fri, Jul 12, 2024 at 11:17:25PM +0100, Ilya Gladyshev wrote:\n> Sure, created a separate thread [1]. Please disregard the second patch in\n> this thread. Duplicating the last version of the relevant patch here to\n> avoid any confusion.\n> \n> [1] https://www.postgresql.org/message-id/b72f2d89-820a-4fa2-9058-b155cf646f4f%40gmail.com\n\nThanks, will check that.\n--\nMichael",
"msg_date": "Mon, 15 Jul 2024 09:45:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX CONCURRENTLY on partitioned index"
}
] |
[
{
"msg_contents": "Hi,\n\nPer Coverity.\n\nIf test set->latch against NULL, is why it can be NULL.\nResetEvent can dereference NULL.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 31 Oct 2020 11:40:53 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Dereference before NULL check (src/backend/storage/ipc/latch.c)"
},
{
"msg_contents": "At Sat, 31 Oct 2020 11:40:53 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi,\n> \n> Per Coverity.\n> \n> If test set->latch against NULL, is why it can be NULL.\n> ResetEvent can dereference NULL.\n\nIf the returned event is WL_LATCH_SET, set->latch cannot be NULL. We\nshouldn't inadvertently ignore the unexpected or broken situation.We\ncould put Assert instead, but I think that we don't need do something\nhere at all since SIGSEGV would be raised at the right location.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 02 Nov 2020 09:49:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dereference before NULL check (src/backend/storage/ipc/latch.c)"
},
{
"msg_contents": "On Mon, Nov 2, 2020 at 1:49 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Sat, 31 Oct 2020 11:40:53 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in\n> > Per Coverity.\n> >\n> > If test set->latch against NULL, is why it can be NULL.\n> > ResetEvent can dereference NULL.\n>\n> If the returned event is WL_LATCH_SET, set->latch cannot be NULL. We\n> shouldn't inadvertently ignore the unexpected or broken situation.We\n> could put Assert instead, but I think that we don't need do something\n> here at all since SIGSEGV would be raised at the right location.\n\nHmm. I changed that to support set->latch == NULL, so that you can\nuse the long lived WES in the rare code paths that call WaitLatch()\nwithout a latch (for example the code I proposed at [1]). The Windows\nversion leaves the event handle of the most recently used latch in\nset->handles[n] (because AFAICS there is no way to have a \"hole\" in\nthe handles array). The event can fire while you are waiting on \"no\nlatch\". Perhaps it should be changed to\nResetEvent(set->handles[cur_event->pos + 1])?\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK1607VmtrDUHQXrsooU%3Dap4g4R2yaoByWOOA3m8xevUQ%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 2 Nov 2020 16:22:09 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dereference before NULL check (src/backend/storage/ipc/latch.c)"
},
{
"msg_contents": "At Mon, 2 Nov 2020 16:22:09 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Mon, Nov 2, 2020 at 1:49 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Sat, 31 Oct 2020 11:40:53 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in\n> > > Per Coverity.\n> > >\n> > > If test set->latch against NULL, is why it can be NULL.\n> > > ResetEvent can dereference NULL.\n> >\n> > If the returned event is WL_LATCH_SET, set->latch cannot be NULL. We\n> > shouldn't inadvertently ignore the unexpected or broken situation.We\n> > could put Assert instead, but I think that we don't need do something\n> > here at all since SIGSEGV would be raised at the right location.\n> \n> Hmm. I changed that to support set->latch == NULL, so that you can\n> use the long lived WES in the rare code paths that call WaitLatch()\n> without a latch (for example the code I proposed at [1]). The Windows\n\nOoo. We don't update epoll events in that case. Ok, I understand\nWL_LATCH_SET can fire while set->latch == NULL.\n\n(I was confused by WaitEventAdjust* asserts set->latch != NULL for\n WL_LATCH_SET. Shouldn't we move the check from ModifyWaitEvent() to\n WaitEventAdjust*()?))\n\n> version leaves the event handle of the most recently used latch in\n> set->handles[n] (because AFAICS there is no way to have a \"hole\" in\n> the handles array). The event can fire while you are waiting on \"no\n> latch\". Perhaps it should be changed to\n> ResetEvent(set->handles[cur_event->pos + 1])?\n> \n> [1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK1607VmtrDUHQXrsooU%3Dap4g4R2yaoByWOOA3m8xevUQ%40mail.gmail.com\n\nSeems right. Just doing that *seems* to work fine, but somehow I\ncannot build on Windows for now...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 02 Nov 2020 14:33:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dereference before NULL check (src/backend/storage/ipc/latch.c)"
},
{
"msg_contents": "At Mon, 02 Nov 2020 14:33:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 2 Nov 2020 16:22:09 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> > On Mon, Nov 2, 2020 at 1:49 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > At Sat, 31 Oct 2020 11:40:53 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in\n> > > > Per Coverity.\n> > > >\n> > > > If test set->latch against NULL, is why it can be NULL.\n> > > > ResetEvent can dereference NULL.\n> > >\n> > > If the returned event is WL_LATCH_SET, set->latch cannot be NULL. We\n> > > shouldn't inadvertently ignore the unexpected or broken situation.We\n> > > could put Assert instead, but I think that we don't need do something\n> > > here at all since SIGSEGV would be raised at the right location.\n> > \n> > Hmm. I changed that to support set->latch == NULL, so that you can\n> > use the long lived WES in the rare code paths that call WaitLatch()\n> > without a latch (for example the code I proposed at [1]). The Windows\n> \n> Ooo. We don't update epoll events in that case. Ok, I understand\n> WL_LATCH_SET can fire while set->latch == NULL.\n> \n> (I was confused by WaitEventAdjust* asserts set->latch != NULL for\n> WL_LATCH_SET. Isn't it better if we moved the check on latch from\n> ModifyWaitEvent() to WaitEventAdjust*()?))\n> \n> > version leaves the event handle of the most recently used latch in\n> > set->handles[n] (because AFAICS there is no way to have a \"hole\" in\n> > the handles array). The event can fire while you are waiting on \"no\n> > latch\". Perhaps it should be changed to\n> > ResetEvent(set->handles[cur_event->pos + 1])?\n> > \n> > [1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK1607VmtrDUHQXrsooU%3Dap4g4R2yaoByWOOA3m8xevUQ%40mail.gmail.com\n> \n> Seems right. 
Just doing that *seems* to work fine, but somehow I\n> cannot build on Windows for now...\n\nThat was caused by a leftover change on config_default.pl I made when\nI tried to enable NLS.\n\nI called SetLatch() during WaitLatch(NULL, ) but that doesn't fire\nWL_LATCH_SET event for me on Windows. (I got it fired on Linux..) On\nWindows, the latch is detected after exiting the WaitLatch()\ncall. Seems like MyLatch of waiter is different from\npeerPGPROC->procLatch. And... an update for Visual Studio broke my\nenvironment... I will investigate this further but everything feel\ncumbersome on Windows...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 02 Nov 2020 17:25:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dereference before NULL check (src/backend/storage/ipc/latch.c)"
},
{
"msg_contents": "Em seg., 2 de nov. de 2020 às 05:25, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> At Mon, 02 Nov 2020 14:33:40 +0900 (JST), Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote in\n> > At Mon, 2 Nov 2020 16:22:09 +1300, Thomas Munro <thomas.munro@gmail.com>\n> wrote in\n> > > On Mon, Nov 2, 2020 at 1:49 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > At Sat, 31 Oct 2020 11:40:53 -0300, Ranier Vilela <\n> ranier.vf@gmail.com> wrote in\n> > > > > Per Coverity.\n> > > > >\n> > > > > If test set->latch against NULL, is why it can be NULL.\n> > > > > ResetEvent can dereference NULL.\n> > > >\n> > > > If the returned event is WL_LATCH_SET, set->latch cannot be NULL. We\n> > > > shouldn't inadvertently ignore the unexpected or broken situation.We\n> > > > could put Assert instead, but I think that we don't need do something\n> > > > here at all since SIGSEGV would be raised at the right location.\n> > >\n> > > Hmm. I changed that to support set->latch == NULL, so that you can\n> > > use the long lived WES in the rare code paths that call WaitLatch()\n> > > without a latch (for example the code I proposed at [1]). The Windows\n> >\n> > Ooo. We don't update epoll events in that case. Ok, I understand\n> > WL_LATCH_SET can fire while set->latch == NULL.\n> >\n> > (I was confused by WaitEventAdjust* asserts set->latch != NULL for\n> > WL_LATCH_SET. Isn't it better if we moved the check on latch from\n> > ModifyWaitEvent() to WaitEventAdjust*()?))\n> >\n> > > version leaves the event handle of the most recently used latch in\n> > > set->handles[n] (because AFAICS there is no way to have a \"hole\" in\n> > > the handles array). The event can fire while you are waiting on \"no\n> > > latch\". 
Perhaps it should be changed to\n> > > ResetEvent(set->handles[cur_event->pos + 1])?\n> > >\n> > > [1]\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGK1607VmtrDUHQXrsooU%3Dap4g4R2yaoByWOOA3m8xevUQ%40mail.gmail.com\n> >\n> > Seems right. Just doing that *seems* to work fine, but somehow I\n> > cannot build on Windows for now...\n>\n> That was caused by a leftover change on config_default.pl I made when\n> I tried to enable NLS.\n>\n> I called SetLatch() during WaitLatch(NULL, ) but that doesn't fire\n> WL_LATCH_SET event for me on Windows. (I got it fired on Linux..) On\n> Windows, the latch is detected after exiting the WaitLatch()\n> call. Seems like MyLatch of waiter is different from\n> peerPGPROC->procLatch. And... an update for Visual Studio broke my\n> environment... I will investigate this further but everything feel\n> cumbersome on Windows...\n>\nI can build.\nvc_regress is it enough to test it?\n\nregards,\nRanier Vilela\n\nEm seg., 2 de nov. de 2020 às 05:25, Kyotaro Horiguchi <horikyota.ntt@gmail.com> escreveu:At Mon, 02 Nov 2020 14:33:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 2 Nov 2020 16:22:09 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> > On Mon, Nov 2, 2020 at 1:49 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > At Sat, 31 Oct 2020 11:40:53 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in\n> > > > Per Coverity.\n> > > >\n> > > > If test set->latch against NULL, is why it can be NULL.\n> > > > ResetEvent can dereference NULL.\n> > >\n> > > If the returned event is WL_LATCH_SET, set->latch cannot be NULL. We\n> > > shouldn't inadvertently ignore the unexpected or broken situation.We\n> > > could put Assert instead, but I think that we don't need do something\n> > > here at all since SIGSEGV would be raised at the right location.\n> > \n> > Hmm. 
I changed that to support set->latch == NULL, so that you can\n> > use the long lived WES in the rare code paths that call WaitLatch()\n> > without a latch (for example the code I proposed at [1]). The Windows\n> \n> Ooo. We don't update epoll events in that case. Ok, I understand\n> WL_LATCH_SET can fire while set->latch == NULL.\n> \n> (I was confused by WaitEventAdjust* asserts set->latch != NULL for\n> WL_LATCH_SET. Isn't it better if we moved the check on latch from\n> ModifyWaitEvent() to WaitEventAdjust*()?))\n> \n> > version leaves the event handle of the most recently used latch in\n> > set->handles[n] (because AFAICS there is no way to have a \"hole\" in\n> > the handles array). The event can fire while you are waiting on \"no\n> > latch\". Perhaps it should be changed to\n> > ResetEvent(set->handles[cur_event->pos + 1])?\n> > \n> > [1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK1607VmtrDUHQXrsooU%3Dap4g4R2yaoByWOOA3m8xevUQ%40mail.gmail.com\n> \n> Seems right. Just doing that *seems* to work fine, but somehow I\n> cannot build on Windows for now...\n\nThat was caused by a leftover change on config_default.pl I made when\nI tried to enable NLS.\n\nI called SetLatch() during WaitLatch(NULL, ) but that doesn't fire\nWL_LATCH_SET event for me on Windows. (I got it fired on Linux..) On\nWindows, the latch is detected after exiting the WaitLatch()\ncall. Seems like MyLatch of waiter is different from\npeerPGPROC->procLatch. And... an update for Visual Studio broke my\nenvironment... I will investigate this further but everything feel\ncumbersome on Windows...I can build.vc_regress is it enough to test it?regards,Ranier Vilela",
"msg_date": "Mon, 2 Nov 2020 08:31:57 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dereference before NULL check (src/backend/storage/ipc/latch.c)"
},
{
"msg_contents": "At Mon, 02 Nov 2020 17:25:04 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I called SetLatch() during WaitLatch(NULL, ) but that doesn't fire\n> WL_LATCH_SET event for me on Windows. (I got it fired on Linux..) On\n> Windows, the latch is detected after exiting the WaitLatch()\n> call. Seems like MyLatch of waiter is different from\n> peerPGPROC->procLatch. And... an update for Visual Studio broke my\n> environment... I will investigate this further but everything feel\n> cumbersome on Windows...\n\nI managed to reproduce the issue. FWIW the attached modifies\npg_backend_pid() to call \"WaitLatch(NULL,\" and\npg_terminate_backend(pid) to SetLatch() to the process latch of the\npid. (It's minunderstanding that I could reproduce this on Linux.)\n\nSession A:\n =# select pg_backend_pid(); -- sleeps for 10 seconds.\n\nSession B:\n =# select pg_terminate_backend(A-pid);\n\n[11628] LOG: server process (PID 14568) was terminated by exception 0xC0000005\n[11628] DETAIL: Failed process was running: select pg_backend_pid();\n[11628] HINT: See C include file \"ntstatus.h\" for a description of the hexadecimal value.\n[11628] LOG: terminating any other active server processes\n[2948] WARNING: terminating connection because of crash of another server process\n2\n\nWith the fix patch, it changes to:\n\n[16632] LOG: FALSE LATCH: 0000000000000000\n\nrebards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 02 Nov 2020 20:50:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dereference before NULL check (src/backend/storage/ipc/latch.c)"
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 12:50 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> With the fix patch, it changes to:\n>\n> [16632] LOG: FALSE LATCH: 0000000000000000\n\nNice repro. But is it OK to not reset the Win32 event in this case?\nDoes it still work correctly if you wait on the latch after that\nhappened, and perhaps after the PG latch is reset?\n\n\n",
"msg_date": "Tue, 3 Nov 2020 20:44:23 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dereference before NULL check (src/backend/storage/ipc/latch.c)"
},
{
"msg_contents": "At Tue, 3 Nov 2020 20:44:23 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Tue, Nov 3, 2020 at 12:50 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > With the fix patch, it changes to:\n> >\n> > [16632] LOG: FALSE LATCH: 0000000000000000\n> \n> Nice repo. But is it OK to not reset the Win32 event in this case?\n> Does it still work correctly if you wait on the latch after that\n> happened, and perhaps after the PG latch is reset?\n\nI'm not sure what the point of the question is, but the previous patch\ndoesn't omit resetting the Win32 event in that case. As in the other\nimplementations of the same function, it resets the trigger that woke\nup the process, since the trigger is no longer needed even if we are\nnot waiting on it.\n\nIf we call WaitLatch(OrSocket) that waits on the latch, it immediately\nreturns because the latch is set. If we called ResetLatch before the\nnext call to WaitLatch(), it correctly waits on a trigger to be\npulled.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 04 Nov 2020 10:09:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dereference before NULL check (src/backend/storage/ipc/latch.c)"
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 22:09, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> wrote:\n\n> At Tue, 3 Nov 2020 20:44:23 +1300, Thomas Munro <thomas.munro@gmail.com>\n> wrote in\n> > On Tue, Nov 3, 2020 at 12:50 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > With the fix patch, it changes to:\n> > >\n> > > [16632] LOG: FALSE LATCH: 0000000000000000\n> >\n> > Nice repo. But is it OK to not reset the Win32 event in this case?\n> > Does it still work correctly if you wait on the latch after that\n> > happened, and perhaps after the PG latch is reset?\n>\n> I'm not sure what is the point of the question, but the previous patch\n> doesn't omit resetting the Win32-event in that case. In the same way\n> with other implements of the same function, it resets the trigger that\n> woke up the process since the trigger is no longer needed even if we\n> are not waiting on it.\n>\n> If we call WaitLatch(OrSocket) that waits on the latch, it immediately\n> returns because the latch is set. If we called ResetLatch before the\n> next call to WaitLatch(), it correctly waits on a trigger to be\n> pulled.\n>\n+1\nThe patch for me is syntactically equal to the code changed and\navoids the dereference.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 4 Nov 2020 18:47:23 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dereference before NULL check (src/backend/storage/ipc/latch.c)"
},
{
"msg_contents": "On Thu, Nov 5, 2020 at 10:47 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> On Tue, Nov 3, 2020 at 22:09, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> If we call WaitLatch(OrSocket) that waits on the latch, it immediately\n>> returns because the latch is set. If we called ResetLatch before the\n>> next call to WaitLatch(), it correctly waits on a trigger to be\n>> pulled.\n>\n> +1\n> The patch for me is syntactically equal to the code changed and\n> avoids the dereference.\n\nThanks! Pushed.\n\n\n",
"msg_date": "Wed, 25 Nov 2020 18:00:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dereference before NULL check (src/backend/storage/ipc/latch.c)"
}
] |
[
{
"msg_contents": "Hi,\n\nPer Coverity.\n\nmake_ruledef function can dereference a NULL pointer (actions),\nif \"ev_qual\" is provided and \"actions\" does not exist.\n\nThe comment there is contradictory: \" /* these could be nulls */ \"\nBecause if \"ev_qual\" is not null, \"actions\" cannot be either.\n\nSolution proposed merely as a learning experience.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 31 Oct 2020 11:49:07 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Explicit NULL dereference (src/backend/utils/adt/ruleutils.c)"
},
{
"msg_contents": "At Sat, 31 Oct 2020 11:49:07 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Per Coverity.\n> \n> make_ruledef function can dereference a NULL pointer (actions),\n> if \"ev_qual\" is provided and \"actions\" does not exist.\n> \n> The comment there is contradictory: \" /* these could be nulls */ \"\n> Because if \"ev_qual\" is not null, \"actions\" cannot be either.\n> \n> Solution proposed merely as a learning experience.\n\nWe cannot reach there with ev_action == NULL since it comes from a\nnon-nullable column. Since most of the other columns have an assertion\nthat !isnull, I think we should do the same thing for ev_action (and\nev_qual). SPI_getvalue() returns C-NULL for SQL-NULL (or for some\nother unexpected situations).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 02 Nov 2020 10:36:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Explicit NULL dereference (src/backend/utils/adt/ruleutils.c)"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> We cannot reach there with ev_action == NULL since it comes from a\n> non-nullable column. Since most of the other columns has an assertion\n> that !isnull, I think we should do the same thing for ev_action (and\n> ev_qual). SPI_getvalue() returns C-NULL for SQL-NULL (or for some\n> other unexpected situations.).\n\nIsn't the comment just above there wrong?\n\n \t/* these could be nulls */\n\nI wonder just when that became outdated.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Nov 2020 21:05:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Explicit NULL dereference (src/backend/utils/adt/ruleutils.c)"
},
{
"msg_contents": "At Mon, 02 Nov 2020 10:36:10 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Sat, 31 Oct 2020 11:49:07 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> > Per Coverity.\n> > \n> > make_ruledef function can dereference a NULL pointer (actions),\n> > if \"ev_qual\" is provided and \"actions\" does not exist.\n> > \n> > The comment there is contradictory: \" /* these could be nulls */ \"\n> > Because if \"ev_qual\" is not null, \"actions\" cannot be either.\n> > \n> > Solution proposed merely as a learning experience.\n> \n> We cannot reach there with ev_action == NULL since it comes from a\n> non-nullable column. Since most of the other columns has an assertion\n> that !isnull, I think we should do the same thing for ev_action (and\n> ev_qual). SPI_getvalue() returns C-NULL for SQL-NULL (or for some\n> other unexpected situations.).\n\nThe following code is found there, since 1998. (15cb32d93e)\n\n>\t/* If the rule has an event qualification, add it */\n>\tif (ev_qual == NULL)\n>\t\tev_qual = \"\";\n\nThe problem code here was written as the follows.\n\n+ fno = SPI_fnumber(rulettc, \"is_instead\");\n+ is_instead = (bool)SPI_getbinval(ruletup, rulettc, fno, &isnull);\n+\n+ fno = SPI_fnumber(rulettc, \"ev_qual\");\n+ ev_qual = SPI_getvalue(ruletup, rulettc, fno);\n+ if (isnull) ev_qual = NULL;\n+\n+ fno = SPI_fnumber(rulettc, \"ev_action\");\n+ ev_action = SPI_getvalue(ruletup, rulettc, fno);\n+ if (isnull) ev_action = NULL;\n+ if (ev_action != NULL) {\n+ actions = (List *)stringToNode(ev_action);\n+ }\n\nI'm not sure what the code means by just reading there but at least it\nseems impossible for the current code to return NULL for legit values.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 02 Nov 2020 11:19:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Explicit NULL dereference (src/backend/utils/adt/ruleutils.c)"
},
{
"msg_contents": "At Sun, 01 Nov 2020 21:05:29 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > We cannot reach there with ev_action == NULL since it comes from a\n> > non-nullable column. Since most of the other columns has an assertion\n> > that !isnull, I think we should do the same thing for ev_action (and\n> > ev_qual). SPI_getvalue() returns C-NULL for SQL-NULL (or for some\n> > other unexpected situations.).\n> \n> Isn't the comment just above there wrong?\n> \n> \t/* these could be nulls */\n> \n> I wonder just when that became outdated.\n\nMmm. I investigated that.\n\nAt the very beginning of CREATE RULE (d31084e9d1, 1996), InsertRule()\ndid the following.\n\n> template = \"INSERT INTO pg_rewrite \\\n>(rulename, ev_type, ev_class, ev_attr, action, ev_qual, is_instead) VALUES \\\n>('%s', %d::char, %d::oid, %d::int2, '%s'::text, '%s'::text, \\\n> '%s'::bool);\";\n> if (strlen(template) + strlen(rulname) + strlen(actionbuf) +\n>\tstrlen(qualbuf) + 20 /* fudge fac */ > RULE_PLAN_SIZE) {\n>\telog(WARN, \"DefineQueryRewrite: rule plan string too big.\");\n> }\n> sprintf(rulebuf, template,\n>\t rulname, evtype, eventrel_oid, evslot_index, actionbuf,\n>\t qualbuf, is_instead);\n\nIt doesn't seem that ev_qual and ev_action can be NULL. The same\nfunction currently converts the action list to a string using\nnodeToString, so NIL is converted into '<>', which is not NULL.\n\nSo I think ev_action cannot have been null at any point in its history\nunless the column is modified manually. ev_qual and ev_action are\nmarked as non-nullable (9b39b799db, in 2018). They could be null if we\nmade those columns nullable and then set them to NULL, but that could\nhappen to all other columns in the pg_rewrite catalog, which are\nAssert(!null)ed.\n\nev_action cannot be an empty list using the SQL interface, so we could\nget rid of the case list_length(action) == 0, but I'm not sure it's\nworth doing (but the attached does..).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 02 Nov 2020 13:36:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Explicit NULL dereference (src/backend/utils/adt/ruleutils.c)"
},
{
"msg_contents": "On Mon, Nov 2, 2020 at 01:36, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> wrote:\n\n> At Sun, 01 Nov 2020 21:05:29 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > We cannot reach there with ev_action == NULL since it comes from a\n> > > non-nullable column. Since most of the other columns has an assertion\n> > > that !isnull, I think we should do the same thing for ev_action (and\n> > > ev_qual). SPI_getvalue() returns C-NULL for SQL-NULL (or for some\n> > > other unexpected situations.).\n> >\n> > Isn't the comment just above there wrong?\n> >\n> > /* these could be nulls */\n> >\n> > I wonder just when that became outdated.\n>\n> Mmm. I investigated that.\n>\n> At the very beginning of CREATE RULE (d31084e9d1, 1996), InsertRule()\n> did the following.\n>\n> > template = \"INSERT INTO pg_rewrite \\\n> >(rulename, ev_type, ev_class, ev_attr, action, ev_qual, is_instead)\n> VALUES \\\n> >('%s', %d::char, %d::oid, %d::int2, '%s'::text, '%s'::text, \\\n> > '%s'::bool);\";\n> > if (strlen(template) + strlen(rulname) + strlen(actionbuf) +\n> > strlen(qualbuf) + 20 /* fudge fac */ > RULE_PLAN_SIZE) {\n> > elog(WARN, \"DefineQueryRewrite: rule plan string too big.\");\n> > }\n> > sprintf(rulebuf, template,\n> > rulname, evtype, eventrel_oid, evslot_index, actionbuf,\n> > qualbuf, is_instead);\n>\n> Doesn't seem that ev_qual and ev_action can be NULL. The same\n> function in the current converts action list to string using\n> nodeToSTring so NIL is converted into '<>', which is not NULL.\n>\n> So I think ev_action cannot be null from the beginning of the history\n> unless the columns is modified manually. ev_qual and ev_action are\n> marked as non-nullable (9b39b799db, in 2018). They could be null if we\n> modified that columns nullable then set NULL, but that could happen on\n> all other columns in pg_rewite catalog, which are Assert(!null)ed.\n>\n> Although ev_action cannot be a empty list using SQL interface. So we\n> can get rid of the case list_length(action) == 0, but I'm not sure\n> it's worth doing (but the attaches does..).\n>\nI think that Assert is not the right solution here.\n\nFor a function that can return NULL in more than one situation\n(SPI_getvalue), it is worth testing the result against NULL.\nIn the future, any modification may cause a further dereference.\nIn addition, static analysis tools would continue to flag this snippet\neither as a bug or as a suspect.\n\nChecking the \"actions\" pointer against NULL, and acting appropriately,\nwould do.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 2 Nov 2020 08:25:27 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Explicit NULL dereference (src/backend/utils/adt/ruleutils.c)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em seg., 2 de nov. de 2020 às 01:36, Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> escreveu:\n>> Doesn't seem that ev_qual and ev_action can be NULL. The same\n>> function in the current converts action list to string using\n>> nodeToSTring so NIL is converted into '<>', which is not NULL.\n\n> I think that Assert is not the right solution here.\n\nI think there's some confusion here: whether the ev_actions column can\ncontain a SQL NULL is a very different thing from whether the result of\nstringToNode() on it can be a NIL list. The latter should also not\nhappen, but it's not enforced by low-level code in the same way that\nthe NOT NULL property is. So in my judgment, it's okay for us to just\nAssert that we got a not-null datum, but it's probably worth expending\nan actual test-and-elog for the NIL-list case.\n\nOf course, someone could put a string into that column that doesn't\nread out as a List at all, or it does but the elements aren't Query\nnodes, etc etc. There's a very finite limit to how much code I'm\nwilling to expend on such scenarios. But a test for NIL list seems\nreasonable, since this function previously had what looked like sane\nhandling for that case.\n\nAnyway, I adjusted Kyotaro-san's patch a bit (including fixing the\nnear identical code in make_viewdef) and pushed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Nov 2020 14:49:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Explicit NULL dereference (src/backend/utils/adt/ruleutils.c)"
},
{
"msg_contents": "At Mon, 02 Nov 2020 14:49:50 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Em seg., 2 de nov. de 2020 às 01:36, Kyotaro Horiguchi <\n> > horikyota.ntt@gmail.com> escreveu:\n> >> Doesn't seem that ev_qual and ev_action can be NULL. The same\n> >> function in the current converts action list to string using\n> >> nodeToSTring so NIL is converted into '<>', which is not NULL.\n> \n> > I think that Assert is not the right solution here.\n> \n> I think there's some confusion here: whether the ev_actions column can\n> contain a SQL NULL is a very different thing from whether the result of\n> stringToNode() on it can be a NIL list. The latter should also not\n> happen, but it's not enforced by low-level code in the same way that\n> the NOT NULL property is. So in my judgment, it's okay for us to just\n> Assert that we got a not-null datum, but it's probably worth expending\n> an actual test-and-elog for the NIL-list case.\n> \n> Of course, someone could put a string into that column that doesn't\n> read out as a List at all, or it does but the elements aren't Query\n> nodes, etc etc. There's a very finite limit to how much code I'm\n> willing to expend on such scenarios. But a test for NIL list seems\n> reasonable, since this function previously had what looked like sane\n> handling for that case.\n> \n> Anyway, I adjusted Kyotaro-san's patch a bit (including fixing the\n> near identical code in make_viewdef) and pushed it.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 04 Nov 2020 10:41:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Explicit NULL dereference (src/backend/utils/adt/ruleutils.c)"
}
] |
[
{
"msg_contents": "For my patch https://commitfest.postgresql.org/30/2522/\n\nWhen I run\n\nmake -j4 all contrib && make check-world\nlocally\n\nI see 2 errors.\n\nWhen cf-bot runs this it sees\n35 out of 93 failed.\n\nHow can I see the same errors?\n\nDave Cramer",
"msg_date": "Sun, 1 Nov 2020 07:58:06 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "how to replicate test results in cf-bot on travis"
},
{
"msg_contents": "On Mon, Nov 2, 2020 at 1:58 AM Dave Cramer <davecramer@gmail.com> wrote:\n> For my patch https://commitfest.postgresql.org/30/2522/\n>\n> When I run\n>\n> make -j4 all contrib && make check-world\n> locally\n>\n> I see 2 errors.\n>\n> When cf-bot runs this it sees\n> 35 out of 93 failed.\n>\n> How can I see the same errors?\n\nHi Dave,\nI applied your patch here and see the same 35 errors from\nsrc/test/isolation, make check as cfbot reports. Is it possible that\nyou forgot to add the changes under that directory when posting the\npatch to the list?\n\n\n",
"msg_date": "Mon, 2 Nov 2020 07:41:15 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: how to replicate test results in cf-bot on travis"
},
{
"msg_contents": "On Sun., Nov. 1, 2020, 1:41 p.m. Thomas Munro, <thomas.munro@gmail.com>\nwrote:\n\n> On Mon, Nov 2, 2020 at 1:58 AM Dave Cramer <davecramer@gmail.com> wrote:\n> > For my patch https://commitfest.postgresql.org/30/2522/\n> >\n> > When I run\n> >\n> > make -j4 all contrib && make check-world\n> > locally\n> >\n> > I see 2 errors.\n> >\n> > When cf-bot runs this it sees\n> > 35 out of 93 failed.\n> >\n> > How can I see the same errors?\n>\n> Hi Dave,\n> I applied your patch here and see the same 35 errors from\n> src/test/isolation, make check as cfbot reports. Is it possible that\n> you forgot to add the changes under that directory when posting the\n> patch to the list?\n>\n\n\nI rebased my branch and tried it. I'll try again\n\nThx",
"msg_date": "Sun, 1 Nov 2020 13:46:39 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: how to replicate test results in cf-bot on travis"
},
{
"msg_contents": "On Sun, 1 Nov 2020 at 13:46, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> On Sun., Nov. 1, 2020, 1:41 p.m. Thomas Munro, <thomas.munro@gmail.com>\n> wrote:\n>\n>> On Mon, Nov 2, 2020 at 1:58 AM Dave Cramer <davecramer@gmail.com> wrote:\n>> > For my patch https://commitfest.postgresql.org/30/2522/\n>> >\n>> > When I run\n>> >\n>> > make -j4 all contrib && make check-world\n>> > locally\n>> >\n>> > I see 2 errors.\n>> >\n>> > When cf-bot runs this it sees\n>> > 35 out of 93 failed.\n>> >\n>> > How can I see the same errors?\n>>\n>> Hi Dave,\n>> I applied your patch here and see the same 35 errors from\n>> src/test/isolation, make check as cfbot reports. Is it possible that\n>> you forgot to add the changes under that directory when posting the\n>> patch to the list?\n>>\n>\n>\n> I rebased my branch and tried it. I'll try again\n>\n> Thx\n>\n\nOK, checked and I definitely have the changes. I don't think the isolation\ntest is running. Is there some configuration that enables it?\n\nDave",
"msg_date": "Sun, 1 Nov 2020 18:02:19 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: how to replicate test results in cf-bot on travis"
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> OK, checked and I definitely have the changes. I don't think the isolation\n> test is running. Is there some configuration that enables it?\n\nNo, top-level \"check-world\" should invoke that ... but if you're unsure,\nyou could cd to src/test/isolation and run check or installcheck there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Nov 2020 18:15:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: how to replicate test results in cf-bot on travis"
},
{
"msg_contents": "On Sun, 1 Nov 2020 at 18:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@gmail.com> writes:\n> > OK, checked and I definitely have the changes. I don't think the\n> isolation\n> > test is running. Is there some configuration that enables it?\n>\n> No, top-level \"check-world\" should invoke that ... but if you're unsure,\n> you could cd to src/test/isolation and run check or installcheck there.\n>\n>\nFor some reason it is not running on my machine but the above works. Thanks\n\nDave Cramer",
"msg_date": "Mon, 2 Nov 2020 11:18:03 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: how to replicate test results in cf-bot on travis"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-02 11:18:03 -0500, Dave Cramer wrote:\n> On Sun, 1 Nov 2020 at 18:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Dave Cramer <davecramer@gmail.com> writes:\n> > > OK, checked and I definitely have the changes. I don't think the\n> > isolation\n> > > test is running. Is there some configuration that enables it?\n> >\n> > No, top-level \"check-world\" should invoke that ... but if you're unsure,\n> > you could cd to src/test/isolation and run check or installcheck there.\n> >\n> >\n> For some reason it is not running on my machine but the above works. Thanks\n\nIf there are failures - you mentioned that there are some - check-world\nwill not even start the isolation tests. You could use -k to continue\nafter failures, but that makes it harder to see the errors.\n\nWith -j xx it'll be a bit scheduling dependent whether you'd e.g. see\nthe isolation test results if e.g. the \"main\" tests fail.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 2 Nov 2020 11:47:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: how to replicate test results in cf-bot on travis"
},
{
"msg_contents": "On Mon, 2 Nov 2020 at 14:47, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-11-02 11:18:03 -0500, Dave Cramer wrote:\n> > On Sun, 1 Nov 2020 at 18:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > > Dave Cramer <davecramer@gmail.com> writes:\n> > > > OK, checked and I definitely have the changes. I don't think the\n> > > isolation\n> > > > test is running. Is there some configuration that enables it?\n> > >\n> > > No, top-level \"check-world\" should invoke that ... but if you're\n> unsure,\n> > > you could cd to src/test/isolation and run check or installcheck there.\n> > >\n> > >\n> > For some reason it is not running on my machine but the above works.\n> Thanks\n>\n> If there are failures - you mentioned that there are some - check-world\n> will not even start the isolation tests. You could use -k, to contuniue\n> after failures, but that makes it harder to see the erors.\n>\n> With -j xx it'll be a bit scheduling dependant whether you'd e.g. see\n> the isolation test results if e.g. the \"main\" tests fail.\n>\n>\nAh, that makes sense now.\n\nThanks,\n\n\nDave Cramer",
"msg_date": "Mon, 2 Nov 2020 15:46:05 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: how to replicate test results in cf-bot on travis"
}
] |
[
{
"msg_contents": "Over in [1] I noted a need for fmgroids.h macros for pg_proc entries\nthat overload a particular C function. The F_XXX macros seem to be\ndesigned on the assumption that the only thing they're needed for\nis to call the function, so you don't really care which SQL alias\nyou use to get there. That's unhelpful though when you are concerned\nwith the SQL-level function more than the C function. I seem to recall\nthat we've run into this before, but up to now have been able to work\naround it.\n\nA narrow solution would be to revert the policy we just established\n(in commit 36b931214) against allowing oid_symbol entries in\npg_proc.dat, and just manually label such entries when we need to refer\nto them in the C code. I don't especially care for that though, for\nthe reasons cited in 36b931214 --- it'll lead to a lot of random\nvariation in how people refer to pg_proc entries, rather than having\na uniform convention.\n\nI experimented with generating the fmgroids.h macros by using, not\nthe prosrc field, but the concatenation of the proname and proargtypes\nfields. That's guaranteed unique within pg_proc, but it would have\nresulted in several hundred required changes in our source tree,\nbecause of common references to F_OIDEQ and suchlike. But I discovered\nthat we could alter the rule to be \"append proargtypes only if proname\nis not unique\", and that eliminates the pain almost completely: only\ntwo existing code references need to change.\n\nAs an additional benefit, we can generate macros for all the bootstrap\npg_proc entries, not only ones that are for C functions; that fixes\nanother issue noted in [1].\n\nHence, I propose the attached. Any objections?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/298489.1604188643%40sss.pgh.pa.us",
"msg_date": "Sun, 01 Nov 2020 14:19:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Getting Gen_fmgrtab.pl to generate macros for all pg_proc entries"
}
] |
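The naming rule Tom Lane settles on in the thread above — derive the F_XXX macro from proname alone, and append proargtypes only when the proname is overloaded — can be sketched independently of Gen_fmgrtab.pl. The function and the sample entries below are hypothetical illustrations of that rule, not the actual Perl generator or real pg_proc data:

```python
# Sketch of the "append proargtypes only if proname is not unique" rule.
# Assumes entries are (proname, [argtype, ...]) tuples; this is illustrative
# only and does not reproduce Gen_fmgrtab.pl's actual output rules.
from collections import Counter

def fmgr_macro_names(entries):
    # Count how many pg_proc entries share each proname.
    name_counts = Counter(name for name, _ in entries)
    macros = []
    for name, argtypes in entries:
        if name_counts[name] > 1:
            # Overloaded SQL name: disambiguate with the argument types.
            parts = [name] + argtypes
        else:
            # Unique name: keep the short, familiar F_XXX form.
            parts = [name]
        macros.append("F_" + "_".join(p.upper() for p in parts))
    return macros
```

With this rule a unique entry keeps its short form (`F_OIDEQ`), while two hypothetical `length` overloads would come out as `F_LENGTH_TEXT` and `F_LENGTH_BPCHAR` — which matches why only a couple of existing source references needed to change.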
[
{
"msg_contents": "While fooling with Gen_fmgrtab.pl for a nearby patch [1], I noticed\nthat fmgrtab.c had a lot of entries pointing at aggregate_dummy,\nwhich seemed rather useless. So I experimented with removing them.\n\nIt turns out that nodeWindowAgg.c is carelessly expecting them to be\nthere, because it does fmgr_info_cxt() on the target window function\neven if it will never call it because it's a plain aggregate.\nBut that's pretty trivial to fix, just need to relocate that call.\n\nWith that, we don't actually need aggregate_dummy() to exist at\nall, because it's never referenced. Having \"aggregate_dummy\"\nas the prosrc value for an aggregate function is now just a\nrandom convention; any other string would do as well. (We could\nsave a few bytes in pg_proc by choosing a shorter string, but\nprobably it's better to stick to the existing convention.)\n\nAnyway, this saves about 3KB in fmgrtab.o, without any downside\nthat I can see. If someone accidentally called an aggregate as\na normal function, they'd now get a different error message,\nnamely \"internal function \"aggregate_dummy\" is not in internal lookup\ntable\" instead of \"aggregate function NNN called as normal function\".\nThat doesn't really seem like a problem.\n\nThe attached patch is a delta over the one in [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/472274.1604258384%40sss.pgh.pa.us",
"msg_date": "Sun, 01 Nov 2020 15:47:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Getting rid of aggregate_dummy()"
},
{
"msg_contents": "On 01/11/2020 22:47, Tom Lane wrote:\n> With that, we don't actually need aggregate_dummy() to exist at\n> all, because it's never referenced. Having \"aggregate_dummy\"\n> as the prosrc value for an aggregate function is now just a\n> random convention; any other string would do as well. (We could\n> save a few bytes in pg_proc by choosing a shorter string, but\n> probably it's better to stick to the existing convention.)\n\nNULL would seem like the natural value for that.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 2 Nov 2020 13:46:15 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Getting rid of aggregate_dummy()"
},
{
"msg_contents": "On Sun, 1 Nov 2020 at 15:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n\n> Anyway, this saves about 3KB in fmgrtab.o, without any downside\n> that I can see. If someone accidentally called an aggregate as\n> a normal function, they'd now get a different error message,\n> namely \"internal function \"aggregate_dummy\" is not in internal lookup\n> table\" instead of \"aggregate function NNN called as normal function\".\n> That doesn't really seem like a problem.\n>\n\nSpeaking as somebody who sometimes does really dumb things, I don’t like\nthis change in error message. The current message clearly identifies the\nproblem; the new message makes it look like there is a bug in Postgres.\n\nOn Sun, 1 Nov 2020 at 15:47, Tom Lane <tgl@sss.pgh.pa.us> wrote: \nAnyway, this saves about 3KB in fmgrtab.o, without any downside\nthat I can see. If someone accidentally called an aggregate as\na normal function, they'd now get a different error message,\nnamely \"internal function \"aggregate_dummy\" is not in internal lookup\ntable\" instead of \"aggregate function NNN called as normal function\".\nThat doesn't really seem like a problem.\n Speaking as somebody who sometimes does really dumb things, I don’t like this change in error message. The current message clearly identifies the problem; the new message makes it look like there is a bug in Postgres.",
"msg_date": "Mon, 2 Nov 2020 08:26:44 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting rid of aggregate_dummy()"
},
{
"msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> On Sun, 1 Nov 2020 at 15:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Anyway, this saves about 3KB in fmgrtab.o, without any downside\n>> that I can see. If someone accidentally called an aggregate as\n>> a normal function, they'd now get a different error message,\n>> namely \"internal function \"aggregate_dummy\" is not in internal lookup\n>> table\" instead of \"aggregate function NNN called as normal function\".\n>> That doesn't really seem like a problem.\n\n> Speaking as somebody who sometimes does really dumb things, I don’t like\n> this change in error message. The current message clearly identifies the\n> problem; the new message makes it look like there is a bug in Postgres.\n\nNeither message would be reachable without (erroneous) C hacking,\nso I don't quite buy that there's a problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Nov 2020 09:21:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Getting rid of aggregate_dummy()"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 01/11/2020 22:47, Tom Lane wrote:\n>> With that, we don't actually need aggregate_dummy() to exist at\n>> all, because it's never referenced. Having \"aggregate_dummy\"\n>> as the prosrc value for an aggregate function is now just a\n>> random convention; any other string would do as well. (We could\n>> save a few bytes in pg_proc by choosing a shorter string, but\n>> probably it's better to stick to the existing convention.)\n\n> NULL would seem like the natural value for that.\n\nI wouldn't be in favor of that unless we changed the prolang value\nas well. Which could certainly be considered, but it makes the\npatch rather more invasive, and I'm not sure it's worth it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Nov 2020 09:22:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Getting rid of aggregate_dummy()"
},
{
"msg_contents": "On Mon, 2 Nov 2020 at 09:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n>\n\n\n> > Speaking as somebody who sometimes does really dumb things, I don’t like\n> > this change in error message. The current message clearly identifies the\n> > problem; the new message makes it look like there is a bug in Postgres.\n>\n> Neither message would be reachable without (erroneous) C hacking,\n> so I don't quite buy that there's a problem.\n>\n\nOK, I must have misunderstood. I had the impression that we were talking\nabout just writing a query which used an aggregate function where a normal\nfunction was needed, but on closer consideration I see I had it wrong. For\nexample:\n\nodyssey=> select * from uw_term where count(*) = 1;\nERROR: aggregate functions are not allowed in WHERE\nLINE 1: select * from uw_term where count(*) = 1;\n ^\nodyssey=>\n\nBut this is a different error message, and thinking about it putting an\naggregate in the SELECT will end up using it as an aggregate (e.g. SELECT\ncount(*) FROM ...).\n\nI agree that C hackers need to know what they’re doing ;-)\n\nOn Mon, 2 Nov 2020 at 09:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:Isaac Morland <isaac.morland@gmail.com> writes:\n \n> Speaking as somebody who sometimes does really dumb things, I don’t like\n> this change in error message. The current message clearly identifies the\n> problem; the new message makes it look like there is a bug in Postgres.\n\nNeither message would be reachable without (erroneous) C hacking,\nso I don't quite buy that there's a problem.OK, I must have misunderstood. I had the impression that we were talking about just writing a query which used an aggregate function where a normal function was needed, but on closer consideration I see I had it wrong. 
For example:odyssey=> select * from uw_term where count(*) = 1;ERROR: aggregate functions are not allowed in WHERELINE 1: select * from uw_term where count(*) = 1; ^odyssey=> But this is a different error message, and thinking about it putting an aggregate in the SELECT will end up using it as an aggregate (e.g. SELECT count(*) FROM ...).I agree that C hackers need to know what they’re doing ;-)",
"msg_date": "Mon, 2 Nov 2020 10:39:04 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting rid of aggregate_dummy()"
},
{
"msg_contents": "I wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> On 01/11/2020 22:47, Tom Lane wrote:\n>>> With that, we don't actually need aggregate_dummy() to exist at\n>>> all, because it's never referenced. Having \"aggregate_dummy\"\n>>> as the prosrc value for an aggregate function is now just a\n>>> random convention; any other string would do as well. (We could\n>>> save a few bytes in pg_proc by choosing a shorter string, but\n>>> probably it's better to stick to the existing convention.)\n\n>> NULL would seem like the natural value for that.\n\n> I wouldn't be in favor of that unless we changed the prolang value\n> as well. Which could certainly be considered, but it makes the\n> patch rather more invasive, and I'm not sure it's worth it.\n\nLooking closer, I see that pg_proc.prosrc is marked NOT NULL,\nso this couldn't work anyway unless we wish to remove that marking.\nWhich doesn't seem particularly wise. I pushed this without any\nchange in the catalog contents.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Nov 2020 11:32:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Getting rid of aggregate_dummy()"
}
] |
[
{
"msg_contents": "I posted the idea of allowing negative indexes in split_part to pgsql-general\nlast week, and it seemed there was some interest:\n\nhttp://postgr.es/m/CAPWqQZR%2B-5pAZNSSrnmYczRaX-huemc%3DoO8URvDZvUA-M%3DMOBA%40mail.gmail.com\n\nAttached is a patch, based on master, that implements the approach as described\nin that discussion.\n\nThe motivation is that the existing idioms for splitting a string and selecting\nthe nth-to-last element are rather complicated and/or inefficient:\n\n 1. (string_to_array('foo bar baz', ' '))[cardinality(string_to_array('foo bar baz', ' ')) - 1]\n 2. reverse(split_part(reverse('foo bar baz'), ' ', 1))\n 3. (regexp_match('foo baz bar', '\\S*$'))[1]\n\nWith the patch, split_part(haystack, needle, -1) selects the last field of the\nstring, split_part(haystack, needle, -2) selects the second-to-last field, and\nso on. Per Tom Lane, there is precedent for this design, where negative indices\nmeaning \"count from the end\", namely the left and right string functions.\n\nThe patch includes updates to the docs and regression tests. If the feature is\ndeemed desirable, I believe the patch is \"commit quality\" (though, fair warning,\nthis is my first contribution to Postgres, so I might have the wrong notion\nof what a committable patch looks like).\n\nNote that the implementation is deliberately a bit inefficient to keep things\nsimple. When presented with a negative index, the implementation does an extra\npass over the string to count the total number of fields, in order to convert\nthe negative index to a positive index. Then it proceeds as it normally would.\n\nOne can imagine adding support for backwards B-M-H, but I'm not sure that could\nbe made to work with all multibyte encodings. We could at least avoid the extra\npass over the string by allocating a circular buffer of size |n| when n is\nnegative, but that wasn't clearly worthwhile. 
(I did implement the optimization\nfor the special case of -1, since its implementation was trivial.)\n\nCheers,\nNikhil",
"msg_date": "Sun, 1 Nov 2020 21:39:11 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Support negative indexes in split_part"
},
{
"msg_contents": "Nikhil Benesch <nikhil.benesch@gmail.com> writes:\n> I posted the idea of allowing negative indexes in split_part to pgsql-general\n> last week, and it seemed there was some interest:\n> http://postgr.es/m/CAPWqQZR%2B-5pAZNSSrnmYczRaX-huemc%3DoO8URvDZvUA-M%3DMOBA%40mail.gmail.com\n> Attached is a patch, based on master, that implements the approach as described\n> in that discussion.\n\nPlease add an entry to the upcoming commitfest, to make sure we don't\nlose track of this:\n\nhttps://commitfest.postgresql.org/30/\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Nov 2020 21:56:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support negative indexes in split_part"
},
{
"msg_contents": "On 11/1/20 9:56 PM, Tom Lane wrote:\n> Please add an entry to the upcoming commitfest, to make sure we don't\n> lose track of this:\n> \n> https://commitfest.postgresql.org/30/\n\nDone: https://commitfest.postgresql.org/30/2816/\n\n\n",
"msg_date": "Sun, 1 Nov 2020 22:48:26 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support negative indexes in split_part"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nPatch looks good to me. Seems like a useful feature, and I agree that the two-pass implementation makes the change very easy to review.\r\n\r\nQuick note on test coverage: gcov marks the \"needle not found\" branch (the one marked `/* special case if fldsep not found at all */`) as being completely uncovered. I don't think that needs to gate this patch; it looks like it was uncovered before this feature was added.\r\n\r\nDoc builds are currently failing due to what appears to be an xmllint failure:\r\n\r\n /usr/bin/xmllint --path . --noout --valid postgres.sgml\r\n error : Unknown IO error\r\n postgres.sgml:21: warning: failed to load external entity \"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"\r\n\r\nbut that doesn't have anything to do with this patch. Marking Ready for Committer. (I'm a little new to this myself, so someone please let me know if I'm jumping the gun.)\r\n\r\nThanks!\r\n--Jacob\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Tue, 10 Nov 2020 19:38:16 +0000",
"msg_from": "Jacob Champion <pchampion@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support negative indexes in split_part"
},
{
"msg_contents": "Jacob Champion <pchampion@pivotal.io> writes:\n> Patch looks good to me. Seems like a useful feature, and I agree that the two-pass implementation makes the change very easy to review.\n\nLGTM too. I made a couple of cosmetic improvements and pushed it.\n\n> Quick note on test coverage: gcov marks the \"needle not found\" branch (the one marked `/* special case if fldsep not found at all */`) as being completely uncovered. I don't think that needs to gate this patch; it looks like it was uncovered before this feature was added.\n\nWe seem to be trying for full test coverage of this function now,\nso I added a test case for that branch too.\n\n> Doc builds are currently failing due to what appears to be an xmllint failure:\n\nUnrelated, but see\n\nhttps://www.postgresql.org/message-id/flat/E2EE6B76-2D96-408A-B961-CAE47D1A86F0%40yesql.se\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Nov 2020 13:53:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support negative indexes in split_part"
}
] |
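The semantics committed in this thread — a negative field position counts from the end of the string — can be modeled compactly. This Python sketch mirrors the patch's two-pass idea (find the fields, then convert a negative index into a positive one); it illustrates the behavior discussed above and is not the C implementation:

```python
# Illustrative model of split_part with negative indexes.
# Python's str.split does the field-splitting pass; the C patch instead
# makes an extra pass over the string to count fields before indexing.
def split_part(string, delimiter, n):
    if n == 0:
        # PostgreSQL rejects a zero field position.
        raise ValueError("field position must not be zero")
    fields = string.split(delimiter)
    if n < 0:
        # Convert "count from the end" into a 1-based positive index:
        # -1 becomes the last field, -2 the second-to-last, and so on.
        n = len(fields) + n + 1
    # Out-of-range positions yield an empty string, as in PostgreSQL.
    return fields[n - 1] if 1 <= n <= len(fields) else ""
```

So `split_part('foo bar baz', ' ', -1)` selects `'baz'` without the `reverse()` or `regexp_match()` idioms listed in the original post.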
[
{
"msg_contents": "This is a new thread about possible refactoring and commit a6642b3ae\n(\"Add support for partitioned tables and indexes in REINDEX\")\n\nCurrently, ReindexPartitions() calls ReindexMultipleInternal() which loops\naround ReindexRelationConcurrently(), similar to what's done with\nReindexMultipleTables()\n\nContrast that with ReindexTable(), which calls ReindexRelationConcurrently()\nwith the table's OID, which then handles all indexes at once:\n\n|postgres=# REINDEX TABLE CONCURRENTLY onek;\n|DEBUG: 00000: building index \"onek_unique1_ccnew\" on table \"onek\" serially\n|LOCATION: index_build, index.c:2853\n|DEBUG: 00000: index \"onek_unique1_ccnew\" can safely use deduplication\n|LOCATION: _bt_allequalimage, nbtutils.c:2742\n|DEBUG: 00000: building index \"onek_unique2_ccnew\" on table \"onek\" serially\n|LOCATION: index_build, index.c:2853\n|DEBUG: 00000: index \"onek_unique2_ccnew\" can safely use deduplication\n|LOCATION: _bt_allequalimage, nbtutils.c:2742\n|DEBUG: 00000: building index \"onek_hundred_ccnew\" on table \"onek\" serially\n|LOCATION: index_build, index.c:2853\n|DEBUG: 00000: index \"onek_hundred_ccnew\" can safely use deduplication\n|LOCATION: _bt_allequalimage, nbtutils.c:2742\n|DEBUG: 00000: building index \"onek_stringu1_ccnew\" on table \"onek\" serially\n|LOCATION: index_build, index.c:2853\n|DEBUG: 00000: index \"onek_stringu1_ccnew\" can safely use deduplication\n|LOCATION: _bt_allequalimage, nbtutils.c:2742\n|DEBUG: 00000: validate_index found 1000 heap tuples, 1000 index tuples; inserted 0 missing tuples\n|LOCATION: validate_index, index.c:3281\n|DEBUG: 00000: validate_index found 1000 heap tuples, 1000 index tuples; inserted 0 missing tuples\n|LOCATION: validate_index, index.c:3281\n|DEBUG: 00000: validate_index found 1000 heap tuples, 1000 index tuples; inserted 0 missing tuples\n|LOCATION: validate_index, index.c:3281\n|DEBUG: 00000: validate_index found 1000 heap tuples, 1000 index tuples; inserted 0 missing 
tuples\n|LOCATION: validate_index, index.c:3281\n|REINDEX\n\nShould we refactor ReindexRelationConcurrently() into two functions? One to\nbuild a list of indexes on a relation, and one to concurrently reindex a\nlist of indexes. Then, ReindexPartitions() would make a list of leaf indexes\nof a partitioned index, or leaf indexes of partitioned indexes of a partitioned\ntable, and then reindex those indexes all at once. For CIC, we could call\nthat from DefineIndex().\n\nThe reason is that concurrent Reindex must wait for longrunning transactions,\nand if we call it in a loop, then we wait for longrunning transactions N times.\nI can imagine scenarios where it's easy for a DBA to schedule maintenance to\ndo reindex concurrently and restart processes to allow the reindex to proceed.\nBut it might be infeasible to restart processes every 5min for 3 hours to allow\nreindex to proceed on each partition.\n\nReindexMultipleTables avoids doing that to avoid deadlocks, which makes great\nsense for REINDEX SCHEMA/DATABASE/SYSTEM. 
But I wonder if that reasoning\ndoesn't apply to partitioned tables.\n\nWe currently have this:\n\nReindexIndex()\n\t=> ReindexPartitions\n\t=> ReindexRelationConcurrently\n\t=> reindex_index\nReindexTable()\n\t=> ReindexPartitions\n\t=> ReindexRelationConcurrently\n\t=> reindex_relation\nReindexPartitions()\n\t=> ReindexMultipleInternal()\n\t\t=> ReindexRelationConcurrently()\n\t\t=> reindex_relation()\n\t\t=> reindex_index()\n\nAnd I'm proposing to consider this:\n\nReindexIndex()\n\t=> ReindexPartitions\n\t=> ReindexIndexesConcurrently\n\t=> reindex_index\nReindexTable()\n\t=> ReindexPartitions\n\t=> ReindexRelationConcurrently - this would be just a wrapper to collect the list of index Oids\n\t=> reindex_relation\nReindexPartitions - this exists mainly to make a list of all leaf indexes on all childs of a partitioned table\n\t=> ReindexIndexesConcurrently - this processes all indexes at once\n\t=> ReindexMultipleInternal - this loops around everything\n\t\t=> ReindexRelationConcurrently()\n\t\t=> reindex_index()\n\t\t=> reindex_relation()\nReindexRelationConcurrently\n\t=> ReindexIndexesConcurrently - this is the worker function factored out of ReindexRelationConcurrently\n\nI'm not sure if I'm missing any opportunities to simplify...\n\nSo then it's processed similar to REINDEX TABLE (rather than REINDEX DATABASE).\n\n|postgres=# REINDEX TABLE CONCURRENTLY hash_parted;\n|DEBUG: building index \"hpart0_a_idx_ccnew\" on table \"hpart0\" serially\n|DEBUG: index \"hpart0_a_idx_ccnew\" can safely use deduplication\n|DEBUG: building index \"hpart1_a_idx_ccnew\" on table \"hpart1\" serially\n|DEBUG: index \"hpart1_a_idx_ccnew\" can safely use deduplication\n|DEBUG: building index \"hpart2_a_idx_ccnew\" on table \"hpart2\" serially\n|DEBUG: index \"hpart2_a_idx_ccnew\" can safely use deduplication\n|DEBUG: building index \"hpart3_a_idx_ccnew\" on table \"hpart3\" serially\n|DEBUG: index \"hpart3_a_idx_ccnew\" can safely use deduplication\n|DEBUG: validate_index found 
2489 heap tuples, 2489 index tuples; inserted 0 missing tuples\n|DEBUG: validate_index found 2527 heap tuples, 2527 index tuples; inserted 0 missing tuples\n|DEBUG: validate_index found 2530 heap tuples, 2530 index tuples; inserted 0 missing tuples\n|DEBUG: validate_index found 2453 heap tuples, 2453 index tuples; inserted 0 missing tuples\n|REINDEX\n\nAnd, if there are multiple indexes:\n\n|postgres=# REINDEX TABLE CONCURRENTLY hash_parted;\n|DEBUG: building index \"hpart0_a_idx_ccnew\" on table \"hpart0\" serially\n|DEBUG: index \"hpart0_a_idx_ccnew\" can safely use deduplication\n|DEBUG: building index \"hpart0_a_idx1_ccnew\" on table \"hpart0\" serially\n|DEBUG: index \"hpart0_a_idx1_ccnew\" can safely use deduplication\n|DEBUG: building index \"hpart1_a_idx_ccnew\" on table \"hpart1\" serially\n|DEBUG: index \"hpart1_a_idx_ccnew\" can safely use deduplication\n|DEBUG: building index \"hpart1_a_idx1_ccnew\" on table \"hpart1\" serially\n|DEBUG: index \"hpart1_a_idx1_ccnew\" can safely use deduplication\n|DEBUG: building index \"hpart2_a_idx_ccnew\" on table \"hpart2\" serially\n|DEBUG: index \"hpart2_a_idx_ccnew\" can safely use deduplication\n|DEBUG: building index \"hpart2_a_idx1_ccnew\" on table \"hpart2\" serially\n|DEBUG: index \"hpart2_a_idx1_ccnew\" can safely use deduplication\n|DEBUG: building index \"hpart3_a_idx_ccnew\" on table \"hpart3\" serially\n|DEBUG: index \"hpart3_a_idx_ccnew\" can safely use deduplication\n|DEBUG: building index \"hpart3_a_idx1_ccnew\" on table \"hpart3\" serially\n|DEBUG: index \"hpart3_a_idx1_ccnew\" can safely use deduplication\n|DEBUG: validate_index found 2489 heap tuples, 2489 index tuples; inserted 0 missing tuples\n|DEBUG: validate_index found 2489 heap tuples, 2489 index tuples; inserted 0 missing tuples\n|DEBUG: validate_index found 2527 heap tuples, 2527 index tuples; inserted 0 missing tuples\n|DEBUG: validate_index found 2527 heap tuples, 2527 index tuples; inserted 0 missing tuples\n|DEBUG: validate_index found 
2530 heap tuples, 2530 index tuples; inserted 0 missing tuples\n|DEBUG: validate_index found 2530 heap tuples, 2530 index tuples; inserted 0 missing tuples\n|DEBUG: validate_index found 2453 heap tuples, 2453 index tuples; inserted 0 missing tuples\n|DEBUG: validate_index found 2453 heap tuples, 2453 index tuples; inserted 0 missing tuples\n|REINDEX\n\nI think the usual scenario is to have 100-1000 partitions, and 1-10 indexes per\npartition. It seems to me that at least all partitions of a given index should\nbe processed simultaneously.\n\nAlso, it occurred to me that CIC for regular tables could be simplified to do the\nsame thing that I'm doing for CIC on a partitioned table: create an INVALID\ncatalog entry, and then reindex it concurrently. This seems to work, with the\nonly difference I see being that REINDEX leaves behind ccnew indexes.\n\nAttached is what I'm thinking of.",
"msg_date": "Mon, 2 Nov 2020 01:00:06 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "reindex partitioned indexes: refactor ReindexRelationConcurrently ?"
},
{
"msg_contents": "On Mon, Nov 02, 2020 at 01:00:06AM -0600, Justin Pryzby wrote:\n> The reason is that concurrent Reindex must wait for longrunning transactions,\n> and if we call it in a loop, then we wait for longrunning transactions N times.\n> I can imagine scenarios where it's easy for an DBA to schedule maintenance to\n> do reindex concurrently and restart processes to allow the reindex to proceed.\n> But it might be infeasible to restart processes every 5min for 3 hours to allow\n> reindex to proceed on each partition.\n> \n> ReindexMultipleTables avoids doing that to avoid deadlocks, which makes great\n> sense for REINDEX SCHEMA/DATABASE/SYSTEM. But I wonder if that reasoning\n> doesn't apply to partitioned tables.\n> \n> I think the usual scenario is to have 100-1000 partitions, and 1-10 indexes per\n> partition. It seems to me that at least all partitions of a given index should\n> be processed simultaneously.\n\nReindexPartitions(), as currently shaped, has the advantage to\nminimize the number of ccnew and ccold indexes to handle in parallel.\nWith your suggestion, there could be potentially hundreds of\nbuilt-still-invalid indexes or invalid-but-not-dropped indexes\ndepending on the phase where the whole REINDEX operation fails, if it\nfails of course. So I would say no to your proposal and I would\nprefer keeping the approach where we minimize the remnants of a failed\noperation to a bare minimum (aka one index for REINDEX INDEX, and one\nset of indexes on a single relation for REINDEX TABLE).\n--\nMichael",
"msg_date": "Mon, 2 Nov 2020 17:18:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: reindex partitioned indexes: refactor\n ReindexRelationConcurrently ?"
}
] |
[
{
"msg_contents": "Hi,\n\nAt the moment we do very basic parsing of makefiles to build the\nvisual studio project file in order to build our contrib modules on\nMSVC. This parsing is quite basic and still requires a number of\nspecial cases to get enough information into the project file in order\nfor the build to succeed. It might be nice if we could reduce some of\nthose special cases so that:\n\na) We reduce the amount of work specific to windows when we add new\ncontrib modules, and;\nb) We can work towards a better way for people to build their own\nextensions on windows.\n\nI admit to not being much of an expert in either perl or make, but I\ncame up with the attached which does allow a good number of the\nspecial cases to be removed.\n\nI'm keen to get some feedback on this idea.\n\nPatch attached.\n\nDavid",
"msg_date": "Mon, 2 Nov 2020 20:34:28 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-02 20:34:28 +1300, David Rowley wrote:\n> It might be nice if we could reduce some of those special cases so\n> that:\n> \n> a) We reduce the amount of work specific to windows when we add new\n> contrib modules, and;\n> b) We can work towards a better way for people to build their own\n> extensions on windows.\n\nA worthy goal.\n\n\n\n> diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm\n> index 90594bd41b..491a465e2f 100644\n> --- a/src/tools/msvc/Mkvcbuild.pm\n> +++ b/src/tools/msvc/Mkvcbuild.pm\n> @@ -32,16 +32,13 @@ my $libpq;\n> my @unlink_on_exit;\n> \n> # Set of variables for modules in contrib/ and src/test/modules/\n> -my $contrib_defines = { 'refint' => 'REFINT_VERBOSE' };\n> -my @contrib_uselibpq = ('dblink', 'oid2name', 'postgres_fdw', 'vacuumlo');\n> -my @contrib_uselibpgport = ('oid2name', 'pg_standby', 'vacuumlo');\n> -my @contrib_uselibpgcommon = ('oid2name', 'pg_standby', 'vacuumlo');\n> +my $contrib_defines = undef;\n> +my @contrib_uselibpq = undef;\n> +my @contrib_uselibpgport = ('pg_standby');\n> +my @contrib_uselibpgcommon = ('pg_standby');\n> my $contrib_extralibs = undef;\n> my $contrib_extraincludes = { 'dblink' => ['src/backend'] };\n> -my $contrib_extrasource = {\n> -\t'cube' => [ 'contrib/cube/cubescan.l', 'contrib/cube/cubeparse.y' ],\n> -\t'seg' => [ 'contrib/seg/segscan.l', 'contrib/seg/segparse.y' ],\n> -};\n> +my $contrib_extrasource = undef;\n\nHm - Is that all the special case stuff we get rid of?\n\nWhat's with the now unef'd arrays/hashes? First, wouldn't an empty array be\nmore appropriate? 
Second, can we just get rid of them?\n\nAnd why is the special stuff for pg_standby still needed?\n\n> my @contrib_excludes = (\n> \t'bool_plperl', 'commit_ts',\n> \t'hstore_plperl', 'hstore_plpython',\n> @@ -163,7 +160,7 @@ sub mkvcbuild\n> \t$postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');\n> \t$postgres->AddIncludeDir('src/backend');\n> \t$postgres->AddDir('src/backend/port/win32');\n> -\t$postgres->AddFile('src/backend/utils/fmgrtab.c');\n> +\t$postgres->AddFile('src/backend/utils/fmgrtab.c', 1);\n> \t$postgres->ReplaceFile('src/backend/port/pg_sema.c',\n> \t\t'src/backend/port/win32_sema.c');\n> \t$postgres->ReplaceFile('src/backend/port/pg_shmem.c',\n> @@ -316,8 +313,8 @@ sub mkvcbuild\n\nWhy do so many places need this new parameter? Looks like all explicit\ncalls use it? Can't we just use it by default, using a separate function\nfor the internal cases? Would make this a lot more readable...\n\n\n> \tmy $isolation_tester =\n> \t $solution->AddProject('isolationtester', 'exe', 'misc');\n> -\t$isolation_tester->AddFile('src/test/isolation/isolationtester.c');\n> -\t$isolation_tester->AddFile('src/test/isolation/specparse.y');\n> -\t$isolation_tester->AddFile('src/test/isolation/specscanner.l');\n> -\t$isolation_tester->AddFile('src/test/isolation/specparse.c');\n> +\t$isolation_tester->AddFile('src/test/isolation/isolationtester.c', 1);\n> +\t$isolation_tester->AddFile('src/test/isolation/specparse.y', 1);\n> +\t$isolation_tester->AddFile('src/test/isolation/specscanner.l', 1);\n> +\t$isolation_tester->AddFile('src/test/isolation/specparse.c', 1);\n> \t$isolation_tester->AddIncludeDir('src/test/isolation');\n> \t$isolation_tester->AddIncludeDir('src/port');\n> \t$isolation_tester->AddIncludeDir('src/test/regress');\n> @@ -342,8 +339,8 @@ sub mkvcbuild\n\nWhy aren't these dealth with using the .c->.l/.y logic you added?\n\n\n> +\t# Process custom compiler flags\n> +\tif ($mf =~ /^PG_CPPFLAGS\\s*=\\s*(.*)$/mg)\n\nProbably worth 
mentioning in pgxs.mk or such.\n\n\n> +\t{\n> +\t\tforeach my $flag (split /\\s+/, $1)\n> +\t\t{\n> +\t\t\tif ($flag =~ /^-D(.*)$/)\n> +\t\t\t{\n> +\t\t\t\tforeach my $proj (@projects)\n> +\t\t\t\t{\n> +\t\t\t\t\t$proj->AddDefine($1);\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t\telsif ($flag =~ /^-I(.*)$/)\n> +\t\t\t{\n> +\t\t\t\tforeach my $proj (@projects)\n> +\t\t\t\t{\n> +\t\t\t\t\tif ($1 eq '$(libpq_srcdir)')\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\t$proj->AddIncludeDir('src\\interfaces\\libpq');\n> +\t\t\t\t\t\t$proj->AddReference($libpq);\n> +\t\t\t\t\t}\n\nWhy just libpq?\n\n\n> +# Handle makefile rules for when file to be added to the project\n> +# does not exist. Returns 1 when the original file add should be\n> +# skipped.\n> +sub AdditionalFileRules\n> +{\n> +\tmy $self = shift;\n> +\tmy $fname = shift;\n> +\tmy ($ext) = $fname =~ /(\\.[^.]+)$/;\n> +\n> +\t# For missing .c files, check if a .l file of the same name\n> +\t# exists and add that too.\n> +\tif ($ext eq \".c\")\n> +\t{\n> +\t\tmy $filenoext = $fname;\n> +\t\t$filenoext =~ s{\\.[^.]+$}{};\n> +\t\tif (-e \"$filenoext.l\")\n> +\t\t{\n> +\t\t\tAddFile($self, \"$filenoext.l\", 0);\n> +\t\t\treturn 1;\n> +\t\t}\n> +\t\tif (-e \"$filenoext.y\")\n> +\t\t{\n> +\t\t\tAddFile($self, \"$filenoext.y\", 0);\n> +\t\t\treturn 0;\n> +\t\t}\n> +\t}\n\nAren't there related rules for .h?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 Nov 2020 12:49:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "Thank you for looking at this.\n\nOn Tue, 3 Nov 2020 at 09:49, Andres Freund <andres@anarazel.de> wrote:\n> > diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm\n> > index 90594bd41b..491a465e2f 100644\n> > --- a/src/tools/msvc/Mkvcbuild.pm\n> > +++ b/src/tools/msvc/Mkvcbuild.pm\n> > @@ -32,16 +32,13 @@ my $libpq;\n> > my @unlink_on_exit;\n> >\n> > # Set of variables for modules in contrib/ and src/test/modules/\n> > -my $contrib_defines = { 'refint' => 'REFINT_VERBOSE' };\n> > -my @contrib_uselibpq = ('dblink', 'oid2name', 'postgres_fdw', 'vacuumlo');\n> > -my @contrib_uselibpgport = ('oid2name', 'pg_standby', 'vacuumlo');\n> > -my @contrib_uselibpgcommon = ('oid2name', 'pg_standby', 'vacuumlo');\n> > +my $contrib_defines = undef;\n> > +my @contrib_uselibpq = undef;\n> > +my @contrib_uselibpgport = ('pg_standby');\n> > +my @contrib_uselibpgcommon = ('pg_standby');\n> > my $contrib_extralibs = undef;\n> > my $contrib_extraincludes = { 'dblink' => ['src/backend'] };\n> > -my $contrib_extrasource = {\n> > - 'cube' => [ 'contrib/cube/cubescan.l', 'contrib/cube/cubeparse.y' ],\n> > - 'seg' => [ 'contrib/seg/segscan.l', 'contrib/seg/segparse.y' ],\n> > -};\n> > +my $contrib_extrasource = undef;\n>\n> Hm - Is that all the special case stuff we get rid of?\n\nWhat else did you have in mind?\n\n> What's with the now unef'd arrays/hashes? First, wouldn't an empty array be\n> more appropriate? Second, can we just get rid of them?\n\nYes, those should be empty hashtables/arrays. I've changed that.\n\nWe could get rid of the variables too. I've just left them in there\nfor now as I wasn't sure if it might be a good idea to keep them for\nif we really need to brute force something in the future. 
I found\nparsing makefiles quite tedious, so it didn't seem unrealistic to me\nthat someone might struggle in the future to make something work.\n\n> And why is the special stuff for pg_standby still needed?\n\nI'm not much of an expert, but I didn't see anything in the makefile\nfor pg_standby that indicates we should link libpgport or libpgcommon.\nIt would be good if someone could explain how that works.\n\n> > my @contrib_excludes = (\n> > 'bool_plperl', 'commit_ts',\n> > 'hstore_plperl', 'hstore_plpython',\n> > @@ -163,7 +160,7 @@ sub mkvcbuild\n> > $postgres = $solution->AddProject('postgres', 'exe', '', 'src/backend');\n> > $postgres->AddIncludeDir('src/backend');\n> > $postgres->AddDir('src/backend/port/win32');\n> > - $postgres->AddFile('src/backend/utils/fmgrtab.c');\n> > + $postgres->AddFile('src/backend/utils/fmgrtab.c', 1);\n> > $postgres->ReplaceFile('src/backend/port/pg_sema.c',\n> > 'src/backend/port/win32_sema.c');\n> > $postgres->ReplaceFile('src/backend/port/pg_shmem.c',\n> > @@ -316,8 +313,8 @@ sub mkvcbuild\n>\n> Why do so many places need this new parameter? Looks like all explicit\n> calls use it? Can't we just use it by default, using a separate function\n> for the internal cases? Would make this a lot more readable...\n\nThat makes sense. 
I've updated the patch to have AddFile() add any\nadditional files always then I've also added a new function named\nAddFileConditional which does what AddFile(..., 0) did.\n\n> > my $isolation_tester =\n> > $solution->AddProject('isolationtester', 'exe', 'misc');\n> > - $isolation_tester->AddFile('src/test/isolation/isolationtester.c');\n> > - $isolation_tester->AddFile('src/test/isolation/specparse.y');\n> > - $isolation_tester->AddFile('src/test/isolation/specscanner.l');\n> > - $isolation_tester->AddFile('src/test/isolation/specparse.c');\n> > + $isolation_tester->AddFile('src/test/isolation/isolationtester.c', 1);\n> > + $isolation_tester->AddFile('src/test/isolation/specparse.y', 1);\n> > + $isolation_tester->AddFile('src/test/isolation/specscanner.l', 1);\n> > + $isolation_tester->AddFile('src/test/isolation/specparse.c', 1);\n> > $isolation_tester->AddIncludeDir('src/test/isolation');\n> > $isolation_tester->AddIncludeDir('src/port');\n> > $isolation_tester->AddIncludeDir('src/test/regress');\n> > @@ -342,8 +339,8 @@ sub mkvcbuild\n>\n> Why aren't these dealth with using the .c->.l/.y logic you added?\n\nYeah, some of those could be removed. 
I mostly only paid attention to\ncontrib though.\n\n> > + # Process custom compiler flags\n> > + if ($mf =~ /^PG_CPPFLAGS\\s*=\\s*(.*)$/mg)\n>\n> Probably worth mentioning in pgxs.mk or such.\n\nI'm not quite sure I understand what you mean here.\n\n> > + {\n> > + foreach my $flag (split /\\s+/, $1)\n> > + {\n> > + if ($flag =~ /^-D(.*)$/)\n> > + {\n> > + foreach my $proj (@projects)\n> > + {\n> > + $proj->AddDefine($1);\n> > + }\n> > + }\n> > + elsif ($flag =~ /^-I(.*)$/)\n> > + {\n> > + foreach my $proj (@projects)\n> > + {\n> > + if ($1 eq '$(libpq_srcdir)')\n> > + {\n> > + $proj->AddIncludeDir('src\\interfaces\\libpq');\n> > + $proj->AddReference($libpq);\n> > + }\n>\n> Why just libpq?\n\nI've only gone as far as making the existing contrib modules build.\nLikely there's more to be done there.\n\n> > +# Handle makefile rules for when file to be added to the project\n> > +# does not exist. Returns 1 when the original file add should be\n> > +# skipped.\n> > +sub AdditionalFileRules\n> > +{\n> > + my $self = shift;\n> > + my $fname = shift;\n> > + my ($ext) = $fname =~ /(\\.[^.]+)$/;\n> > +\n> > + # For missing .c files, check if a .l file of the same name\n> > + # exists and add that too.\n> > + if ($ext eq \".c\")\n> > + {\n> > + my $filenoext = $fname;\n> > + $filenoext =~ s{\\.[^.]+$}{};\n> > + if (-e \"$filenoext.l\")\n> > + {\n> > + AddFile($self, \"$filenoext.l\", 0);\n> > + return 1;\n> > + }\n> > + if (-e \"$filenoext.y\")\n> > + {\n> > + AddFile($self, \"$filenoext.y\", 0);\n> > + return 0;\n> > + }\n> > + }\n>\n> Aren't there related rules for .h?\n\nI've only gone as far as making the existing contrib modules build.\nLikely there's more to be done there.\n\nDavid",
"msg_date": "Fri, 6 Nov 2020 10:51:48 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On 2020-Nov-06, David Rowley wrote:\n\n> +# Handle makefile rules for when file to be added to the project\n> +# does not exist. Returns 1 when the original file add should be\n> +# skipped.\n> +sub FindAndAddAdditionalFiles\n> +{\n> +\tmy $self = shift;\n> +\tmy $fname = shift;\n> +\tmy ($ext) = $fname =~ /(\\.[^.]+)$/;\n> +\n> +\t# For .c files, check if a .l file of the same name exists and add that\n> +\t# too.\n> +\tif ($ext eq \".c\")\n> +\t{\n> +\t\tmy $filenoext = $fname;\n> +\t\t$filenoext =~ s{\\.[^.]+$}{};\n\nI think you can make this simpler by capturing both the basename and the\nextension in one go. For example,\n\n $fname =~ /(.*)(\\.[^.]+)$/;\n $filenoext = $1;\n $ext = $2;\n\nso you avoid the second =~ statement.\n\n> +\t\tif (-e \"$filenoext.l\")\n> +\t\t{\n> +\t\t\tAddFileConditional($self, \"$filenoext.l\");\n> +\t\t\treturn 1;\n> +\t\t}\n> +\t\tif (-e \"$filenoext.y\")\n> +\t\t{\n> +\t\t\tAddFileConditional($self, \"$filenoext.y\");\n\nMaybe DRY like\n\nfor my $ext (\".l\", \".y\") {\n my $file = $filenoext . $ext;\n AddFileConditional($self, $file) if -f $file;\n return 1;\n}\n\nNote: comment says \"check if a .l file\" and then checks both .l and .y.\nProbably want to update the comment ... \n\n\n",
"msg_date": "Mon, 9 Nov 2020 11:06:58 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Tue, 10 Nov 2020 at 03:07, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Nov-06, David Rowley wrote:\n>\n> > +# Handle makefile rules for when file to be added to the project\n> > +# does not exist. Returns 1 when the original file add should be\n> > +# skipped.\n> > +sub FindAndAddAdditionalFiles\n> > +{\n> > + my $self = shift;\n> > + my $fname = shift;\n> > + my ($ext) = $fname =~ /(\\.[^.]+)$/;\n> > +\n> > + # For .c files, check if a .l file of the same name exists and add that\n> > + # too.\n> > + if ($ext eq \".c\")\n> > + {\n> > + my $filenoext = $fname;\n> > + $filenoext =~ s{\\.[^.]+$}{};\n>\n> I think you can make this simpler by capturing both the basename and the\n> extension in one go. For example,\n>\n> $fname =~ /(.*)(\\.[^.]+)$/;\n> $filenoext = $1;\n> $ext = $2;\n>\n> so you avoid the second =~ statement.\n\nThanks. That's neater.\n\n> > + if (-e \"$filenoext.l\")\n> > + {\n> > + AddFileConditional($self, \"$filenoext.l\");\n> > + return 1;\n> > + }\n> > + if (-e \"$filenoext.y\")\n> > + {\n> > + AddFileConditional($self, \"$filenoext.y\");\n>\n> Maybe DRY like\n>\n> for my $ext (\".l\", \".y\") {\n> my $file = $filenoext . $ext;\n> AddFileConditional($self, $file) if -f $file;\n> return 1;\n> }\n\nI did adapt that part of the code, but not exactly to what's above.\nThe return there would cause us to return from the function after the\nfirst iteration.\n\n> Note: comment says \"check if a .l file\" and then checks both .l and .y.\n> Probably want to update the comment ...\n\nUpdated.\n\nI've attached the v3 patch.\n\nI'm still working through some small differences in some of the\n.vcxproj files. I've been comparing these by copying *.vcxproj out to\nanother directory with patched and unpatched then diffing the\ndirectory. See attached txt file with those diffs. Here's a summary of\nsome of them:\n\n1. There are a few places that libpq gets linked where it previously did not.\n2. 
REFINT_VERBOSE gets defined in a few more places than it did\npreviously. This makes it closer to what happens on Linux anyway, if\nyou look at the Make output from contrib/spi/Makefile you'll see\n-DREFINT_VERBOSE in there for autoinc\n3. LOWER_NODE gets defined in ltree now where it wasn't before. It's\ndefined on Linux. Unsure why it wasn't before on Windows.\n\nDavid",
"msg_date": "Wed, 11 Nov 2020 11:01:57 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 11:01:57AM +1300, David Rowley wrote:\n> I'm still working through some small differences in some of the\n> .vcxproj files. I've been comparing these by copying *.vcxproj out to\n> another directory with patched and unpatched then diffing the\n> directory. See attached txt file with those diffs. Here's a summary of\n> some of them:\n\nThanks. It would be good to not have those diffs if not necessary.\n\n> 1. There are a few places that libpq gets linked where it previously did not.\n\nIt seems to me that your patch is doing the right thing for adminpack\nand that its Makefile has no need to include a reference to libpq\nsource path, no?\n\nFor dblink and postgres_fdw, the duplication comes from PG_CPPFLAGS.\nIt does not matter much in practice, but it would be nice to not have\nunnecessary data in the project files. One thing that could be done\nis to make Project.pm more aware of the uniqueness of the elements\nincluded. But, do we really need -I$(libpq_srcdir) there anyway?\nFrom what I can see, we have all the paths in -I we'd actually need\nwith or without USE_PGXS.\n \n> 2. REFINT_VERBOSE gets defined in a few more places than it did\n> previously. This makes it closer to what happens on Linux anyway, if\n> you look at the Make output from contrib/spi/Makefile you'll see\n> -DREFINT_VERBOSE in there for autoinc.\n\nIndeed.\n\n> 3. LOWER_NODE gets defined in ltree now where it wasn't before. It's\n> defined on Linux. Unsure why it wasn't before on Windows.\n\nYour patch is grabbing the value of PG_CPPFLAGS from ltree's\nMakefile, which is fine. We may be able to remove this flag and rely\non pg_tolower() instead in the long run? I am not sure about\nFLG_CANLOOKSIGN() though.\n--\nMichael",
"msg_date": "Wed, 11 Nov 2020 09:44:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, 11 Nov 2020 at 13:44, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Nov 11, 2020 at 11:01:57AM +1300, David Rowley wrote:\n> > I'm still working through some small differences in some of the\n> > .vcxproj files. I've been comparing these by copying *.vcxproj out to\n> > another directory with patched and unpatched then diffing the\n> > directory. See attached txt file with those diffs. Here's a summary of\n> > some of them:\n>\n> Thanks. It would be good to not have those diffs if not necessary.\n>\n> > 1. There are a few places that libpq gets linked where it previously did not.\n>\n> It seems to me that your patch is doing the right thing for adminpack\n> and that its Makefile has no need to include a reference to libpq\n> source path, no?\n\nYeah. Likely a separate commit should remove the -I$(libpq_srcdir)\nfrom adminpack and old_snapshot\n\n> For dblink and postgres_fdw, the duplication comes from PG_CPPFLAGS.\n> It does not matter much in practice, but it would be nice to not have\n> unnecessary data in the project files. One thing that could be done\n> is to make Project.pm more aware of the uniqueness of the elements\n> included. But, do we really need -I$(libpq_srcdir) there anyway?\n> From what I can see, we have all the paths in -I we'd actually need\n> with or without USE_PGXS.\n\nI've changed the patch to do that for the includes. I'm now putting\nthe list of include directories in a hash table to get rid of the\nduplicates. This does shuffle the order of them around a bit. I've\ndone the same for references too.\n\n> > 3. LOWER_NODE gets defined in ltree now where it wasn't before. It's\n> > defined on Linux. Unsure why it wasn't before on Windows.\n>\n> Your patch is grabbing the value of PG_CPPFLAGS from ltree's\n> Makefile, which is fine. We may be able to remove this flag and rely\n> on pg_tolower() instead in the long run? 
I am not sure about\n> FLG_CANLOOKSIGN() though.\n\nI didn't look in detail, but it looks like if we define LOWER_NODE on\nWindows that it might break pg_upgrade. I guess you could say it's\npartially broken now as the behaviour there will depend on if you\nbuild using Visual Studio or cygwin. We'd define LOWER_NODE on cygwin\nbut not on VS. Looks like a pg_upgrade might be problematic there\ntoday.\n\nIt feels a bit annoying to add some special case to the script to\nmaintain the status quo there. An alternative to that would be to\nmodify the .c code at #ifdef LOWER_NODE to also check we're not\nbuilding on VS. Neither option seems nice.\n\nI've attached the updated patch and also a diff showing the changes in\nthe *.vcxproj files.\n\nThere are quite a few places where the hash table code for includes\nand references gets rid of duplicates that already exist today. For\nexample pgbench.vcxproj references libpgport.vcxproj and\nlibpgcommon.vcxproj twice.\n\nDavid",
"msg_date": "Tue, 22 Dec 2020 23:24:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Tue, Dec 22, 2020 at 11:24:40PM +1300, David Rowley wrote:\n> On Wed, 11 Nov 2020 at 13:44, Michael Paquier <michael@paquier.xyz> wrote:\n>> It seems to me that your patch is doing the right thing for adminpack\n>> and that its Makefile has no need to include a reference to libpq\n>> source path, no?\n> \n> Yeah. Likely a separate commit should remove the -I$(libpq_srcdir)\n> from adminpack and old_snapshot\n\nI have begun a new thread about this point as that's a separate\ntopic. I did not see other places in need of a similar cleanup:\nhttps://www.postgresql.org/message-id/X+LQpfLyk7jgzUki@paquier.xyz\n\n> I didn't look in detail, but it looks like if we define LOWER_NODE on\n> Windows that it might break pg_upgrade. I guess you could say it's\n> partially broken now as the behaviour there will depend on if you\n> build using Visual Studio or cygwin. We'd define LOWER_NODE on cygwin\n> but not on VS. Looks like a pg_upgrade might be problematic there\n> today.\n> \n> It feels a bit annoying to add some special case to the script to\n> maintain the status quo there. An alternative to that would be to\n> modify the .c code at #ifdef LOWER_NODE to also check we're not\n> building on VS. Neither option seems nice.\n\nHmm. It seems that you are right here. This influences lquery\nparsing so it may be nasty and this exists since ltree is present in\nthe tree (2002). I think that I would choose the update in the C code\nand remove LOWER_NODE while keeping the scripts clean, and documenting\ndirectly in the code why this compatibility issue exists.\nREFINT_VERBOSE is no problem, fortunately.\n\n> I've attached the updated patch and also a diff showing the changes in\n> the *.vcxproj files.\n\nThanks!\n\n> There are quite a few places where the hash table code for includes\n> and references gets rid of duplicates that already exist today. 
For\n> example pgbench.vcxproj references libpgport.vcxproj and\n> libpgcommon.vcxproj twice.\n\nThe diffs look clean. dblink has lost src/backend/, there are the\nadditions of REFINT_VERBOSE and LOWER_NODE but the bulk of the diffs\ncomes from a change in the order of items listed, while removing\nduplicates.\n\nI have tested your patch, and this is causing compilation failures for\nhstore_plpython, jsonb_plpython and ltree_plpython. So\nAddTransformModule is missing something here when compiling with\nPython.\n--\nMichael",
"msg_date": "Wed, 23 Dec 2020 14:46:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, 23 Dec 2020 at 18:46, Michael Paquier <michael@paquier.xyz> wrote:\n> I have begun a new thread about this point as that's a separate\n> topic. I did not see other places in need of a similar cleanup:\n> https://www.postgresql.org/message-id/X+LQpfLyk7jgzUki@paquier.xyz\n\nThanks. I'll look at that shortly.\n\n> > I didn't look in detail, but it looks like if we define LOWER_NODE on\n> > Windows that it might break pg_upgrade. I guess you could say it's\n> > partially broken now as the behaviour there will depend on if you\n> > build using Visual Studio or cygwin. We'd define LOWER_NODE on cygwin\n> > but not on VS. Looks like a pg_upgrade might be problematic there\n> > today.\n> >\n> > It feels a bit annoying to add some special case to the script to\n> > maintain the status quo there. An alternative to that would be to\n> > modify the .c code at #ifdef LOWER_NODE to also check we're not\n> > building on VS. Neither option seems nice.\n>\n> Hmm. It seems that you are right here. This influences lquery\n> parsing so it may be nasty and this exists since ltree is present in\n> the tree (2002). I think that I would choose the update in the C code\n> and remove LOWER_NODE while keeping the scripts clean, and documenting\n> directly in the code why this compatibility issue exists.\n> REFINT_VERBOSE is no problem, fortunately.\n\nI ended up modifying each place in the C code where we check\nLOWER_NODE. I found 2 places, one in crc32.c and another in ltree.h.\nI added the same comment to both to explain why there's a check for\n!defined(_MSC_VER) there. I'm not particularly happy about this code,\nbut I don't really see what else to do right now.\n\n> I have tested your patch, and this is causing compilation failures for\n> hstore_plpython, jsonb_plpython and ltree_plpython. So\n> AddTransformModule is missing something here when compiling with\n> Python.\n\nOh thanks for finding that. 
That was due to some incorrect Perl code\nI'd written to add the includes from one project into another. Fixed\nby:\n\n- $p->AddIncludeDir(join(\";\", $pl_proj->{includes}));\n+ foreach my $inc (keys %{ $pl_proj->{includes} } )\n+ {\n+ $p->AddIncludeDir($inc);\n+ }\n+\n\nDavid",
"msg_date": "Wed, 30 Dec 2020 10:03:34 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, 30 Dec 2020 at 10:03, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 23 Dec 2020 at 18:46, Michael Paquier <michael@paquier.xyz> wrote:\n> > I have tested your patch, and this is causing compilation failures for\n> > hstore_plpython, jsonb_plpython and ltree_plpython. So\n> > AddTransformModule is missing something here when compiling with\n> > Python.\n>\n> Oh thanks for finding that. That was due to some incorrect Perl code\n> I'd written to add the includes from one project into another. Fixed\n> by:\n\nI accidentally attached the wrong patch before. Now attaching the correct one.\n\nDavid",
"msg_date": "Wed, 30 Dec 2020 10:07:29 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 10:07:29AM +1300, David Rowley wrote:\n> -#ifdef LOWER_NODE\n> +/*\n> + * Below we ignore the fact that LOWER_NODE is defined when compiling with\n> + * MSVC. The reason for this is that earlier versions of the MSVC build\n> + * scripts failed to define LOWER_NODE. More recent version of the MSVC\n> + * build scripts parse makefiles which results in LOWER_NODE now being\n> + * defined. We check for _MSC_VER here so as not to break pg_upgrade when\n> + * upgrading from versions MSVC versions where LOWER_NODE was not defined.\n> + */\n> +#if defined(LOWER_NODE) && !defined(_MSC_VER)\n> #include <ctype.h>\n> #define TOLOWER(x)\ttolower((unsigned char) (x))\n> #else\n\nWhile on it, do you think that it would be more readable if we remove\ncompletely LOWER_NODE and use only a check based on _MSC_VER for those\ntwo files in ltree? This could also be handled as a separate change.\n\n> +\tforeach my $line (split /\\n/, $mf)\n> +\t{\n> +\t\tif ($line =~ /^[A-Za-z0-9_]*\\.o:\\s(.*)/)\n> +\t\t{\n> +\t\t\tforeach my $file (split /\\s+/, $1)\n> +\t\t\t{\n> +\t\t\t\tforeach my $proj (@projects)\n> +\t\t\t\t{\n> +\t\t\t\t\t$proj->AddFileConditional(\"$subdir/$n/$file\");\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t}\n> +\t}\nLooking closer at this change, I don't think that this is completely\ncorrect and that could become a trap. This is adding quite a bit of\ncomplexity to take care of contrib_extrasource getting empty, and it\nactually overlaps with the handling of OBJS done in AddDir(), no?\n--\nMichael",
"msg_date": "Tue, 12 Jan 2021 16:18:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "Thank you for having another look at this.\n\nOn Tue, 12 Jan 2021 at 20:18, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 30, 2020 at 10:07:29AM +1300, David Rowley wrote:\n> > -#ifdef LOWER_NODE\n> > +/*\n> > + * Below we ignore the fact that LOWER_NODE is defined when compiling with\n> > + * MSVC. The reason for this is that earlier versions of the MSVC build\n> > + * scripts failed to define LOWER_NODE. More recent version of the MSVC\n> > + * build scripts parse makefiles which results in LOWER_NODE now being\n> > + * defined. We check for _MSC_VER here so as not to break pg_upgrade when\n> > + * upgrading from versions MSVC versions where LOWER_NODE was not defined.\n> > + */\n> > +#if defined(LOWER_NODE) && !defined(_MSC_VER)\n> > #include <ctype.h>\n> > #define TOLOWER(x) tolower((unsigned char) (x))\n> > #else\n>\n> While on it, do you think that it would be more readable if we remove\n> completely LOWER_NODE and use only a check based on _MSC_VER for those\n> two files in ltree? This could also be handled as a separate change.\n\nI'm hesitant to touch that. If anyone is running an instance compiled\nwith a non-default LOWER_NODE then we might give them some trouble if\nthey pg_upgrade their database later.\n\n> > + foreach my $line (split /\\n/, $mf)\n> > + {\n> > + if ($line =~ /^[A-Za-z0-9_]*\\.o:\\s(.*)/)\n> > + {\n> > + foreach my $file (split /\\s+/, $1)\n> > + {\n> > + foreach my $proj (@projects)\n> > + {\n> > + $proj->AddFileConditional(\"$subdir/$n/$file\");\n> > + }\n> > + }\n> > + }\n> > + }\n> Looking closer at this change, I don't think that this is completely\n> correct and that could become a trap. This is adding quite a bit of\n> complexity to take care of contrib_extrasource getting empty, and it\n> actually overlaps with the handling of OBJS done in AddDir(), no?\n\nhmm. 
I'm not quite sure if I know what you mean by \"trap\" here.\n\ncontrib/cube/Makefile has an example of what this is trying to catch:\n\n# cubescan is compiled as part of cubeparse\ncubeparse.o: cubescan.c\n\nI don't really see what other options there are apart from just not\nget rid of $contrib_extrasource.\n\nCan you give an example of what sort of scenario you've got in mind\nwhere it'll cause issues?\n\nI've attached a rebased patch.\n\nDavid",
"msg_date": "Wed, 3 Mar 2021 22:37:56 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, 3 Mar 2021 at 22:37, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached a rebased patch.\n\nI've rebased this again.\n\nI also moved away from using hash tables for storing references and\nlibraries. I was having some problems getting psql to compile due to\nthe order of the dependencies being reversed due to the order being at\nthe mercy of Perl's hash function. There's mention of this in\nMakefile.global.in:\n\n# libpq_pgport is for use by client executables (not libraries) that use libpq.\n# We force clients to pull symbols from the non-shared libraries libpgport\n# and libpgcommon rather than pulling some libpgport symbols from libpq just\n# because libpq uses those functions too. This makes applications less\n# dependent on changes in libpq's usage of pgport (on platforms where we\n# don't have symbol export control for libpq). To do this we link to\n# pgport before libpq. This does cause duplicate -lpgport's to appear\n# on client link lines, since that also appears in $(LIBS).\n# libpq_pgport_shlib is the same idea, but for use in client shared libraries.\n\nI switched these back to arrays but added an additional check to only\nadd new items to the array if we don't already have an element with\nthe same value.\n\nI've attached the diffs in the *.vcxproj files between patched and unpatched.\n\nDavid",
"msg_date": "Mon, 19 Apr 2021 23:47:53 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "> diff --git a/src/tools/msvc/MSBuildProject.pm b/src/tools/msvc/MSBuildProject.pm\n> index ebb169e201..68606a296d 100644\n> --- a/src/tools/msvc/MSBuildProject.pm\n> +++ b/src/tools/msvc/MSBuildProject.pm\n> @@ -310,11 +310,12 @@ sub WriteItemDefinitionGroup\n> \tmy $targetmachine =\n> \t   $self->{platform} eq 'Win32' ? 'MachineX86' : 'MachineX64';\n> \n> -\tmy $includes = $self->{includes};\n> -\tunless ($includes eq '' or $includes =~ /;$/)\n> +\tmy $includes = \"\";\n> +\tforeach my $inc (@{ $self->{includes} })\n> \t{\n> -\t\t$includes .= ';';\n> +\t\t$includes .= $inc . \";\";\n> \t}\n\nPerl note: you can do this more easily as \n\n my $includes = join ';', @{$self->{includes}};\n $includes .= ';' unless $includes eq '';\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n Are you not unsure you want to delete Firefox?\n [Not unsure] [Not not unsure] [Cancel]\n http://smylers.hates-software.com/2008/01/03/566e45b2.html\n\n\n",
"msg_date": "Mon, 19 Apr 2021 12:24:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "\nOn 4/19/21 12:24 PM, Alvaro Herrera wrote:\n>> diff --git a/src/tools/msvc/MSBuildProject.pm b/src/tools/msvc/MSBuildProject.pm\n>> index ebb169e201..68606a296d 100644\n>> --- a/src/tools/msvc/MSBuildProject.pm\n>> +++ b/src/tools/msvc/MSBuildProject.pm\n>> @@ -310,11 +310,12 @@ sub WriteItemDefinitionGroup\n>> \tmy $targetmachine =\n>> \t $self->{platform} eq 'Win32' ? 'MachineX86' : 'MachineX64';\n>> \n>> -\tmy $includes = $self->{includes};\n>> -\tunless ($includes eq '' or $includes =~ /;$/)\n>> +\tmy $includes = \"\";\n>> +\tforeach my $inc (@{ $self->{includes} })\n>> \t{\n>> -\t\t$includes .= ';';\n>> +\t\t$includes .= $inc . \";\";\n>> \t}\n> Perl note: you can do this more easily as \n>\n> my $includes = join ';', @{$self->{includes}};\n> $includes .= ';' unless $includes eq '';\n>\n\nor even more simply:\n\n\n my $includes = join ';', @{$self->{includes}}, \"\";\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 19 Apr 2021 17:28:02 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Tue, 20 Apr 2021 at 09:28, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 4/19/21 12:24 PM, Alvaro Herrera wrote:\n> >> diff --git a/src/tools/msvc/MSBuildProject.pm b/src/tools/msvc/MSBuildProject.pm\n> >> index ebb169e201..68606a296d 100644\n> >> --- a/src/tools/msvc/MSBuildProject.pm\n> >> +++ b/src/tools/msvc/MSBuildProject.pm\n> >> @@ -310,11 +310,12 @@ sub WriteItemDefinitionGroup\n> >> my $targetmachine =\n> >> $self->{platform} eq 'Win32' ? 'MachineX86' : 'MachineX64';\n> >>\n> >> - my $includes = $self->{includes};\n> >> - unless ($includes eq '' or $includes =~ /;$/)\n> >> + my $includes = \"\";\n> >> + foreach my $inc (@{ $self->{includes} })\n> >> {\n> >> - $includes .= ';';\n> >> + $includes .= $inc . \";\";\n> >> }\n> > Perl note: you can do this more easily as\n> >\n> > my $includes = join ';', @{$self->{includes}};\n> > $includes .= ';' unless $includes eq '';\n> >\n>\n> or even more simply:\n>\n>\n> my $includes = join ';', @{$self->{includes}}, \"\";\n\nBoth look more compact. Thanks. I'll include this for the next version.\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Apr 2021 15:00:53 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 5:18 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 3 Mar 2021 at 22:37, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I've attached a rebased patch.\n>\n> I've rebased this again.\n>\n> I also moved away from using hash tables for storing references and\n> libraries. I was having some problems getting psql to compile due to\n> the order of the dependencies being reversed due to the order being at\n> the mercy of Perl's hash function. There's mention of this in\n> Makefile.global.in:\n>\n> # libpq_pgport is for use by client executables (not libraries) that use libpq.\n> # We force clients to pull symbols from the non-shared libraries libpgport\n> # and libpgcommon rather than pulling some libpgport symbols from libpq just\n> # because libpq uses those functions too. This makes applications less\n> # dependent on changes in libpq's usage of pgport (on platforms where we\n> # don't have symbol export control for libpq). To do this we link to\n> # pgport before libpq. This does cause duplicate -lpgport's to appear\n> # on client link lines, since that also appears in $(LIBS).\n> # libpq_pgport_shlib is the same idea, but for use in client shared libraries.\n>\n> I switched these back to arrays but added an additional check to only\n> add new items to the array if we don't already have an element with\n> the same value.\n>\n> I've attached the diffs in the *.vcxproj files between patched and unpatched.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 14 Jul 2021 21:31:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Thu, 15 Jul 2021 at 04:01, vignesh C <vignesh21@gmail.com> wrote:\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nI've rebased this patch and broken it down into 6 individual patches.\n\n0001: Removes an include directory for dblink. This appears like it's\nnot needed. It was added in ee3b4188a (Jan 2010), but an earlier\ncommit, 320c7eb8c (June 2008) seems to have made it pointless. It's\nstill a mystery to me why ee3b4188a would have been required in the\nfirst place.\n\n0002: Parses -D in the CPPFLAGS of Makefiles and uses those in the\nMSVC script. It also adjusts the ltree contrib module so that we do\nthe same LOWER_NODE behaviour as we did before. The MSVC scripts\nappear to have mistakenly forgotten to define LOWER_NODE as it is in\nthe Makefiles.\n\n0003: Is a tidy up patch to make the 'includes' field an array rather\nthan a string\n\n0004: Adds code to check for duplicate references and libraries before\nadding new ones of the same name to the project.\n\n0005: Is mostly a tidy up so that we use AddFile consistently instead\nof sometimes doing $self->{files}->{<name>} = 1;\n\n0006: I'm not so sure about. It attempts to do a bit more Makefile\nparsing to get rid of contrib_extrasource and the majority of\ncontrib_uselibpgport and contrib_uselibpgcommon usages.\n\nDavid",
"msg_date": "Wed, 28 Jul 2021 00:25:31 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n\n> On Thu, 15 Jul 2021 at 04:01, vignesh C <vignesh21@gmail.com> wrote:\n>> The patch does not apply on Head anymore, could you rebase and post a\n>> patch. I'm changing the status to \"Waiting for Author\".\n>\n> I've rebased this patch and broken it down into 6 individual patches.\n\nI don't know anything about the MSVC build process, but I figured I\ncould do a general Perl code review.\n\n> --- a/src/tools/msvc/Mkvcbuild.pm\n> +++ b/src/tools/msvc/Mkvcbuild.pm\n[…] \n> +\t# Process custom compiler flags\n> +\tif ($mf =~ /^PG_CPPFLAGS\\s*=\\s*(.*)$/mg || $mf =~ /^override\\s*CPPFLAGS\\s*(?:[\\+\\:])?=\\s*(.*)$/mg)\n ^^^^^^^^^^^\nThis is a very convoluted way of writing [+:]?\n\n> --- a/src/tools/msvc/Project.pm\n> +++ b/src/tools/msvc/Project.pm\n> @@ -58,7 +58,7 @@ sub AddFiles\n> \n> \twhile (my $f = shift)\n> \t{\n> -\t\t$self->{files}->{ $dir . \"/\" . $f } = 1;\n> +\t\tAddFile($self, $dir . \"/\" . $f, 1);\n\nAddFile is a method, so should be called as $self->AddFile(…).\n\n> --- a/src/tools/msvc/Mkvcbuild.pm\n> +++ b/src/tools/msvc/Mkvcbuild.pm\n> @@ -36,16 +36,12 @@ my @unlink_on_exit;\n[…]\n> +\t\t\telsif ($flag =~ /^-I(.*)$/)\n> +\t\t\t{\n> +\t\t\t\tforeach my $proj (@projects)\n> +\t\t\t\t{\n> +\t\t\t\t\tif ($1 eq '$(libpq_srcdir)')\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\t$proj->AddIncludeDir('src\\interfaces\\libpq');\n> +\t\t\t\t\t\t$proj->AddReference($libpq);\n> +\t\t\t\t\t}\n> +\t\t\t\t}\n> +\t\t\t}\n\nIt would be better to do the if check outside the for loop.\n\n> --- a/src/tools/msvc/Project.pm\n> +++ b/src/tools/msvc/Project.pm\n> @@ -51,6 +51,16 @@ sub AddFile\n> \treturn;\n> }\n> \n> +sub AddFileAndAdditionalFiles\n> +{\n> +\tmy ($self, $filename) = @_;\n> +\tif (FindAndAddAdditionalFiles($self, $filename) != 1)\n\nAgain, FindAndAddAdditionalFiles is a method and should be called as\n$self->FindAndAddAdditionalFiles($filename).\n\n> +\t{\n> +\t\t$self->{files}->{$filename} = 1;\n> 
+\t}\n> +\treturn;\n> +}\n\n\n- ilmari\n\n\n",
"msg_date": "Tue, 27 Jul 2021 14:44:14 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> 0001: Removes an include directory for dblink. This appears like it's\n> not needed. It was added in ee3b4188a (Jan 2010), but an earlier\n> commit, 320c7eb8c (June 2008) seems to have made it pointless. It's\n> still a mystery to me why ee3b4188a would have been required in the\n> first place.\n\nFWIW, I poked around in the mailing list archives around that date\nand couldn't find any supporting discussion. It does seem like it\nshouldn't be needed, given that dblink's Makefile does no such thing.\n\nI'd suggest just pushing your 0001 and seeing if the buildfarm\ncomplains.\n\n> 0002: Parses -D in the CPPFLAGS of Makefiles and uses those in the\n> MSVC script. It also adjusts the ltree contrib module so that we do\n> the same LOWER_NODE behaviour as we did before. The MSVC scripts\n> appear to have mistakenly forgotten to define LOWER_NODE as it is in\n> the Makefiles.\n\nThe LOWER_NODE situation seems like a mess, but I think the right fix\nis to remove -DLOWER_NODE from the Makefile altogether and move the\nresponsibility into the C code. You could have ltree.h do\n\n#if !defined(_MSC_VER)\n#define LOWER_NODE 1\n#endif\n\nand put the explanatory comment on that, not on the uses of the flag.\n\nHaven't looked at the rest.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Jul 2021 10:33:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On 2021-Jul-28, David Rowley wrote:\n\n> 0003: Is a tidy up patch to make the 'includes' field an array rather\n> than a string\n\nIn this one, you can avoid turning one line into four with map,\n\n-\t$p->AddIncludeDir($pl_proj->{includes});\n+\tforeach my $inc (@{ $pl_proj->{includes} })\n+\t{\n+\t\t$p->AddIncludeDir($inc);\n+\t}\n\nInstead of that you can do something like this:\n\n+\tmap { $p->AddIncludeDir($_); } @{$pl_proj->{includes}};\n\n> 0004: Adds code to check for duplicate references and libraries before\n> adding new ones of the same name to the project.\n\nI think using the return value of grep as a boolean is confusing. It\nseems more legible to compare to 0. So instead of this:\n\n+\t\tif (! grep { $_ eq $ref} @{ $self->{references} })\n+\t\t{\n+\t\t\tpush @{ $self->{references} }, $ref;\n+\t\t}\n\nuse something like:\n\n+\t\tif (grep { $_ eq $ref} @{ $self->{references} } == 0)\n\n\n> 0006: I'm not so sure about. It attempts to do a bit more Makefile\n> parsing to get rid of contrib_extrasource and the majority of\n> contrib_uselibpgport and contrib_uselibpgcommon usages.\n\nI wonder if we could fix up libpq_pipeline's Makefile somehow to get rid\nof the remaining ones.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 27 Jul 2021 11:01:03 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "\nOn 7/27/21 11:01 AM, Alvaro Herrera wrote:\n> On 2021-Jul-28, David Rowley wrote:\n>\n>> 0003: Is a tidy up patch to make the 'includes' field an array rather\n>> than a string\n> In this one, you can avoid turning one line into four with map,\n>\n> -\t$p->AddIncludeDir($pl_proj->{includes});\n> +\tforeach my $inc (@{ $pl_proj->{includes} })\n> +\t{\n> +\t\t$p->AddIncludeDir($inc);\n> +\t}\n>\n> Instead of that you can do something like this:\n>\n> +\tmap { $p->AddIncludeDir($_); } @{$pl_proj->{includes}};\n\n\nusing map() for a side effect like this is generally frowned upon. See\n<https://metacpan.org/pod/Perl::Critic::Policy::BuiltinFunctions::ProhibitVoidMap>\n\n\n do { $p->AddIncludeDir($_); } foreach @{$pl_proj->{includes}};\n\n\nwould be an ok one-liner.\n\n\n>\n>> 0004: Adds code to check for duplicate references and libraries before\n>> adding new ones of the same name to the project.\n> I think using the return value of grep as a boolean is confusing. It\n> seems more legible to compare to 0. So instead of this:\n>\n> +\t\tif (! grep { $_ eq $ref} @{ $self->{references} })\n> +\t\t{\n> +\t\t\tpush @{ $self->{references} }, $ref;\n> +\t\t}\n>\n> use something like:\n>\n> +\t\tif (grep { $_ eq $ref} @{ $self->{references} } == 0)\n>\n\n\nBut I believe that's a widely used idiom :-)\n\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 11:29:58 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> On 2021-Jul-28, David Rowley wrote:\n>\n>> 0003: Is a tidy up patch to make the 'includes' field an array rather\n>> than a string\n>\n> In this one, you can avoid turning one line into four with map,\n>\n> -\t$p->AddIncludeDir($pl_proj->{includes});\n> +\tforeach my $inc (@{ $pl_proj->{includes} })\n> +\t{\n> +\t\t$p->AddIncludeDir($inc);\n> +\t}\n>\n> Instead of that you can do something like this:\n>\n> +\tmap { $p->AddIncludeDir($_); } @{$pl_proj->{includes}};\n\nmap (and grep) should never be used in void context for side effects. Our\nperlcritic policy doesn't currently forbid that, but it should (and\nthere is one violation of that in contrib/intarray). I'll submit a\npatch for that separately.\n\nThe acceptable one-liner version would be a postfix for loop:\n\n\t$p->AddIncludeDir($_) for @{$pl_proj->{includes}};\n\n>> 0004: Adds code to check for duplicate references and libraries before\n>> adding new ones of the same name to the project.\n>\n> I think using the return value of grep as a boolean is confusing. It\n> seems more legible to compare to 0. So instead of this:\n>\n> +\t\tif (! grep { $_ eq $ref} @{ $self->{references} })\n> +\t\t{\n> +\t\t\tpush @{ $self->{references} }, $ref;\n> +\t\t}\n>\n> use something like:\n>\n> +\t\tif (grep { $_ eq $ref} @{ $self->{references} } == 0)\n\nI disagree. Using grep in boolean context is perfectly idiomatic perl.\nWhat would be more idiomatic is List::Util::any, but that's not available\nwithout upgrading List::Util from CPAN on Perls older than 5.20, so we\ncan't use that.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 27 Jul 2021 16:52:38 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On 2021-Jul-27, Dagfinn Ilmari Mannsåker wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> > +\t\tif (grep { $_ eq $ref} @{ $self->{references} } == 0)\n> \n> I disagree. Using grep in boolean context is perfectly idiomatic perl.\n> What would be more idiomatic is List::Util::any, but that's not availble\n> without upgrading List::Util from CPAN on Perls older than 5.20, so we\n> can't use that.\n\nI was wondering if instead of grepping the whole list for each addition\nit would make sense to push always, and do a unique-ification step at\nthe end.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 27 Jul 2021 12:01:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, 28 Jul 2021 at 02:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > 0001: Removes an include directory for dblink. This appears like it's\n> > not needed. It was added in ee3b4188a (Jan 2010), but an earlier\n> > commit, 320c7eb8c (June 2008) seems to have made it pointless. It's\n> > still a mystery to me why ee3b4188a would have been required in the\n> > first place.\n>\n> FWIW, I poked around in the mailing list archives around that date\n> and couldn't find any supporting discussion. It does seem like it\n> shouldn't be needed, given that dblink's Makefile does no such thing.\n\nI think the reason is that src/backend/utils/Makefile symlinks\nfmgroids.h into src/include/utils. The copy you added in 320c7eb8c\nseems to be the MSVC build's equivalent of that.\n\nDavid\n\n\n",
"msg_date": "Wed, 28 Jul 2021 12:07:15 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, 28 Jul 2021 at 01:44, Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> I don't know anything about the MSVC build process, but I figured I\n> could do a general Perl code review.\n\nThanks for looking at this. Perl review is very useful as it's\ncertainly not my native tongue, as you might have noticed.\n\n> > --- a/src/tools/msvc/Mkvcbuild.pm\n> > +++ b/src/tools/msvc/Mkvcbuild.pm\n> […]\n> > + # Process custom compiler flags\n> > + if ($mf =~ /^PG_CPPFLAGS\\s*=\\s*(.*)$/mg || $mf =~ /^override\\s*CPPFLAGS\\s*(?:[\\+\\:])?=\\s*(.*)$/mg)\n> ^^^^^^^^^^^\n> This is a very convoluted way of writing [+:]?\n\nI've replaced the (?:[\\+\\:])? with [+:]? It's a bit of a blind\nadjustment. I see that the resulting vcxproj files have not changed as\na result of that.\n\n> > --- a/src/tools/msvc/Project.pm\n> > +++ b/src/tools/msvc/Project.pm\n> > @@ -58,7 +58,7 @@ sub AddFiles\n> >\n> > while (my $f = shift)\n> > {\n> > - $self->{files}->{ $dir . \"/\" . $f } = 1;\n> > + AddFile($self, $dir . \"/\" . 
$f, 1);\n>\n> AddFile is a method, so should be called as $self->AddFile(…).\n\nAdjusted thanks.\n\n> > --- a/src/tools/msvc/Mkvcbuild.pm\n> > +++ b/src/tools/msvc/Mkvcbuild.pm\n> > @@ -36,16 +36,12 @@ my @unlink_on_exit;\n> […]\n> > + elsif ($flag =~ /^-I(.*)$/)\n> > + {\n> > + foreach my $proj (@projects)\n> > + {\n> > + if ($1 eq '$(libpq_srcdir)')\n> > + {\n> > + $proj->AddIncludeDir('src\\interfaces\\libpq');\n> > + $proj->AddReference($libpq);\n> > + }\n> > + }\n> > + }\n>\n> It would be better to do the if check outside the for loop.\n\nAgreed.\n\n> > --- a/src/tools/msvc/Project.pm\n> > +++ b/src/tools/msvc/Project.pm\n> > @@ -51,6 +51,16 @@ sub AddFile\n> > return;\n> > }\n> >\n> > +sub AddFileAndAdditionalFiles\n> > +{\n> > + my ($self, $filename) = @_;\n> > + if (FindAndAddAdditionalFiles($self, $filename) != 1)\n>\n> Again, FindAndAddAdditionalFiles is a method and should be called as\n> $self->FindAndAddAdditionalFiles($filename).\n>\n> > + {\n> > + $self->{files}->{$filename} = 1;\n> > + }\n> > + return;\n> > +}\n\nAdjusted.\n\nI'll send updated patches once I look at the other reviews.\n\nDavid\n\n\n",
"msg_date": "Wed, 28 Jul 2021 15:54:48 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, 28 Jul 2021 at 03:52, Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>\n> > On 2021-Jul-28, David Rowley wrote:\n> >\n> >> 0003: Is a tidy up patch to make the 'includes' field an array rather\n> >> than a string\n> >\n> > In this one, you can avoid turning one line into four with map,\n> >\n> > - $p->AddIncludeDir($pl_proj->{includes});\n> > + foreach my $inc (@{ $pl_proj->{includes} })\n> > + {\n> > + $p->AddIncludeDir($inc);\n> > + }\n> >\n> > Instead of that you can do something like this:\n> >\n> > + map { $p->AddIncludeDir($_); } @{$pl_proj->{includes}};\n>\n> map (and grep) should never be used void context for side effects. Our\n> perlcritic policy doesn't currently forbid that, but it should (and\n> there is one violation of that in contrib/intarray). I'll submit a\n> patch for that separately.\n>\n> The acceptable one-liner version would be a postfix for loop:\n>\n> $p->AddIncludeDir($_) for @{$pl_proj->{includes}};\n\nI'm not sure if this is all just getting overly smart about it.\nThere's already a loop next to this doing:\n\nforeach my $type_lib (@{ $type_proj->{libraries} })\n{\n $p->AddLibrary($type_lib);\n}\n\nI don't object to changing mine, if that's what people think who are\nmore familiar with Perl than I am, but I do think consistency is a\ngood thing. TBH, I kinda prefer the multi-line loop. I think most\npeople that look at these scripts are going to be primarily C coders,\nso assuming each of the variations do the same job, then I'd rather\nsee us stick to the most C like version.\n\nIn the meantime, I'll just change it to $p->AddIncludeDir($_) for\n@{$pl_proj->{includes}};. I just wanted to note my thoughts.\n\nDavid\n\n\n",
"msg_date": "Wed, 28 Jul 2021 16:07:39 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, 28 Jul 2021 at 04:01, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jul-27, Dagfinn Ilmari Mannsåker wrote:\n>\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>\n> > > + if (grep { $_ eq $ref} @{ $self->{references} } == 0)\n> >\n> > I disagree. Using grep in boolean context is perfectly idiomatic perl.\n> > What would be more idiomatic is List::Util::any, but that's not availble\n> > without upgrading List::Util from CPAN on Perls older than 5.20, so we\n> > can't use that.\n>\n> I was wondering if instead of grepping the whole list for each addition\n> it would make sense to push always, and do a unique-ification step at\n> the end.\n\nI see [1] has some thoughts on this, I don't think performance will\nmatter much here though. I think the order of the final array is\nlikely more important. I didn't test, but I imagine using one of\nthose hash solutions might end up having the array elements in some\nhashtable like order.\n\nI'm not quite sure if I can tell here if it's ok to leave the grep\nas-is or if I should be changing it to:\n\n if (grep { $_ eq $ref} @{ $self->{references} } == 0)\n\nDavid\n\n[1] https://www.oreilly.com/library/view/perl-cookbook/1565922433/ch04s07.html\n\n\n",
"msg_date": "Wed, 28 Jul 2021 16:13:35 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Wed, 28 Jul 2021 at 03:52, Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > I think using the return value of grep as a boolean is confusing. It\n> > seems more legible to compare to 0. So instead of this:\n> >\n> > + if (! grep { $_ eq $ref} @{ $self->{references} })\n> > + {\n> > + push @{ $self->{references} }, $ref;\n> > + }\n> >\n> > use something like:\n> >\n> > + if (grep { $_ eq $ref} @{ $self->{references} } == 0)\n>\n> I disagree. Using grep in boolean context is perfectly idiomatic perl.\n> What would be more idiomatic is List::Util::any, but that's not availble\n> without upgrading List::Util from CPAN on Perls older than 5.20, so we\n> can't use that.\n\nOk, if the grep stuff is ok as is with the boolean comparison then I'd\nsay 0002 and 0003 of the attached are ok to go.\n\nI pushed the v9 0001 and 0005 patch after adjusting the AddFile($self,\n...) to become $self->AddFile(...)\n\nI've adjusted the attached 0001 patch (previously 0002) to define\nLOWER_NODE in ltree.h as mentioned by Tom.\n\n0004 still needs work.\n\nThanks for all the reviews.\n\nDavid",
"msg_date": "Thu, 29 Jul 2021 00:05:43 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Thu, 29 Jul 2021 at 00:05, David Rowley <dgrowleyml@gmail.com> wrote:\n> I pushed the v9 0001 and 0005 patch after adjusting the AddFile($self,\n> ...) to become $self->AddFile(...)\n\nI pushed all the patches, apart from the 0004 patch.\n\nOne weird thing I noticed with the -D define patch (245de4845) and the\nLOWER_NODE adjustment is that in crc32.c we do:\n\n#ifdef LOWER_NODE\n#include <ctype.h>\n#define TOLOWER(x) tolower((unsigned char) (x))\n#else\n#define TOLOWER(x) (x)\n#endif\n\nmeaning when LOWER_NODE Is defined that the CRC is hashed all in lower\ncase, effectively making it case-insensitive. Whereas in ltree.h we\ndo:\n\n#ifdef LOWER_NODE\n#define FLG_CANLOOKSIGN(x) ( ( (x) & ( LQL_NOT | LVAR_ANYEND |\nLVAR_SUBLEXEME ) ) == 0 )\n#else\n#define FLG_CANLOOKSIGN(x) ( ( (x) & ( LQL_NOT | LVAR_ANYEND |\nLVAR_SUBLEXEME | LVAR_INCASE ) ) == 0 )\n#endif\n\nSo effectively if LOWER_NODE is defined then we *don't* pass the\nLVAR_INCASE which makes the comparisons case-sensitive! I've never\nused ltree before, so might just be misunderstanding something, but\nthis does smell quite buggy to me. However, I just made it work how it\nalways has worked.\n\n> 0004 still needs work.\n\nI adjusted this one so that it does the right thing for all the\nexisting .l and .y files and correctly adds the relevant .c file when\nrequired, but to be honest, I just made this work by checking that the\neach of the vcxproj files match before and after the change.\n\nThere is code that parses the likes of the following in the cube\ncontrib module's Makefile:\n\n# cubescan is compiled as part of cubeparse\ncubeparse.o: cubescan.c\n\nHere, since cubescan.c is not added to the project files for\ncompilation, I made that just call the: AddDependantFiles function,\nwhich just searches for .l and .y files that exist with the same name,\nbut does not add the actual file passed to the function. 
This means\nthat we add cubescan.l to the project but not cubescan.c.\n\nThis is different from what happens with ecpg with pgc.l. We also add\npgc.c to the project in that case because it's mentioned in OBJS in\nsrc/interfaces/ecpg/preproc/Makefile.\n\nThe only change in the outputted vcxproj files that the attached\nproduces is an order change in the AdditionalDependencies of\nlibpq_pipeline.vcxproj\n\nI also managed to remove libpq_pipeline from contrib_uselibpgport and\ncontrib_uselibpgcommon. The parsing for SHLIB_LINK_INTERNAL and\nPG_LIBS_INTERNAL only allowed for = not +=.\n\nDoes anyone have any thoughts on where we should draw the line on\nparsing Makefiles? I'm a little worried that I'm adding parsing just\nfor exactly how the Makefiles are today and that it could easily be\nbroken if something is adjusted later. I'm not all that skilled with\nmake, so I'm struggling to judge this for myself.\n\nDavid",
"msg_date": "Fri, 30 Jul 2021 15:05:30 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
},
{
"msg_contents": "On Fri, 30 Jul 2021 at 15:05, David Rowley <dgrowleyml@gmail.com> wrote:\n> Does anyone have any thoughts on where we should draw the line on\n> parsing Makefiles? I'm a little worried that I'm adding parsing just\n> for exactly how the Makefiles are today and that it could easily be\n> broken if something is adjusted later. I'm not all that skilled with\n> make, so I'm struggling to judge this for myself.\n\nAfter thinking about this some more, I think since we're never going\nto make these Perl scripts do everything that make can do, that if the\nparsing that's being added seems reasonable and works for what we have\ntoday, then I don't think there is much reason not to just go with\nthis.\n\nThe v10 patch I attached previously output almost identical *.vcxproj\nfiles. The only change was in libpq_pipeline.vcxproj where the order\nof the AdditionalDependencies changed to have ws2_32.lib first rather\nthan somewhere in the middle.\n\nI've now pushed the final patch in this series.\n\nThank you to everyone who looked at one or more of these patches.\n\nDavid\n\n\n",
"msg_date": "Mon, 9 Aug 2021 19:55:07 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reduce the number of special cases to build contrib modules on\n windows"
}
] |
[
{
"msg_contents": "I noticed that hash_array_extended() does not pass down the collation to \nthe element's collation function, unlike hash_array(). As a \nconsequence, hash partitioning using text arrays as partition key fails.\n\nThe attached patch fixes this. I propose to backpatch this.",
"msg_date": "Mon, 2 Nov 2020 08:40:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "hash_array_extended() needs to pass down collation"
},
{
"msg_contents": "On 02/11/2020 09:40, Peter Eisentraut wrote:\n> I noticed that hash_array_extended() does not pass down the collation to\n> the element's collation function, unlike hash_array(). As a\n> consequence, hash partitioning using text arrays as partition key fails.\n> \n> The attached patch fixes this. I propose to backpatch this.\n\n+1. Straightforward oversight in commit 5e1963fb764e.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 2 Nov 2020 13:52:48 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: hash_array_extended() needs to pass down collation"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I noticed that hash_array_extended() does not pass down the collation to \n> the element's collation function, unlike hash_array(). As a \n> consequence, hash partitioning using text arrays as partition key fails.\n\n> The attached patch fixes this. I propose to backpatch this.\n\nLGTM\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Nov 2020 10:01:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: hash_array_extended() needs to pass down collation"
},
{
"msg_contents": "On Mon, Nov 02, 2020 at 10:01:53AM -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I noticed that hash_array_extended() does not pass down the collation to \n>> the element's collation function, unlike hash_array(). As a \n>> consequence, hash partitioning using text arrays as partition key fails.\n> \n>> The attached patch fixes this. I propose to backpatch this.\n> \n> LGTM\n\n+1.\n--\nMichael",
"msg_date": "Tue, 3 Nov 2020 19:48:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: hash_array_extended() needs to pass down collation"
},
{
"msg_contents": "On 2020-11-03 11:48, Michael Paquier wrote:\n> On Mon, Nov 02, 2020 at 10:01:53AM -0500, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> I noticed that hash_array_extended() does not pass down the collation to\n>>> the element's collation function, unlike hash_array(). As a\n>>> consequence, hash partitioning using text arrays as partition key fails.\n>>\n>>> The attached patch fixes this. I propose to backpatch this.\n>>\n>> LGTM\n> \n> +1.\n\ncommitted\n\n\n",
"msg_date": "Wed, 4 Nov 2020 12:51:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: hash_array_extended() needs to pass down collation"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've attached a patch that fixes a typo in verify_heapam.c recently added.\n\ns/wheter/whether/\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 2 Nov 2020 17:05:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix a typo in verify_heapam.c"
},
{
"msg_contents": "On 02/11/2020 10:05, Masahiko Sawada wrote:\n> Hi all,\n> \n> I've attached a patch that fixes a typo in verify_heapam.c recently added.\n> \n> s/wheter/whether/\n\nThis was just fixed in commit 8a15e735b.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 2 Nov 2020 10:10:56 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in verify_heapam.c"
},
{
"msg_contents": "On Mon, Nov 2, 2020 at 5:10 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 02/11/2020 10:05, Masahiko Sawada wrote:\n> > Hi all,\n> >\n> > I've attached a patch that fixes a typo in verify_heapam.c recently added.\n> >\n> > s/wheter/whether/\n>\n> This was just fixed in commit 8a15e735b.\n\nOh, I had missed it. Thanks.\n\nRegards,\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 2 Nov 2020 23:49:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix a typo in verify_heapam.c"
}
] |
[
{
"msg_contents": "While looking at the parallel copy patches, it started to annoy me how \nlarge copy.c is. It confuses my little head. (Ok, it's annoyed me many \ntimes in the past, but I haven't done anything about it.)\n\nThere isn't much common code between COPY FROM and COPY TO, so I propose \nthat we split copy.c into two: copyfrom.c and copyto.c. See attached. I \nthink that's much nicer.\n\nUsually when I'm looking at COPY, I'm specifically looking at COPY FROM \nor COPY TO. There is symmetry between the two, like SendCopyBegin() and \nReceiveCopyBegin(), but more often I drill into the details of either \nCOPY FROM or TO. And when I need to look at those symmetric cases, I \nwant to open the two functions side-by-side anyway, so they might as \nwell be in different files.\n\nThere is some code duplication now. BeginCopy() was called by both \nBeginCopyFrom() and BeginCopyTo(). I copied the common parts of it to \nBeginCopyFrom and BeginCopyTo(), but some of the code was only \napplicable to FROM or TO.\n\nI also split/duplicated the CopyStateData struct into CopyFromStateData \nand CopyToStateData. Many of the fields were common, but many were not, \nand I think some duplication is nicer than a struct where you use some \nfields and others are unused. I put the common formatting options into a \nnew CopyFormatOptions struct.\n\nMoving code around always makes backpatching bug fixes harder, but I \nthink we should bite the bullet. If the parallel COPY work is committed, \nit's going to move things around a lot anyway.\n\n- Heikki",
"msg_date": "Mon, 2 Nov 2020 11:03:29 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Split copy.c"
},
{
"msg_contents": "On 2020-11-02 10:03, Heikki Linnakangas wrote:\n> While looking at the parallel copy patches, it started to annoy me how\n> large copy.c is. It confuses my little head. (Ok, it's annoyed me many\n> times in the past, but I haven't done anything about it.)\n\n> [0001-Split-copy.c-into-...o.c-and-copyfrom.c.patch]\n\nThere seems to be an oversight of contrib/file_fdw. (debian 10, gcc \n8.3.0)\n\nAfter:\n\n./configure \n--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.split_copy \n--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.split_copy/bin.fast \n--libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.split_copy/lib.fast \n--with-pgport=6973 --quiet --enable-depend --with-libxml \n--with-libxslt --with-zlib --with-openssl --enable-tap-tests \n--with-extra-version=_split_copy_1102_90d8\n\n\n... these errors+warnings from contrib/file_fdw:\n\n\n-- [2020.11.02 10:31:53 split_copy/1] make contrib\nfile_fdw.c:108:2: error: unknown type name ‘CopyState’\n CopyState cstate; /* COPY execution state */\n ^~~~~~~~~\nfile_fdw.c: In function ‘fileBeginForeignScan’:\nfile_fdw.c:658:2: error: unknown type name ‘CopyState’; did you mean \n‘CopyToState’?\n CopyState cstate;\n ^~~~~~~~~\n CopyToState\nfile_fdw.c:680:10: warning: passing argument 3 of ‘BeginCopyFrom’ from \nincompatible pointer type [-Wincompatible-pointer-types]\n filename,\n ^~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:60:76: note: expected ‘Node *’ {aka \n‘struct Node *’} but argument is of type ‘char *’\n extern CopyFromState BeginCopyFrom(ParseState *pstate, Relation rel, \nNode *whereClause,\n \n~~~~~~^~~~~~~~~~~\nfile_fdw.c:681:10: error: incompatible type for argument 4 of \n‘BeginCopyFrom’\n is_program,\n ^~~~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:61:23: note: expected ‘const char *’ \nbut argument is of type ‘_Bool’\n const char *filename,\n ~~~~~~~~~~~~^~~~~~~~\nIn file included from 
../../src/include/access/tupdesc.h:19,\n from ../../src/include/access/htup_details.h:19,\n from file_fdw.c:18:\n../../src/include/nodes/pg_list.h:65:19: warning: passing argument 6 of \n‘BeginCopyFrom’ from incompatible pointer type \n[-Wincompatible-pointer-types]\n #define NIL ((List *) NULL)\n ~^~~~~~~~~~~~~~\nfile_fdw.c:683:10: note: in expansion of macro ‘NIL’\n NIL,\n ^~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:62:48: note: expected \n‘copy_data_source_cb’ {aka ‘int (*)(void *, int, int)’} but argument is \nof type ‘List *’ {aka ‘struct List *’}\n bool is_program, copy_data_source_cb data_source_cb, List \n*attnamelist, List *options);\n ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~\nfile_fdw.c:678:11: error: too few arguments to function ‘BeginCopyFrom’\n cstate = BeginCopyFrom(NULL,\n ^~~~~~~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:60:22: note: declared here\n extern CopyFromState BeginCopyFrom(ParseState *pstate, Relation rel, \nNode *whereClause,\n ^~~~~~~~~~~~~\nfile_fdw.c: In function ‘fileIterateForeignScan’:\nfile_fdw.c:714:20: warning: cast to pointer from integer of different \nsize [-Wint-to-pointer-cast]\n errcallback.arg = (void *) festate->cstate;\n ^\nfile_fdw.c:731:30: warning: passing argument 1 of ‘NextCopyFrom’ makes \npointer from integer without a cast [-Wint-conversion]\n found = NextCopyFrom(festate->cstate, NULL,\n ~~~~~~~^~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:64:40: note: expected ‘CopyFromState’ \n{aka ‘struct CopyFromStateData *’} but argument is of type ‘int’\n extern bool NextCopyFrom(CopyFromState cstate, ExprContext *econtext,\n\n ~~~~~~~~~~~~~~^~~~~~\nfile_fdw.c: In function ‘fileReScanForeignScan’:\nfile_fdw.c:751:21: warning: passing argument 1 of ‘EndCopyFrom’ makes \npointer from integer without a cast [-Wint-conversion]\n EndCopyFrom(festate->cstate);\n ~~~~~~~^~~~~~~~\nIn file included from 
file_fdw.c:24:\n../../src/include/commands/copy.h:63:39: note: expected ‘CopyFromState’ \n{aka ‘struct CopyFromStateData *’} but argument is of type ‘int’\n extern void EndCopyFrom(CopyFromState cstate);\n ~~~~~~~~~~~~~~^~~~~~\nfile_fdw.c:755:17: warning: passing argument 3 of ‘BeginCopyFrom’ from \nincompatible pointer type [-Wincompatible-pointer-types]\n festate->filename,\n ~~~~~~~^~~~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:60:76: note: expected ‘Node *’ {aka \n‘struct Node *’} but argument is of type ‘char *’\n extern CopyFromState BeginCopyFrom(ParseState *pstate, Relation rel, \nNode *whereClause,\n \n~~~~~~^~~~~~~~~~~\nfile_fdw.c:756:17: error: incompatible type for argument 4 of \n‘BeginCopyFrom’\n festate->is_program,\n ~~~~~~~^~~~~~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:61:23: note: expected ‘const char *’ \nbut argument is of type ‘_Bool’\n const char *filename,\n ~~~~~~~~~~~~^~~~~~~~\nIn file included from ../../src/include/access/tupdesc.h:19,\n from ../../src/include/access/htup_details.h:19,\n from file_fdw.c:18:\n../../src/include/nodes/pg_list.h:65:19: warning: passing argument 6 of \n‘BeginCopyFrom’ from incompatible pointer type \n[-Wincompatible-pointer-types]\n #define NIL ((List *) NULL)\n ~^~~~~~~~~~~~~~\nfile_fdw.c:758:10: note: in expansion of macro ‘NIL’\n NIL,\n ^~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:62:48: note: expected \n‘copy_data_source_cb’ {aka ‘int (*)(void *, int, int)’} but argument is \nof type ‘List *’ {aka ‘struct List *’}\n bool is_program, copy_data_source_cb data_source_cb, List \n*attnamelist, List *options);\n ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~\nfile_fdw.c:753:20: error: too few arguments to function ‘BeginCopyFrom’\n festate->cstate = BeginCopyFrom(NULL,\n ^~~~~~~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:60:22: note: declared here\n extern CopyFromState 
BeginCopyFrom(ParseState *pstate, Relation rel, \nNode *whereClause,\n ^~~~~~~~~~~~~\nfile_fdw.c: In function ‘fileEndForeignScan’:\nfile_fdw.c:773:22: warning: passing argument 1 of ‘EndCopyFrom’ makes \npointer from integer without a cast [-Wint-conversion]\n EndCopyFrom(festate->cstate);\n ~~~~~~~^~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:63:39: note: expected ‘CopyFromState’ \n{aka ‘struct CopyFromStateData *’} but argument is of type ‘int’\n extern void EndCopyFrom(CopyFromState cstate);\n ~~~~~~~~~~~~~~^~~~~~\nfile_fdw.c: In function ‘file_acquire_sample_rows’:\n\n\nfile_fdw.c: In function ‘file_acquire_sample_rows’:\nfile_fdw.c:1110:2: error: unknown type name ‘CopyState’; did you mean \n‘CopyToState’?\n CopyState cstate;\n ^~~~~~~~~\n CopyToState\nfile_fdw.c:1128:39: warning: passing argument 3 of ‘BeginCopyFrom’ from \nincompatible pointer type [-Wincompatible-pointer-types]\n cstate = BeginCopyFrom(NULL, onerel, filename, is_program, NULL, NIL,\n ^~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:60:76: note: expected ‘Node *’ {aka \n‘struct Node *’} but argument is of type ‘char *’\n extern CopyFromState BeginCopyFrom(ParseState *pstate, Relation rel, \nNode *whereClause,\n \n~~~~~~^~~~~~~~~~~\nfile_fdw.c:1128:49: error: incompatible type for argument 4 of \n‘BeginCopyFrom’\n cstate = BeginCopyFrom(NULL, onerel, filename, is_program, NULL, NIL,\n ^~~~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:61:23: note: expected ‘const char *’ \nbut argument is of type ‘_Bool’\n const char *filename,\n ~~~~~~~~~~~~^~~~~~~~\nIn file included from ../../src/include/access/tupdesc.h:19,\n from ../../src/include/access/htup_details.h:19,\n from file_fdw.c:18:\n../../src/include/nodes/pg_list.h:65:19: warning: passing argument 6 of \n‘BeginCopyFrom’ from incompatible pointer type \n[-Wincompatible-pointer-types]\n #define NIL ((List *) NULL)\n 
~^~~~~~~~~~~~~~\nfile_fdw.c:1128:67: note: in expansion of macro ‘NIL’\n cstate = BeginCopyFrom(NULL, onerel, filename, is_program, NULL, NIL,\n ^~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:62:48: note: expected \n‘copy_data_source_cb’ {aka ‘int (*)(void *, int, int)’} but argument is \nof type ‘List *’ {aka ‘struct List *’}\n bool is_program, copy_data_source_cb data_source_cb, List \n*attnamelist, List *options);\n ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~\nfile_fdw.c:1128:11: error: too few arguments to function ‘BeginCopyFrom’\n cstate = BeginCopyFrom(NULL, onerel, filename, is_program, NULL, NIL,\n ^~~~~~~~~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:60:22: note: declared here\n extern CopyFromState BeginCopyFrom(ParseState *pstate, Relation rel, \nNode *whereClause,\n ^~~~~~~~~~~~~\nfile_fdw.c:1144:20: warning: cast to pointer from integer of different \nsize [-Wint-to-pointer-cast]\n errcallback.arg = (void *) cstate;\n ^\nfile_fdw.c:1159:24: warning: passing argument 1 of ‘NextCopyFrom’ makes \npointer from integer without a cast [-Wint-conversion]\n found = NextCopyFrom(cstate, NULL, values, nulls);\n ^~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:64:40: note: expected ‘CopyFromState’ \n{aka ‘struct CopyFromStateData *’} but argument is of type ‘int’\n extern bool NextCopyFrom(CopyFromState cstate, ExprContext *econtext,\n ~~~~~~~~~~~~~~^~~~~~\nfile_fdw.c:1211:14: warning: passing argument 1 of ‘EndCopyFrom’ makes \npointer from integer without a cast [-Wint-conversion]\n EndCopyFrom(cstate);\n ^~~~~~\nIn file included from file_fdw.c:24:\n../../src/include/commands/copy.h:63:39: note: expected ‘CopyFromState’ \n{aka ‘struct CopyFromStateData *’} but argument is of type ‘int’\n extern void EndCopyFrom(CopyFromState cstate);\n ~~~~~~~~~~~~~~^~~~~~\nmake[1]: *** [../../src/Makefile.global:921: file_fdw.o] Error 1\nmake: *** [Makefile:95: all-file_fdw-recurse] Error 
2\n-- contrib make returned 2 - abort\n\n\n\n\n",
"msg_date": "Mon, 02 Nov 2020 10:36:53 +0100",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On 02/11/2020 11:36, Erik Rijkers wrote:\n> On 2020-11-02 10:03, Heikki Linnakangas wrote:\n>> While looking at the parallel copy patches, it started to annoy me how\n>> large copy.c is. It confuses my little head. (Ok, it's annoyed me many\n>> times in the past, but I haven't done anything about it.)\n> \n>> [0001-Split-copy.c-into-...o.c-and-copyfrom.c.patch]\n> \n> There seems to be an oversight of contrib/file_fdw. (debian 10, gcc\n> 8.3.0)\n\nAh yeah, I missed updating file_fdw. Here's a new patch.\n\n- Heikki",
"msg_date": "Mon, 2 Nov 2020 13:19:03 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On 2020-11-02 12:19, Heikki Linnakangas wrote:\n> On 02/11/2020 11:36, Erik Rijkers wrote:\n>> On 2020-11-02 10:03, Heikki Linnakangas wrote:\n>>> While looking at the parallel copy patches, it started to annoy me \n>>> how\n>>> large copy.c is. It confuses my little head. (Ok, it's annoyed me \n>>> many\n>>> times in the past, but I haven't done anything about it.)\n>> \n>>> [0001-Split-copy.c-into-...o.c-and-copyfrom.c.patch]\n>> \n>> There seems to be an oversight of contrib/file_fdw. (debian 10, gcc\n>> 8.3.0)\n> \n> Ah yeah, I missed updating file_fdw. Here's a new patch.\n\nSomething still not quite right in the compile-with-assert:\n\n-- [2020.11.02 12:49:12 split_copy/0] make core: make --quiet -j 4 \n(speed 0=debug_assertions speed 1=fast)\n\n\nIn file included from ../../../src/include/postgres.h:46,\n from copyto.c:15:\ncopyto.c: In function ‘BeginCopyTo’:\ncopyto.c:477:11: error: ‘is_from’ undeclared (first use in this \nfunction); did you mean ‘is_program’?\n Assert(!is_from);\n ^~~~~~~\n../../../src/include/c.h:790:9: note: in definition of macro ‘Assert’\n if (!(condition)) \\\n ^~~~~~~~~\ncopyto.c:477:11: note: each undeclared identifier is reported only once \nfor each function it appears in\n Assert(!is_from);\n ^~~~~~~\n../../../src/include/c.h:790:9: note: in definition of macro ‘Assert’\n if (!(condition)) \\\n ^~~~~~~~~\nmake[3]: *** [../../../src/Makefile.global:921: copyto.o] Error 1\nmake[2]: *** [common.mk:39: commands-recursive] Error 2\nmake[2]: *** Waiting for unfinished jobs....\nmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\n\n\n\n\n",
"msg_date": "Mon, 02 Nov 2020 13:24:12 +0100",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On Mon, Nov 2, 2020 at 2:33 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> While looking at the parallel copy patches, it started to annoy me how\n> large copy.c is. It confuses my little head. (Ok, it's annoyed me many\n> times in the past, but I haven't done anything about it.)\n>\n\n+1 for having copy from & copy to functionality in separate files.\n\nThis is present in both copyfrom.c & copyto.c, can it be removed from\none place & moved to a common header file?\nstatic const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n\nCopyDest was changed to:\ntypedef enum CopySource\n{\nCOPY_FILE, /* from file (or a piped program) */\nCOPY_OLD_FE, /* from frontend (2.0 protocol) */\nCOPY_NEW_FE, /* from frontend (3.0 protocol) */\nCOPY_CALLBACK /* from callback function */\n} CopySource;\n\ntypedef enum CopyDest\n{\nCOPY_FILE, /* to file (or a piped program) */\nCOPY_OLD_FE, /* to frontend (2.0 protocol) */\nCOPY_NEW_FE, /* to frontend (3.0 protocol) */\n} CopyDest;\n\nShould we have one enum or both are required, if both are required we\ncould think of naming like COPY_TO_FILE, COPY_FROM_FILE, it will make\nit more clearer.\n\nThere is one warning while applying the v2 patch:\nApplying: Split copy.c into copyto.c and copyfrom.c.\n/home/vignesh/postgres/postgres/.git/rebase-apply/patch:909: trailing\nwhitespace.\nwarning: 1 line adds whitespace errors.\n\nThere is one compilation error, may be this Assert is not required:\ncopyto.c: In function ‘BeginCopyTo’:\ncopyto.c:477:11: error: ‘is_from’ undeclared (first use in this function)\n Assert(!is_from);\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 2 Nov 2020 18:02:05 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-02 11:03:29 +0200, Heikki Linnakangas wrote:\n> While looking at the parallel copy patches, it started to annoy me how large\n> copy.c is.\n\nAgreed.\n\n\n> There isn't much common code between COPY FROM and COPY TO, so I propose\n> that we split copy.c into two: copyfrom.c and copyto.c. See attached. I thin\n> that's much nicer.\n\nNot quite convinced that's the right split - or perhaps there's just\nmore potential. My feeling is that splitting out all the DML related\ncode would make the code considerably easier to read.\n\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Mon, 2 Nov 2020 09:23:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On 02/11/2020 19:23, Andres Freund wrote:\n> On 2020-11-02 11:03:29 +0200, Heikki Linnakangas wrote:\n>> There isn't much common code between COPY FROM and COPY TO, so I propose\n>> that we split copy.c into two: copyfrom.c and copyto.c. See attached. I thin\n>> that's much nicer.\n> \n> Not quite convinced that's the right split - or perhaps there's just\n> more potential. My feeling is that splitting out all the DML related\n> code would make the code considerably easier to read.\n\nWhat do you mean by DML related code?\n\n- Heikki\n\n\n",
"msg_date": "Mon, 2 Nov 2020 19:43:38 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On 2020-11-02 19:43:38 +0200, Heikki Linnakangas wrote:\n> On 02/11/2020 19:23, Andres Freund wrote:\n> > On 2020-11-02 11:03:29 +0200, Heikki Linnakangas wrote:\n> > > There isn't much common code between COPY FROM and COPY TO, so I propose\n> > > that we split copy.c into two: copyfrom.c and copyto.c. See attached. I thin\n> > > that's much nicer.\n> > \n> > Not quite convinced that's the right split - or perhaps there's just\n> > more potential. My feeling is that splitting out all the DML related\n> > code would make the code considerably easier to read.\n> \n> What do you mean by DML related code?\n\nBasically all the insertion related code (e.g CopyMultiInsert*, lots of\ncode in CopyFrom()) and perhaps also the type input invocations.\n\n- Andres\n\n\n",
"msg_date": "Mon, 2 Nov 2020 10:34:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On Tue, 3 Nov 2020 at 07:35, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-11-02 19:43:38 +0200, Heikki Linnakangas wrote:\n> > On 02/11/2020 19:23, Andres Freund wrote:\n> > > On 2020-11-02 11:03:29 +0200, Heikki Linnakangas wrote:\n> > > > There isn't much common code between COPY FROM and COPY TO, so I propose\n> > > > that we split copy.c into two: copyfrom.c and copyto.c. See attached. I thin\n> > > > that's much nicer.\n> > >\n> > > Not quite convinced that's the right split - or perhaps there's just\n> > > more potential. My feeling is that splitting out all the DML related\n> > > code would make the code considerably easier to read.\n> >\n> > What do you mean by DML related code?\n>\n> Basically all the insertion related code (e.g CopyMultiInsert*, lots of\n> code in CopyFrom()) and perhaps also the type input invocations.\n\nI quite like the fact that those are static and inline-able. I very\nmuch imagine there'd be a performance hit if we moved them out to\nanother .c file and made them extern. Some of those functions can be\nquite hot when copying into a partitioned table.\n\nDavid\n\n\n",
"msg_date": "Tue, 3 Nov 2020 15:15:27 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On 03/11/2020 04:15, David Rowley wrote:\n> On Tue, 3 Nov 2020 at 07:35, Andres Freund <andres@anarazel.de> wrote:\n>>\n>> On 2020-11-02 19:43:38 +0200, Heikki Linnakangas wrote:\n>>> On 02/11/2020 19:23, Andres Freund wrote:\n>>>> On 2020-11-02 11:03:29 +0200, Heikki Linnakangas wrote:\n>>>>> There isn't much common code between COPY FROM and COPY TO, so I propose\n>>>>> that we split copy.c into two: copyfrom.c and copyto.c. See attached. I thin\n>>>>> that's much nicer.\n>>>>\n>>>> Not quite convinced that's the right split - or perhaps there's just\n>>>> more potential. My feeling is that splitting out all the DML related\n>>>> code would make the code considerably easier to read.\n>>>\n>>> What do you mean by DML related code?\n>>\n>> Basically all the insertion related code (e.g CopyMultiInsert*, lots of\n>> code in CopyFrom()) and perhaps also the type input invocations.\n\nHmm. COPY FROM consists of two parts:\n\n1. Parse the input text/CSV/binary format into Datums.\n\n2. Store the Datums in the table.\n\nThey're somewhat split already. If you want to only do the parsing part, \nyou can use the BeginCopyFrom() and NextCopyFrom() functions to get \nDatums, without storing them to a table. file_fdw uses that.\n\nYeah, that might indeed be another good split point. Attached is quick \nprototype of that. I tried to avoid doing other changes as part of this \nsplit, but some further refactoring might be good. Like extracting the \nstate for the input parsing from CopyFromStateData into a separate struct.\n\nWith these patches:\n\n$ wc -l src/backend/commands/copy*.c\n 782 src/backend/commands/copy.c\n 1641 src/backend/commands/copyfrom.c\n 1646 src/backend/commands/copyfromparse.c\n 1363 src/backend/commands/copyto.c\n 5432 total\n\n> I quite like the fact that those are static and inline-able. I very\n> much imagine there'd be a performance hit if we moved them out to\n> another .c file and made them extern. 
Some of those functions can be\n> quite hot when copying into a partitioned table.\n\nAgreed.\n\n- Heikki",
"msg_date": "Tue, 3 Nov 2020 09:38:36 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On 2020-11-03 08:38, Heikki Linnakangas wrote:\n\n> [v3-0001-Split-copy.c-into-copyto.c-and-copyfrom.c.patch]\n> [v3-0002-Split-copyfrom.c-further-into-copyfrom.c-and-copy.patch]\n\nThe patches apply ok, but I get these errors:\n\nIn file included from ../../../src/include/postgres.h:46,\n from copyto.c:15:\ncopyto.c: In function ‘BeginCopyTo’:\ncopyto.c:477:11: error: ‘is_from’ undeclared (first use in this \nfunction); did you mean ‘is_program’?\n Assert(!is_from);\n ^~~~~~~\n../../../src/include/c.h:790:9: note: in definition of macro ‘Assert’\n if (!(condition)) \\\n ^~~~~~~~~\ncopyto.c:477:11: note: each undeclared identifier is reported only once \nfor each function it appears in\n Assert(!is_from);\n ^~~~~~~\n../../../src/include/c.h:790:9: note: in definition of macro ‘Assert’\n if (!(condition)) \\\n ^~~~~~~~~\n\n\n",
"msg_date": "Tue, 03 Nov 2020 09:46:36 +0100",
"msg_from": "Erikjan Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On 03/11/2020 10:46, Erikjan Rijkers wrote:\n> On 2020-11-03 08:38, Heikki Linnakangas wrote:\n> \n>> [v3-0001-Split-copy.c-into-copyto.c-and-copyfrom.c.patch]\n>> [v3-0002-Split-copyfrom.c-further-into-copyfrom.c-and-copy.patch]\n> \n> The patches apply ok, but I get these errors:\n\nHere's a fixed version. There was a stray Assert that didn't make sense \nanymore.\n\n- Heikki",
"msg_date": "Tue, 3 Nov 2020 11:30:19 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "Hey Heikki,\n\nOn Tue, Nov 3, 2020 at 1:30 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\nThanks for working on this refactor. LGTM! I had a few minor comments:\n\n1. We should rename the CopyFormatOptions *cstate param in\nProcessCopyOptions() to CopyFormatOptions *options and List *options to\nList *raw_options IMO, to make it more readable.\n\n2. We need to update the header comment for Copy{From,To}StateData. It is\ncurrently the old comment from CopyStateData.\n\n3. Can we add docs for the missing fields in the header comment for\nBeginCopyFrom()?\n\n4.\n\n> /*\n> * Working state for COPY TO/FROM\n> */\n> MemoryContext copycontext; /* per-copy execution context */\n\nComment needs to be updated for the COPY operation.\n\n5.\n\n> I also split/duplicated the CopyStateData struct into CopyFromStateData\n> and CopyToStateData. Many of the fields were common, but many were not,\n> and I think some duplication is nicer than a struct where you use some\n> fields and others are unused. I put the common formatting options into a\n> new CopyFormatOptions struct.\n\nWould we be better off if we sub-struct CopyState <- Copy{From,To}State?\nLike this:\ntypedef struct Copy{From|To}StateData\n{\nCopyState cs;\n// Fields specific to COPY FROM/TO follow..\n}\n\n6.\n\n> /* create workspace for CopyReadAttributes results */\n> if (!cstate->opts.binary)\n\nCan we replace this if with an else?\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Wed, 11 Nov 2020 11:49:57 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On Tue, Nov 03, 2020 at 03:15:27PM +1300, David Rowley wrote:\n> On Tue, 3 Nov 2020 at 07:35, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2020-11-02 19:43:38 +0200, Heikki Linnakangas wrote:\n> > > On 02/11/2020 19:23, Andres Freund wrote:\n> > > > On 2020-11-02 11:03:29 +0200, Heikki Linnakangas wrote:\n> > > > > There isn't much common code between COPY FROM and COPY TO, so I propose\n> > > > > that we split copy.c into two: copyfrom.c and copyto.c. See attached. I thin\n> > > > > that's much nicer.\n> > > >\n> > > > Not quite convinced that's the right split - or perhaps there's just\n> > > > more potential. My feeling is that splitting out all the DML related\n> > > > code would make the code considerably easier to read.\n> > >\n> > > What do you mean by DML related code?\n> >\n> > Basically all the insertion related code (e.g CopyMultiInsert*, lots of\n> > code in CopyFrom()) and perhaps also the type input invocations.\n> \n> I quite like the fact that those are static and inline-able. I very\n> much imagine there'd be a performance hit if we moved them out to\n> another .c file and made them extern. Some of those functions can be\n> quite hot when copying into a partitioned table.\n\nFor another patch [0], I moved into copy.h:\n+typedef struct CopyMultiInsertBuffer\n+typedef struct CopyMultiInsertInfo\n+CopyMultiInsertBufferInit(ResultRelInfo *rri)\n+CopyMultiInsertInfoSetupBuffer(CopyMultiInsertInfo *miinfo,\n+CopyMultiInsertInfoIsFull(CopyMultiInsertInfo *miinfo)\n+CopyMultiInsertBufferCleanup(CopyMultiInsertInfo *miinfo,\n+CopyMultiInsertInfoNextFreeSlot(CopyMultiInsertInfo *miinfo,\n+CopyMultiInsertInfoStore(CopyMultiInsertInfo *miinfo, ResultRelInfo *rri,\n\nThat's an experimental part 0002 of my patch in response to Simon's suggestion.\nMaybe your response will be that variants of those interfaces should be added\nto nodeModifyTable.[ch] instead of moving them. 
Currently I'm passing\n(void*)mtstate as cstate - if there were a generic interface, that would be a\nvoid *state or so.\n\n[0] https://commitfest.postgresql.org/30/2553/\n should INSERT SELECT use a BulkInsertState? (and multi_insert)\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 15 Nov 2020 20:28:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "On 16/11/2020 04:28, Justin Pryzby wrote:\n> On Tue, Nov 03, 2020 at 03:15:27PM +1300, David Rowley wrote:\n>> On Tue, 3 Nov 2020 at 07:35, Andres Freund <andres@anarazel.de> wrote:\n>>>\n>>> On 2020-11-02 19:43:38 +0200, Heikki Linnakangas wrote:\n>>>> On 02/11/2020 19:23, Andres Freund wrote:\n>>>>> On 2020-11-02 11:03:29 +0200, Heikki Linnakangas wrote:\n>>>>>> There isn't much common code between COPY FROM and COPY TO, so I propose\n>>>>>> that we split copy.c into two: copyfrom.c and copyto.c. See attached. I thin\n>>>>>> that's much nicer.\n>>>>>\n>>>>> Not quite convinced that's the right split - or perhaps there's just\n>>>>> more potential. My feeling is that splitting out all the DML related\n>>>>> code would make the code considerably easier to read.\n>>>>\n>>>> What do you mean by DML related code?\n>>>\n>>> Basically all the insertion related code (e.g CopyMultiInsert*, lots of\n>>> code in CopyFrom()) and perhaps also the type input invocations.\n>>\n>> I quite like the fact that those are static and inline-able. I very\n>> much imagine there'd be a performance hit if we moved them out to\n>> another .c file and made them extern. Some of those functions can be\n>> quite hot when copying into a partitioned table.\n> \n> For another patch [0], I moved into copy.h:\n> +typedef struct CopyMultiInsertBuffer\n> +typedef struct CopyMultiInsertInfo\n> +CopyMultiInsertBufferInit(ResultRelInfo *rri)\n> +CopyMultiInsertInfoSetupBuffer(CopyMultiInsertInfo *miinfo,\n> +CopyMultiInsertInfoIsFull(CopyMultiInsertInfo *miinfo)\n> +CopyMultiInsertBufferCleanup(CopyMultiInsertInfo *miinfo,\n> +CopyMultiInsertInfoNextFreeSlot(CopyMultiInsertInfo *miinfo,\n> +CopyMultiInsertInfoStore(CopyMultiInsertInfo *miinfo, ResultRelInfo *rri,\n> \n> That's an experimental part 0002 of my patch in response to Simon's suggestion.\n> Maybe your response will be that variants of those interfaces should be added\n> to nodeModifyTable.[ch] instead of moving them.\n\nNice. 
I don't think that affects this patch too much.\n\nI would suggest renaming renaming the functions and structs to remove \nthe \"Copy\"-prefix. COPY uses them, but so does INSERT with the patch.\n\n> Currently I'm passing (void*)mtstate as cstate - if there were a\n> generic interface, that would be a void *state or so.\nThe functions only need cstate/mtstate to set the line number, for the \nerror callback, and to access the transition_capture field. You could \nadd a field for transition_capture in CopyMultiInsertInfo. For the line \nnumber, you could add a line number field in CopyMultiInsertInfo, set \nthat in CopyMultiInsertBufferFlush() instead of cstate->cur_lineno, and \nteach CopyFromErrorCallback() to get the line number from there.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 16 Nov 2020 12:25:45 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Split copy.c"
},
{
"msg_contents": "Thanks for feedback, attached is a new patch version.\n\nOn 11/11/2020 21:49, Soumyadeep Chakraborty wrote:\n> On Tue, Nov 3, 2020 at 1:30 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> I also split/duplicated the CopyStateData struct into CopyFromStateData\n>> and CopyToStateData. Many of the fields were common, but many were not,\n>> and I think some duplication is nicer than a struct where you use some\n>> fields and others are unused. I put the common formatting options into a\n>> new CopyFormatOptions struct.\n> \n> Would we be better off if we sub-struct CopyState <- Copy{From,To}State?\n> Like this:\n> typedef struct Copy{From|To}StateData\n> {\n> CopyState cs;\n> // Fields specific to COPY FROM/TO follow..\n> }\n\nHmm. I don't think that would be better. There isn't actually that much \nin common between CopyFromStateData and CopyToStateData, and a little \nbit of duplication seems better.\n\n> 6.\n> \n>> /* create workspace for CopyReadAttributes results */\n>> if (!cstate->opts.binary)\n> \n> Can we replace this if with an else?\n\nSeems better as it is IMO, the if- and else-branches are not really \nrelated to each other, even though they both happen to be conditioned on \ncstate->opts.binary.\n\nFixed all the other things you listed, fixed a bug in setting \n'file_encoding', and trimmed down the #includes.\n\n- Heikki",
"msg_date": "Tue, 17 Nov 2020 12:38:44 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Split copy."
},
{
"msg_contents": "On Tue, Nov 17, 2020 at 2:38 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> Thanks for feedback, attached is a new patch version.\n>\n> On 11/11/2020 21:49, Soumyadeep Chakraborty wrote:\n> > On Tue, Nov 3, 2020 at 1:30 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >> I also split/duplicated the CopyStateData struct into CopyFromStateData\n> >> and CopyToStateData. Many of the fields were common, but many were not,\n> >> and I think some duplication is nicer than a struct where you use some\n> >> fields and others are unused. I put the common formatting options into a\n> >> new CopyFormatOptions struct.\n> >\n> > Would we be better off if we sub-struct CopyState <- Copy{From,To}State?\n> > Like this:\n> > typedef struct Copy{From|To}StateData\n> > {\n> > CopyState cs;\n> > // Fields specific to COPY FROM/TO follow..\n> > }\n>\n> Hmm. I don't think that would be better. There isn't actually that much\n> in common between CopyFromStateData and CopyToStateData, and a little\n> bit of duplication seems better.\n>\n\nFair.\n\n> > 6.\n> >\n> >> /* create workspace for CopyReadAttributes results */\n> >> if (!cstate->opts.binary)\n> >\n> > Can we replace this if with an else?\n>\n> Seems better as it is IMO, the if- and else-branches are not really\n> related to each other, even though they both happen to be conditioned on\n> cstate->opts.binary.\n\nFair.\n\n> Fixed all the other things you listed, fixed a bug in setting\n> 'file_encoding', and trimmed down the #includes.\n>\n\nThanks! LGTM! Marking as Ready for Committer.\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Tue, 17 Nov 2020 22:21:18 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Split copy."
},
{
"msg_contents": "On 18/11/2020 08:21, Soumyadeep Chakraborty wrote:\n> On Tue, Nov 17, 2020 at 2:38 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> Fixed all the other things you listed, fixed a bug in setting\n>> 'file_encoding', and trimmed down the #includes.\n>>\n> Thanks! LGTM! Marking as Ready for Committer.\n\nPushed, thanks for the review!\n\n- Heikki\n\n\n",
"msg_date": "Mon, 23 Nov 2020 11:17:13 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Split copy.c"
}
] |
[
{
"msg_contents": "We directly call the heap function VM_ALL_VISIBLE in the\r\nIndexOnlyNext function. This is not in line with the design idea of\r\ntable am. If the new storage type needs to implement index only\r\nscan, he must hack the IndexOnlyNext function.\r\n\r\nSo this patch add a new table am 'tid_visible' to test visibility\r\nof tid. So that index only scan can completely use table AM.",
"msg_date": "Mon, 2 Nov 2020 09:16:26 +0000",
"msg_from": "Jinbao Chen <cjinbao@vmware.com>",
"msg_from_op": true,
"msg_subject": "Add table AM 'tid_visible'"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-02 09:16:26 +0000, Jinbao Chen wrote:\n> We directly call the heap function VM_ALL_VISIBLE in the\n> IndexOnlyNext function. This is not in line with the design idea of\n> table am. If the new storage type needs to implement index only\n> scan, he must hack the IndexOnlyNext function.\n\nYea, it's something we should improve. Have you checked if this has\nperformance impact for heap? Should we also consider planning costs?\n\n\n> So this patch add a new table am 'tid_visible' to test visibility\n> of tid. So that index only scan can completely use table AM.\n\nAs far as I can tell you have not acually attached the patch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 Nov 2020 09:14:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add table AM 'tid_visible'"
},
{
"msg_contents": "Hi Andres,\r\n\r\n> Yea, it's something we should improve. Have you checked if this has\r\n> performance impact for heap? Should we also consider planning costs?\r\n\r\nSince the visibility map is very small, all pages of the visibility map will\r\nusually reside in memory. The IO cost of accessing the visibility map can\r\nbe ignored. We should add the CPU cost of accessing visibility map. The\r\nCPU cost of accessing visibility map is usually smaller than cpu_tuple_cost.\r\nBut Postgres does not have a Macro to describe such a small cost. Should\r\nWe add one?\r\n\r\n> As far as I can tell you have not acually attached the patch.\r\n\r\nAh, forgot to upload the patch. Attach it below.",
"msg_date": "Tue, 3 Nov 2020 07:29:38 +0000",
"msg_from": "Jinbao Chen <cjinbao@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: Add table AM 'tid_visible'"
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 5:23 PM Jinbao Chen <cjinbao@vmware.com> wrote:\n>\n> Hi Andres,\n>\n>\n>\n> > Yea, it's something we should improve. Have you checked if this has\n>\n> > performance impact for heap? Should we also consider planning costs?\n>\n> Since the visibility map is very small, all pages of the visibility map will\n>\n> usually reside in memory. The IO cost of accessing the visibility map can\n>\n> be ignored. We should add the CPU cost of accessing visibility map. The\n>\n> CPU cost of accessing visibility map is usually smaller than cpu_tuple_cost.\n>\n> But Postgres does not have a Macro to describe such a small cost. Should\n>\n> We add one?\n>\n>\n>\n> > As far as I can tell you have not acually attached the patch.\n>\n> Ah, forgot to upload the patch. Attach it below.\n\nYou sent in your patch, tid_visible-1.patch to pgsql-hackers on Nov 3,\nbut you did not post it to the next CommitFest[1]. If this was\nintentional, then you need to take no action. However, if you want\nyour patch to be reviewed as part of the upcoming CommitFest, then you\nneed to add it yourself before 2021-01-01 AoE[2]. Thanks for your\ncontributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Dec 2020 17:43:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add table AM 'tid_visible'"
}
] |
[
{
"msg_contents": "Hi,\n\nhere is another patch related to using the CALL statement inside PL/pgSQL code.\n\nRepeated use of the CALL statement is expensive. How much?\n\nI wrote a synthetic test:\n\nCREATE TABLE foo(a int, b int, c int);\n\nCREATE OR REPLACE PROCEDURE public.simple_proc3(a integer, b integer, c\ninteger, cnt int, OUT r boolean)\nAS $$\nBEGIN\n INSERT INTO foo VALUES(a, b, c);\n IF cnt % 10000 = 0 THEN\n COMMIT;\n r := true;\n ELSE\n r := false;\n END IF;\nEND;\n$$ LANGUAGE plpgsql;\n\nDO $$\nDECLARE a int; b int; c int; r boolean;\nBEGIN\n TRUNCATE foo;\n FOR i IN 1..10000000\n LOOP\n a := (random() * 100)::int;\n b := (random() * 100)::int;\n c := (random() * 100)::int;\n CALL simple_proc3(a, b, c, i, r);\n IF r THEN\n RAISE NOTICE 'committed at % row', i;\n END IF;\n END LOOP;\nEND;\n$$;\n\nI tried to insert 10M rows with a commit after every 10K rows. Execution\ntime on master is ~6 minutes: 368251,691 ms (06:08,252).\n\nDO $$\nDECLARE a int; b int; c int; r boolean;\nBEGIN\n TRUNCATE foo;\n FOR i IN 1..10000000\n LOOP\n a := (random() * 100)::int;\n b := (random() * 100)::int;\n c := (random() * 100)::int;\n INSERT INTO foo VALUES(a, b, c);\n IF i % 10000 = 0 THEN\n COMMIT;\n r := true;\n ELSE\n r := false;\n END IF;\n IF r THEN\n RAISE NOTICE 'committed at % row', i;\n END IF;\n END LOOP;\nEND;\n$$;\n\nWhen I remove the CALL statement, the same work needs less than 2 minutes:\n99109,511 ms (01:39,110). So this code is three times slower when calling\none procedure. There are two significant sources of overhead:\n\na) the internal implementation of the CALL statement, which doesn't use the plan cache\nwell and does a lot of expensive operations on the pg_proc catalogue,\n\nb) the wrapper in PL/pgSQL that repeatedly re-parses the expression string.\n\nThe PL/pgSQL overhead can be reduced by using the plan cache after fixing an issue\nwith the resource owner. 
I did that, and I introduced a \"local_resowner\" for\nholding references to plans for CALL statement expressions.\n\nAfter patching, the execution time is reduced to 4 minutes: Time: 245852,846\nms (04:05,853). The overhead is still significant, but it is a 30% speedup.\n\nThe best case for this patch is about 2x better performance:\n\nCREATE OR REPLACE PROCEDURE public.simple_proc2(a integer, b integer, c\ninteger, cnt int, OUT r boolean)\nAS $$\nBEGIN\nEND;\n$$ LANGUAGE plpgsql;\n\nDO $$\nDECLARE a int; r boolean;\nBEGIN\n FOR i IN 1..10000000\n LOOP\n CALL simple_proc2((random()*100)::int, (random()*100)::int,\n(random()*100)::int, i, r);\n END LOOP;\nEND;\n$$;\n\nTime: 184667,970 ms (03:04,668), master: Time: 417463,457 ms (06:57,463)\n\nOn the other hand, the worst case is about 10% slower (probably this overhead can be\nreduced by creating \"local_resowner\" only when it is necessary):\n\nCREATE OR REPLACE FUNCTION simple_fx2(a int)\nRETURNS int AS $$\nBEGIN\n RETURN a + a;\nEND;\n$$ LANGUAGE plpgsql IMMUTABLE STRICT;\n\nDO $$\nDECLARE a int;\nBEGIN\n FOR i IN 1..10000000\n LOOP\n a := simple_fx2(i);\n END LOOP;\nEND;\n$$;\n\nTime: 5434,808 ms (00:05,435), master: Time: 4632,762 ms (00:04,633)\n\nComments, notes, ideas?\n\nRegards\n\nPavel",
"msg_date": "Mon, 2 Nov 2020 13:08:19 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "patch: reduce overhead of execution of CALL statement in no atomic\n mode from PL/pgSQL"
},
{
"msg_contents": "Hi\n\nI played with the profiler a little bit to get some data - see attached\nfile. Almost all of the overhead is related to the impossibility of using simple\nexpression evaluation (which has very good performance now).\n\nProbably there is no simple way to reduce this overhead.\n\nRegards\n\nPavel",
"msg_date": "Tue, 3 Nov 2020 18:27:34 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: patch: reduce overhead of execution of CALL statement in no\n atomic mode from PL/pgSQL"
},
{
"msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel",
"msg_date": "Fri, 1 Jan 2021 09:15:33 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: patch: reduce overhead of execution of CALL statement in no\n atomic mode from PL/pgSQL"
},
{
"msg_contents": "pá 1. 1. 2021 v 9:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> only rebase\n>\n\nrebase\n\nregards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>",
"msg_date": "Fri, 8 Jan 2021 08:16:09 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: patch: reduce overhead of execution of CALL statement in no\n atomic mode from PL/pgSQL"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> [ plpgsql-using-local-resowner-for-call-plans-20200108.patch ]\n\nI took a quick look through this patch, just reading it without\nany testing. A few thoughts:\n\n* Instead of adding an argument to GetCachedPlan and ReleaseCachedPlan,\nI think it'd be better to *replace* the useResOwner bool with\na ResourceOwner pointer, with the obvious semantics \"do nothing if\nit's NULL\". Otherwise you have to explain what it means to pass NULL\nwith useResOwner = true. In any case, changing the APIs of these\nfunctions without updating their header comments is not okay.\n\n* I'm not really happy with adding yet another nonorthogonal variant\nof SPI_execute_plan. Maybe better to do something like I did with\nSPI_prepare_extended() in commit 844fe9f15, and create a struct with\nall the inessential parameters so that we can make future API extensions\nwithout inventing whole new functions. Remember also that new SPI\nfunctions have to be documented in spi.sgml.\n\n* Do we really need a PG_TRY in exec_toplevel_block? Not to mention\ncreating and destroying a ResOwner? That seems pretty expensive, and it\nshould be unnecessary for ordinary plpgsql functions. (I'm also unhappy\nthat you've utterly falsified that function's comment without doing\nanything to update it.) This is really the part that needs more\nwork. I'm not sure that you can sell a speedup of CALL operations\nif the penalty is to slow down every other plpgsql function.\n\n* The part of the patch around exec_stmt_call is just about unreadable,\nmainly because git diff seems to think that exec_stmt_call is being\nchanged into make_callstmt_target. Maybe it'd be less messy if you\nput make_callstmt_target after exec_stmt_call.\n\n* Looks like an extra exec_prepare_plan() call snuck into\nexec_assign_expr()?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Jan 2021 16:46:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch: reduce overhead of execution of CALL statement in no\n atomic mode from PL/pgSQL"
},
{
"msg_contents": "Hi\n\npá 15. 1. 2021 v 22:46 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > [ plpgsql-using-local-resowner-for-call-plans-20200108.patch ]\n>\n> I took a quick look through this patch, just reading it without\n> any testing. A few thoughts:\n>\n> * Instead of adding an argument to GetCachedPlan and ReleaseCachedPlan,\n> I think it'd be better to *replace* the useResOwner bool with\n> a ResourceOwner pointer, with the obvious semantics \"do nothing if\n> it's NULL\". Otherwise you have to explain what it means to pass NULL\n> with useResOwner = true. In any case, changing the APIs of these\n> functions without updating their header comments is not okay.\n>\n\ndone\n\n\n> * I'm not really happy with adding yet another nonorthogonal variant\n> of SPI_execute_plan. Maybe better to do something like I did with\n> SPI_prepare_extended() in commit 844fe9f15, and create a struct with\n> all the inessential parameters so that we can make future API extensions\n> without inventing whole new functions. Remember also that new SPI\n> functions have to be documented in spi.sgml.\n>\n\ndone\n\n\n> * Do we really need a PG_TRY in exec_toplevel_block? Not to mention\n> creating and destroying a ResOwner? That seems pretty expensive, and it\n> should be unnecessary for ordinary plpgsql functions. (I'm also unhappy\n> that you've utterly falsified that function's comment without doing\n> anything to update it.) This is really the part that needs more\n> work. I'm not sure that you can sell a speedup of CALL operations\n> if the penalty is to slow down every other plpgsql function.\n>\n\nI rewrote this part - now there is no new PG_TRY. local_resowner is created\nonly when the routine is executed in non-atomic mode.\n\n\n> * The part of the patch around exec_stmt_call is just about unreadable,\n> mainly because git diff seems to think that exec_stmt_call is being\n> changed into make_callstmt_target. 
Maybe it'd be less messy if you\n> put make_callstmt_target after exec_stmt_call.\n>\n\ndone\n\n\n> * Looks like an extra exec_prepare_plan() call snuck into\n> exec_assign_expr()?\n>\n\nfixed\n\nI did performance tests, and now the slowdown in the worst case is lower\n(3-5%), and only for execution in non-atomic mode. The performance of atomic\nmode is the same.\n\nRegards\n\nPavel\n\n\n\n\n> regards, tom lane\n>",
"msg_date": "Thu, 21 Jan 2021 13:51:25 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: patch: reduce overhead of execution of CALL statement in no\n atomic mode from PL/pgSQL"
},
{
"msg_contents": "Hi\n\nThis is a slightly enhanced version of the previous patch. The\nworst-case overhead is reduced almost to zero. The local resource owner is\ncreated only when the routine is executed in non-atomic mode and\ncontains a CALL statement.\n\nRegards\n\nPavel",
"msg_date": "Thu, 21 Jan 2021 14:37:39 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: patch: reduce overhead of execution of CALL statement in no\n atomic mode from PL/pgSQL"
},
{
"msg_contents": "čt 21. 1. 2021 v 14:37 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> This is a little bit of an enhanced version of the previous patch. The\n> worst case overhead is reduced almost to zero. The local resource owner is\n> created only when routine is executed in non-atomic mode, and when routine\n> contains a CALL statement.\n>\n\nSorry. Last patch wasn't tested well.\n\nRegards\n\nPavel\n\n\n\n\n> Regards\n>\n> Pavel\n>\n>",
"msg_date": "Thu, 21 Jan 2021 15:14:10 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: patch: reduce overhead of execution of CALL statement in no\n atomic mode from PL/pgSQL"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> [ plpgsql-plan-cache-for-call-3.patch ]\n\nPushed with some additional cleanup.\n\nIt strikes me that we ought to get rid of SPI_execute_with_receiver\n(which has been added since v13) in favor of a \"SPI_execute_extended\"\nthat shares the same options struct as SPI_execute_plan_extended.\nBut I left that for tomorrow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Jan 2021 22:33:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch: reduce overhead of execution of CALL statement in no\n atomic mode from PL/pgSQL"
},
{
"msg_contents": "út 26. 1. 2021 v 4:33 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > [ plpgsql-plan-cache-for-call-3.patch ]\n>\n> Pushed with some additional cleanup.\n>\n\nThank you\n\nPavel\n\n\n> It strikes me that we ought to get rid of SPI_execute_with_receiver\n> (which has been added since v13) in favor of a \"SPI_execute_extended\"\n> that shares the same options struct as SPI_execute_plan_extended.\n> But I left that for tomorrow.\n>\n> regards, tom lane\n>",
"msg_date": "Tue, 26 Jan 2021 05:38:32 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: patch: reduce overhead of execution of CALL statement in no\n atomic mode from PL/pgSQL"
}
] |
[
{
"msg_contents": "Hello Pgsql-hackers,\n\nWhen selecting data from a json column, the result is named '?column?':\ntucha=# select info->>'suma', docn from document order by id desc limit 5;\n ?column? | docn \n----------+------\n 665.97 | 695\n 513.51 | 632\n 665.97 | 4804\n 492.12 | 4315\n 332.98 | 1302\n(5 rows)\n\nIt would be useful if the name of the column were auto-assigned based on the\nname of the json key, like in the next query:\n\ntucha=# select info->>'suma' as suma, docn from document order by id desc limit 5;\n suma | docn \n--------+------\n 665.97 | 695\n 513.51 | 632\n 665.97 | 4804\n 492.12 | 4315\n 332.98 | 1302\n(5 rows)\n\n\nWould this auto-assigned name for a column from json be useful?\n\n-- \nBest regards,\nEugen Konkov",
"msg_date": "Mon, 2 Nov 2020 17:05:29 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Proposition for autoname columns"
},
{
"msg_contents": "On Mon, Nov 2, 2020 at 05:05:29PM +0200, Eugen Konkov wrote:\n> Hello Pgsql-hackers,\n> \n> When selecting data from json column it named as '?column?'\n> tucha=# select info->>'suma', docn from document order by id desc limit 5;\n> ?column? | docn \n> ----------+------\n> 665.97 | 695\n> 513.51 | 632\n> 665.97 | 4804\n> 492.12 | 4315\n> 332.98 | 1302\n> (5 rows)\n> \n> It would be useful if the name of column will be autoassigned based on\n> name of json key. Like at next query:\n> \n> tucha=# select info->>'suma' as suma, docn from document order by id desc limit 5;\n> suma | docn \n> --------+------\n> 665.97 | 695\n> 513.51 | 632\n> 665.97 | 4804\n> 492.12 | 4315\n> 332.98 | 1302\n> (5 rows)\n> \n> \n> Would it be useful this auto assigned name for column from json?\n\nI think we could do it, but it would only work if the column was output\nas a single json value, and not a multi-key/value field. I am afraid if\nwe tried to do it, the result would be too inconsistent to be useful.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 11 Nov 2020 10:56:08 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 8:56 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> > It would be useful if the name of column will be autoassigned based on\n> > name of json key. Like at next query:\n> >\n> > tucha=# select info->>'suma' as suma, docn from document order by id\n> desc limit 5;\n> > suma | docn\n> > --------+------\n>\n> > Would it be useful this auto assigned name for column from json?\n>\n> I think we could do it, but it would only work if the column was output\n> as a single json value, and not a multi-key/value field. I am afraid if\n> we tried to do it, the result would be too inconsistent to be useful.\n>\n\nDoing it seems problematic given the nature of SQL and existing means to\nassign names to columns. If it can be done I don't see how the output\nvalue would make any difference. What is being asked for is the simple\ntextual value on the right side of the ->> (and other similar) operators to\nbe converted into a column name. I could imagine doing this at rewrite time\nby saying (in parse terms):\n\ninfo->>'suma to' becomes info->>'suma' AS \"suma to\" (specifically, add AS,\ndouble-quote the literal and stick it after the AS).\n\nIf {AS \"suma to\"} isn't valid syntax for some value of \"suma to\" just drop\nthe attempt and move on.\n\nI agree that this feature would be useful.\n\nDavid J.",
"msg_date": "Wed, 11 Nov 2020 09:39:14 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "Hello Bruce,\n\nWednesday, November 11, 2020, 5:56:08 PM, you wrote:\n\n> On Mon, Nov 2, 2020 at 05:05:29PM +0200, Eugen Konkov wrote:\n>> Hello Pgsql-hackers,\n>> \n>> When selecting data from json column it named as '?column?'\n>> tucha=# select info->>'suma', docn from document order by id desc limit 5;\n>> ?column? | docn \n>> ----------+------\n>> 665.97 | 695\n>> 513.51 | 632\n>> 665.97 | 4804\n>> 492.12 | 4315\n>> 332.98 | 1302\n>> (5 rows)\n>> \n>> It would be useful if the name of column will be autoassigned based on\n>> name of json key. Like at next query:\n>> \n>> tucha=# select info->>'suma' as suma, docn from document order by id desc limit 5;\n>> suma | docn \n>> --------+------\n>> 665.97 | 695\n>> 513.51 | 632\n>> 665.97 | 4804\n>> 492.12 | 4315\n>> 332.98 | 1302\n>> (5 rows)\n>> \n>> \n>> Would it be useful this auto assigned name for column from json?\n\n> I think we could do it, but it would only work if the column was output\n> as a single json value, and not a multi-key/value field. I am afraid if\n> we tried to do it, the result would be too inconsistent to be useful.\n\n\ncool, thank you.\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n",
"msg_date": "Wed, 11 Nov 2020 20:58:01 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n\n> On Mon, Nov 2, 2020 at 05:05:29PM +0200, Eugen Konkov wrote:\n>> Hello Pgsql-hackers,\n>> \n>> When selecting data from json column it named as '?column?'\n>> tucha=# select info->>'suma', docn from document order by id desc limit 5;\n>> ?column? | docn \n>> ----------+------\n>> 665.97 | 695\n>> 513.51 | 632\n>> 665.97 | 4804\n>> 492.12 | 4315\n>> 332.98 | 1302\n>> (5 rows)\n>> \n>> It would be useful if the name of column will be autoassigned based on\n>> name of json key. Like at next query:\n>> \n>> tucha=# select info->>'suma' as suma, docn from document order by id desc limit 5;\n>> suma | docn \n>> --------+------\n>> 665.97 | 695\n>> 513.51 | 632\n>> 665.97 | 4804\n>> 492.12 | 4315\n>> 332.98 | 1302\n>> (5 rows)\n>> \n>> \n>> Would it be useful this auto assigned name for column from json?\n>\n> I think we could do it, but it would only work if the column was output\n> as a single json value, and not a multi-key/value field. I am afraid if\n> we tried to do it, the result would be too inconsistent to be useful.\n\nCould this be done via the support function, so that the top-level\noperator/function in each select list item can return a suggested column\nname if the relevant arguments are constants?\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n",
"msg_date": "Thu, 12 Nov 2020 00:18:49 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 12:18:49AM +0000, Dagfinn Ilmari Mannsåker wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I think we could do it, but it would only work if the column was output\n> > as a single json value, and not a multi-key/value field. I am afraid if\n> > we tried to do it, the result would be too inconsistent to be useful.\n> \n> Could this be done via the support function, so that the top-level\n> operator/function in each select list item can return a suggested column\n> name if the relevant arguments are constants?\n\nYes, the user explicitly calling a function would be much easier to\npredict.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 11 Nov 2020 19:55:52 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 5:56 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Nov 12, 2020 at 12:18:49AM +0000, Dagfinn Ilmari Mannsåker wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > I think we could do it, but it would only work if the column was output\n> > > as a single json value, and not a multi-key/value field. I am afraid\n> if\n> > > we tried to do it, the result would be too inconsistent to be useful.\n> >\n> > Could this be done via the support function, so that the top-level\n> > operator/function in each select list item can return a suggested column\n> > name if the relevant arguments are constants?\n>\n> Yes, the user explicitly calling a function would be much easier to\n> predict.\n>\n>\nFor the user an operator and a function are different ways to invoke the\nsame underlying thing using different syntax. I'm not seeing how this\nsyntax difference makes this any easier to implement for explicit function\ninvocation compared to operator function invocation.\n\nDavid J.",
"msg_date": "Wed, 11 Nov 2020 22:52:50 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "\nOn 11/11/20 7:55 PM, Bruce Momjian wrote:\n> On Thu, Nov 12, 2020 at 12:18:49AM +0000, Dagfinn Ilmari Mannsåker wrote:\n>> Bruce Momjian <bruce@momjian.us> writes:\n>>> I think we could do it, but it would only work if the column was output\n>>> as a single json value, and not a multi-key/value field. I am afraid if\n>>> we tried to do it, the result would be too inconsistent to be useful.\n>> Could this be done via the support function, so that the top-level\n>> operator/function in each select list item can return a suggested column\n>> name if the relevant arguments are constants?\n> Yes, the user explicitly calling a function would be much easier to\n> predict.\n>\n\n\nI suspect this is doomed to failure. There is no guarantee that the path\nexpression is going to be static or constant across rows. Say you have\nthis table:\n\n\nx: foo, j: {\"foo\": 1, \"bar\": 2}\n\nx: bar j: {\"foo\": 3, \"bar\": 4}\n\n\nand you say:\n\n\n select j->>x from mytable;\n\n\nWhat should the column be named?\n\n\nI think we'd be trying to manage a set of corner cases, and all because\nsomeone didn't want to put \"as foo\" in their query. And if we generate a\ncolumn name in some cases and not in others there will be complaints of\ninconsistency.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 08:19:39 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "Hello Andrew,\n\nThursday, November 12, 2020, 3:19:39 PM, you wrote:\n\n\n> On 11/11/20 7:55 PM, Bruce Momjian wrote:\n>> On Thu, Nov 12, 2020 at 12:18:49AM +0000, Dagfinn Ilmari Mannsåker wrote:\n>>> Bruce Momjian <bruce@momjian.us> writes:\n>>>> I think we could do it, but it would only work if the column was output\n>>>> as a single json value, and not a multi-key/value field. I am afraid if\n>>>> we tried to do it, the result would be too inconsistent to be useful.\n>>> Could this be done via the support function, so that the top-level\n>>> operator/function in each select list item can return a suggested column\n>>> name if the relevant arguments are constants?\n>> Yes, the user explicitly calling a function would be much easier to\n>> predict.\n>>\n\n\n> I suspect this is doomed to failure. There is no guarantee that the path\n> expression is going to be static or constant across rows. Say you have\n> this table:\n\n\n> x: foo, j: {\"foo\": 1, \"bar\": 2}\n\n> x: bar j: {\"foo\": 3, \"bar\": 4}\n\n\n> and you say:\n\n\n> select j->>x from mytable;\n> What should the column be named?\n\nSuppose it should be named 'as x'\n\n\n> I think we'd be trying to manage a set of corner cases, and all because\n> someone didn't want to put \"as foo\" in their query. And if we generate a\n> column name in some cases and not in others there will be complaints of\n> inconsistency.\n\n\n> cheers\n\n\n> andrew\n\n\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\n\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 16:14:50 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 7:18 AM Eugen Konkov <kes-kes@yandex.ru> wrote:\n\n> Hello Andrew,\n>\n> Thursday, November 12, 2020, 3:19:39 PM, you wrote:\n>\n>\n> > On 11/11/20 7:55 PM, Bruce Momjian wrote:\n>\n> > select j->>x from mytable;\n> > What should the column be named?\n>\n> Suppose it should be named 'as x'\n>\n\n+1\n\n>\n>\n> > I think we'd be trying to manage a set of corner cases, and all because\n> > someone didn't want to put \"as foo\" in their query. And if we generate a\n> > column name in some cases and not in others there will be complaints of\n> > inconsistency.\n>\n>\nYes, this is suggesting a behavior that is contrary to (but not prohibited\nby) the natural expression and expectations of SQL. That said, we already\ntake a function's name and use it to specify the name of its output column\nas opposed to using \"?column?\" and requiring a user to apply a specific\nalias. This is only a step beyond that, choosing the default name for an\noperator's output column based upon not the name of the operator (or its\nunderlying function) but based upon its one (and only possible) right-hand\nargument. It is purely a user convenience feature and can be rejected on\nthose grounds, but I'm not seeing any fundamental issue with only having some\noperator combinations doing this. It's nice when it works and you are no\nworse off than today when it doesn't.\n\nDavid J.",
"msg_date": "Thu, 12 Nov 2020 08:05:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "\nOn 11/12/20 9:14 AM, Eugen Konkov wrote:\n> Hello Andrew,\n>\n> Thursday, November 12, 2020, 3:19:39 PM, you wrote:\n>\n>\n>> On 11/11/20 7:55 PM, Bruce Momjian wrote:\n>>> On Thu, Nov 12, 2020 at 12:18:49AM +0000, Dagfinn Ilmari Mannsåker wrote:\n>>>> Bruce Momjian <bruce@momjian.us> writes:\n>>>>> I think we could do it, but it would only work if the column was output\n>>>>> as a single json value, and not a multi-key/value field. I am afraid if\n>>>>> we tried to do it, the result would be too inconsistent to be useful.\n>>>> Could this be done via the support function, so that the top-level\n>>>> operator/function in each select list item can return a suggested column\n>>>> name if the relevant arguments are constants?\n>>> Yes, the user explicitly calling a function would be much easier to\n>>> predict.\n>>>\n>> I suspect this is doomed to failure. There is no guarantee that the path\n>> expression is going to be static or constant across rows. Say you have\n>> this table:\n>> x: foo, j: {\"foo\": 1, \"bar\": 2}\n>> x: bar j: {\"foo\": 3, \"bar\": 4}\n>> and you say:\n>> select j->>x from mytable;\n>> What should the column be named?\n> Suppose it should be named 'as x'\n\n\nSo if we then say:\n\n\n select x, j->>x from mytable;\n\n\nyou want both result columns named x? That seems like a recipe for\nserious confusion. I really don't think this proposal has been properly\nthought through.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 10:58:51 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "čt 12. 11. 2020 v 16:59 odesílatel Andrew Dunstan <andrew@dunslane.net>\nnapsal:\n\n>\n> On 11/12/20 9:14 AM, Eugen Konkov wrote:\n> > Hello Andrew,\n> >\n> > Thursday, November 12, 2020, 3:19:39 PM, you wrote:\n> >\n> >\n> >> On 11/11/20 7:55 PM, Bruce Momjian wrote:\n> >>> On Thu, Nov 12, 2020 at 12:18:49AM +0000, Dagfinn Ilmari Mannsåker\n> wrote:\n> >>>> Bruce Momjian <bruce@momjian.us> writes:\n> >>>>> I think we could do it, but it would only work if the column was\n> output\n> >>>>> as a single json value, and not a multi-key/value field. I am\n> afraid if\n> >>>>> we tried to do it, the result would be too inconsistent to be useful.\n> >>>> Could this be done via the support function, so that the top-level\n> >>>> operator/function in each select list item can return a suggested\n> column\n> >>>> name if the relevant arguments are constants?\n> >>> Yes, the user explicitly calling a function would be much easier to\n> >>> predict.\n> >>>\n> >> I suspect this is doomed to failure. There is no guarantee that the path\n> >> expression is going to be static or constant across rows. Say you have\n> >> this table:\n> >> x: foo, j: {\"foo\": 1, \"bar\": 2}\n> >> x: bar j: {\"foo\": 3, \"bar\": 4}\n> >> and you say:\n> >> select j->>x from mytable;\n> >> What should the column be named?\n> > Suppose it should be named 'as x'\n>\n>\n> So if we then say:\n>\n>\n> select x, j->>x from mytable;\n>\n>\n> you want both result columns named x? That seems like a recipe for\n> serious confusion. I really don't think this proposal has been properly\n> thought through.\n>\n\nWhy? It is consistent - you will get a value of key x, and anybody expects,\nso value should be different.\n\nRegards\n\nPavel\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n>\n>\n",
"msg_date": "Thu, 12 Nov 2020 17:03:42 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 8:59 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n>\n> So if we then say:\n>\n>\n> select x, j->>x from mytable;\n>\n>\n> you want both result columns named x? That seems like a recipe for\n> serious confusion. I really don't think this proposal has been properly\n> thought through.\n>\n>\nIMO It no worse than today's:\n\nselect count(*), count(*) from (values (1), (2)) vals (v);\ncount | count\n2 | 2\nDavid J.",
"msg_date": "Thu, 12 Nov 2020 09:12:24 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "\nOn 11/12/20 11:12 AM, David G. Johnston wrote:\n> On Thu, Nov 12, 2020 at 8:59 AM Andrew Dunstan <andrew@dunslane.net\n> <mailto:andrew@dunslane.net>> wrote:\n>\n>\n>\n> So if we then say:\n>\n>\n> select x, j->>x from mytable;\n>\n>\n> you want both result columns named x? That seems like a recipe for\n> serious confusion. I really don't think this proposal has been\n> properly\n> thought through.\n>\n>\n> IMO It no worse than today's:\n>\n> select count(*), count(*) from (values (1), (2)) vals (v);\n> count | count\n> 2 | 2\n>\n\n\nI guess the difference here is that there's an extra level of\nindirection. So\n\n\nselect x, j->>'x', j->>x from mytable\n\n\nwould have 3 result columns all named x.\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 11:32:49 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 11:32:49AM -0500, Andrew Dunstan wrote:\n> On 11/12/20 11:12 AM, David G. Johnston wrote:\n> > IMO It no worse than today's:\n> >\n> > select count(*), count(*) from (values (1), (2)) vals (v);\n> > count | count\n> > 2 | 2\n> >\n> \n> \n> I guess the difference here is that there's an extra level of\n> indirection. So\n> \n> select x, j->>'x', j->>x from mytable\n> \n> would have 3 result columns all named x.\n\nYeah, I feel it would have to be something a user specifically asks for,\nand we would have to say it would be the first or a random match of one\nof the keys. Ultimately, it might be so awkward as to be useless.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 12:02:24 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 9:32 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 11/12/20 11:12 AM, David G. Johnston wrote:\n> > On Thu, Nov 12, 2020 at 8:59 AM Andrew Dunstan <andrew@dunslane.net\n> > <mailto:andrew@dunslane.net>> wrote:\n> >\n> >\n> >\n> > So if we then say:\n> >\n> >\n> > select x, j->>x from mytable;\n> >\n> >\n> > you want both result columns named x? That seems like a recipe for\n> > serious confusion. I really don't think this proposal has been\n> > properly\n> > thought through.\n> >\n> >\n> > IMO It no worse than today's:\n> >\n> > select count(*), count(*) from (values (1), (2)) vals (v);\n> > count | count\n> > 2 | 2\n> >\n>\n>\n> I guess the difference here is that there's an extra level of\n> indirection. So\n>\n>\n> select x, j->>'x', j->>x from mytable\n>\n>\n> would have 3 result columns all named x.\n>\n>\nI totally missed the variable reference there - only two of those become\n\"x\", the variable reference stays un-rewritten and thus results in\n\"?column?\", similar to today:\n\nselect count(*), count(*) +1 from (values (1), (2)) vals (v);\ncount | ?column?\n2 | 2\n\nThe query rewriter would only rewrite these expressions and provide an\nexpression-related explicit alias clause if the expression is a single\noperator (same as single function today) and the right-hand side of the\noperator is a constant (meaning the constant is a reasonable representation\nof every output value that is going to appear in the result column). If\nthe RHS is a variable then there is no good name that is known to cover all\noutput values and thus ?column? (i.e., do not rewrite/provide an alias\nclause) is an appropriate choice.\n\nMy concerns in this area involve stored views and ruleutils, dump/reload by\nextension. 
Greenfield, this would have been nice, and worth the minimal\ncomplexity given its usefulness in the common case, but is it useful enough\nto introduce a whole new default naming mechanism and dealing with\ndump/restore concerns?\n\nDavid J.",
"msg_date": "Thu, 12 Nov 2020 10:54:10 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> The query rewriter would only rewrite these expressions and provide an\n> expression-related explicit alias clause if the expression is a single\n> operator (same as single function today) and the right-hand side of the\n> operator is a constant (meaning the constant is a reasonable representation\n> of every output value that is going to appear in the result column).\n\nI haven't been paying too close attention to this thread, but it seems\nlike there is a lot of misapprehension here about how this could\nreasonably be implemented. There is zero (not epsilon, but zero)\nchance of changing column aliases at rewrite time. Those have to be\nassigned in the parser, else we will not understand how to resolve\nreferences to sub-select output columns. Specifically it has to happen\nin FigureColname(), which means that resolving non-constant arguments\nto constants isn't terribly practical.\n\nActually, since FigureColname() works on the raw parse tree, I'm not\neven sure how you could make this happen in that context, unless you're\nwilling to say that \"j ->> 'x'\" resolves as \"x\" just based on the name\nof the operator, without any info about its semantics. That doesn't\nseem very cool. Now, in a quick look at the callers, it looks like it'd\nbe no problem from the callers' standpoint to switch things around to do\ncolname selection on the parsed tree instead, ie the existing choice is\nfor FigureColname's benefit not the callers'. But it'd likely cost\na good deal to do it the other way, since now FigureColname would need\nto perform catalog lookups to get column and function names.\n\nMaybe you could do something like passing *both* trees to FigureColname,\nand let it obtain the actual operator OID from the parsed tree when the\nraw tree contains AEXPR_OP. 
But the recursion in FigureColname would be\ndifficult to manage because the two trees often don't match one-to-one.\n\nOn the whole, I'm on the side of the people who don't want to change this.\nThe implementation cost seems likely to greatly outweigh the value, plus\nit feels more like a wart than a feature.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Nov 2020 13:52:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 01:52:11PM -0500, Tom Lane wrote:\n> On the whole, I'm on the side of the people who don't want to change this.\n> The implementation cost seems likely to greatly outweigh the value, plus\n> it feels more like a wart than a feature.\n\nI think we can mark this as, \"We thought about it, and we decided it is\nprobably not a good idea.\"\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 13:55:44 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On Thursday, November 12, 2020, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Nov 12, 2020 at 01:52:11PM -0500, Tom Lane wrote:\n> > On the whole, I'm on the side of the people who don't want to change\n> this.\n> > The implementation cost seems likely to greatly outweigh the value, plus\n> > it feels more like a wart than a feature.\n>\n> I think we can mark this as, \"We thought about it, and we decided it is\n> probably not a good idea.\"\n>\n>\n+1\n\nDavid J.",
"msg_date": "Thu, 12 Nov 2020 12:01:34 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On 2020-Nov-12, Tom Lane wrote:\n\n> On the whole, I'm on the side of the people who don't want to change this.\n> The implementation cost seems likely to greatly outweigh the value, plus\n> it feels more like a wart than a feature.\n\nI think if Eugen wants to spend some time with it and see how it could\nbe implemented, then send a patch for consideration, then we could make\na better-informed decision. My own opinion is that it's not worth the\ntrouble, but I'd rather us not stand in his way if he wants to try\n(With disclaimer that we might end up not liking the patch, of course).\n\n\n",
"msg_date": "Thu, 12 Nov 2020 16:30:15 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 04:30:15PM -0300, Álvaro Herrera wrote:\n> On 2020-Nov-12, Tom Lane wrote:\n> \n> > On the whole, I'm on the side of the people who don't want to change this.\n> > The implementation cost seems likely to greatly outweigh the value, plus\n> > it feels more like a wart than a feature.\n> \n> I think if Eugen wants to spend some time with it and see how it could\n> be implemented, then sent a patch for consideration, then we could make\n> a better informed decision. My own opinion is that it's not worth the\n> trouble, but I'd rather us not stand in his way if he wants to try\n> (With disclaimer that we might end up not liking the patch, of course).\n\nI think he would be better outlining how he wants it to behave before\neven working on a patch; from our TODO list:\n\n\tDesirability -> Design -> Implement -> Test -> Review -> Commit\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 14:34:52 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposition for autoname columns"
},
{
"msg_contents": "\r\n> On 2020-Nov-12, Tom Lane wrote:\n\n>> On the whole, I'm on the side of the people who don't want to change this.\n>> The implementation cost seems likely to greatly outweigh the value, plus\n>> it feels more like a wart than a feature.\n\n> I think if Eugen wants to spend some time with it and see how it could\n> be implemented, then sent a patch for consideration, then we could make\n> a better informed decision. My own opinion is that it's not worth the\n> trouble, but I'd rather us not stand in his way if he wants to try\n> (With disclaimer that we might end up not liking the patch, of course).\n\nSorry, I am not a C/C++ programmer and do not know how to start on a patch.\nI do not know the internals of PG. The only useful thing from me is just the idea \nto make the world better.\n\nI suppose initially there was only ?column?; later, names were implemented for count, sum etc.\nBut it would be cool if PG went a step further and named sum( a ) as sum_a instead of just sum\n\nThe purpose of this proposition is not about correct name generation, the purpose is to get\nmore distinct default names:\n?column?, ?column?, ?column?, ?column?, ?column?, ?column?, ?column?, \n\n?count?, ?count?, ?count?, ?sum?, ?sum?, ?sum?, ?sum?\n\n?count_a?, ?count_b?, ?count_c?, ?sum_a?, ?sum_b?, ?sum_c?, ?sum_d?\n\nNotice that the latest is more robust than the first ;-)\n\nI suppose we just ignore complex cases and leave them as they are\ncurrently. We could try some very small step in the direction of\nimproving default names and see feedback from many users on how\nuseful it is or not. Then we can decide whether it is worth implementing a whole\nsystem for default name generation.\n\nUnfortunately I am not a judge of which level this should occur at: parser, analyzer or so.\nI just do not understand those things =(\n\nThank you.\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 22:13:08 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: Proposition for autoname columns"
}
] |
[
{
"msg_contents": "Hi\n\nI found some possible typos in procarray.c and xlogreader.h\n\n- * For VACUUM separate horizons (used to to decide which deleted tuples must\n+ * For VACUUM separate horizons (used to decide which deleted tuples must\n\n- *\t\tCallers supply a page_read callback if they want to to call\n+ *\t\tCallers supply a page_read callback if they want to call\n\nBest regards,\nhouzj",
"msg_date": "Tue, 3 Nov 2020 01:39:17 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in xlogreader.h and procarray.c"
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 7:09 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> Hi\n>\n> I found some possible typo in procarray.c and xlogreader.h\n>\n> - * For VACUUM separate horizons (used to to decide which deleted tuples must\n> + * For VACUUM separate horizons (used to decide which deleted tuples must\n>\n> - * Callers supply a page_read callback if they want to to call\n> + * Callers supply a page_read callback if they want to call\n>\n\nLGTM. I'll push this in some time, thanks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 Nov 2020 08:27:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in xlogreader.h and procarray.c"
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 8:27 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 3, 2020 at 7:09 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> >\n> > Hi\n> >\n> > I found some possible typo in procarray.c and xlogreader.h\n> >\n> > - * For VACUUM separate horizons (used to to decide which deleted tuples must\n> > + * For VACUUM separate horizons (used to decide which deleted tuples must\n> >\n> > - * Callers supply a page_read callback if they want to to call\n> > + * Callers supply a page_read callback if they want to call\n> >\n>\n> LGTM. I'll push this in some time, thanks.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 Nov 2020 11:12:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in xlogreader.h and procarray.c"
}
] |
[
{
"msg_contents": "Hi Stephen, hackers,\n\n>> > With all those 'readahead' calls it certainly makes one wonder if the\n>> > Linux kernel is reading more than just the block we're looking for\n>> > because it thinks we're doing a sequential read and will therefore want\n>> > the next few blocks when, in reality, we're going to skip past them,\n>> > meaning that any readahead the kernel is doing is likely just wasted\n>> > I/O.\n>> I've done some quick&dirty tests with blockdev --setra/setfra 0 after spending time looking at the smgr/md/fd API changes required to find shortcut, but I'm getting actually a little bit worse timings at least on \"laptop DB tests\". One thing that I've noticed is that needs to be only for automatic-analyze, but not for automatic-vacuum where apparently there is some boost due to readahead.\n\n>Interesting that you weren't seeing any benefit to disabling readahead.\n\nI've got some free minutes and I have repeated the exercise in a more realistic\nand strict environment than the previous one, to conclude that the current situation is preferable:\n\nAnalyzed table was 171GB (as reported by \\dt+) + indexes: 35GB, 147GB, 35GB, 65GB (as reported by \\di+)\nLinux kernel 4.14.x, 2x NVME under dm-0 (it might matter as /dev/dm-0 is a different layer and might have different storage settings), VG on top of dm-0, LV with stripe-size 8kB, ext4.\ns_b=128MB, RAM=128GB (- ~30GB which were reserved for HugePages), typical output of PgSQL12: \nINFO: \"x\": scanned 1500000 of 22395442 pages, containing 112410444 live rows and 0 dead rows; 1500000 rows in sample, 1678321053 estimated total rows\n\nHot VFS cache:\nRun0: Defaults, default RA on dm-1=256 (*512=128kB), most of the time is spent heapam_scan_analyze_next_block() -> .. 
-> pread() which causes ~70..80MB/s as reported by pidstat, maximum 22-25% CPU, ~8k IOPS in iostat with average request size per IO=25 sectors(*512/1024 = ~12kB), readahead on, hot caches, total elapsed ~3m\nRun1: Defaults, similar as above (hot VFS cache), total elapsed 2m:50s\nRun2: Defaults, similar as above (hot VFS cache), total elapsed 2m:42s\nRun3: Defaults, similar as above (hot VFS cache), total elapsed 2m:40s\n\nNo VFS cache:\nRun4: echo 3 > drop_caches, still with s_b=128MB: maximum 18-23% CPU, ~70MB/s read, ondemand_readahead visible in perf, total elapsed 3m30s\nRun5: echo 3 > drop_caches, still with s_b=128MB: same as above, total elapsed 3m29s\nRun6: echo 3 > drop_caches, still with s_b=128MB: same as above, total elapsed 3m28s\n\nNo VFS cache, readahead off:\nRun7: echo 3 > drop_caches, still with s_b=128MB, blockdev --setra 0 /dev/dm-0: reads at 33MB/s, ~13% CPU, 8.7k read IOPS @ avgrq-sz = 11 sectors (*512=5.5kB), total elapsed 5m59s\nRun8: echo 3 > drop_caches, still with s_b=128MB, blockdev --setra 0 /dev/dm-0: as above, double-confirmed no readaheads [ pread()->generic_file_read_iter()->ext4_mpage_readpages()-> bio.. ], total elapsed 5m56s\nRun9: echo 3 > drop_caches, still with s_b=128MB, blockdev --setra 0 /dev/dm-0: as above, total elapsed 5m55s\n\nOne thing not clear here is whether it might be worth measuring in the future how striped LVs are \naffected by readaheads.\n\n>Were you able to see where the time in the kernel was going when\n>readahead was turned off for the ANALYZE?\n\nYes, my interpretation is that the time spent goes directly into block I/O layer waiting. \n\n 54.67% 1.33% postgres postgres [.] 
FileRead\n ---FileRead\n --53.33%--__pread_nocancel\n --50.67%--entry_SYSCALL_64_after_hwframe\n do_syscall_64\n sys_pread64\n |--49.33%--vfs_read\n | --48.00%--__vfs_read\n | |--45.33%--generic_file_read_iter\n | | |--42.67%--ondemand_readahead\n | | | __do_page_cache_readahead\n | | | |--25.33%--ext4_mpage_readpages\n | | | | |--10.67%--submit_bio\n | | | | | generic_make_request\n | | | | | |--8.00%--blk_mq_make_request\n | | | | | | |--4.00%--blk_mq_get_request\n | | | | | | | |--1.33%--blk_mq_get_tag\n | | | | | | | --1.33%--sched_clock\n | | | | | | | xen_sched_clock\n | | | | | | | pvclock_clocksource_read\n | | | | | | |--1.33%--bio_integrity_prep\n | | | | | | --1.33%--blk_account_io_start\n | | | | | | part_round_stats\n | | | | | | blk_mq_in_flight\n | | | | | | blk_mq_queue_tag_busy_iter\n | | | | | --2.67%--dm_make_request\n | | | | | __split_and_process_bio\n | | | | | __split_and_process_non_flush\n | | | | | |--1.33%--__map_bio\n | | | | | | generic_make_request\n | | | | | | generic_make_request_checks\n | | | | | | percpu_counter_add_batch\n | | | | | --1.33%--bio_alloc_bioset\n | | | | | mempool_alloc\n | | | | | kmem_cache_alloc\n | | | | |--6.67%--ext4_map_blocks\n | | | | | |--4.00%--ext4_es_lookup_extent\n | | | | | | --2.67%--_raw_read_lock\n | | | | | --2.67%--__check_block_validity.constprop.81\n | | | | | ext4_data_block_valid\n | | | | --6.67%--add_to_page_cache_lru\n | | | | |--4.00%--__add_to_page_cache_locked\n | | | | | --1.33%--mem_cgroup_try_charge\n | | | | | get_mem_cgroup_from_mm\n | | | | --2.67%--__lru_cache_add\n | | | | pagevec_lru_move_fn\n | | | | __lock_text_start\n | | | |--12.00%--blk_finish_plug\n | | | | blk_flush_plug_list\n | | | | blk_mq_flush_plug_list\n | | | | |--10.67%--__blk_mq_delay_run_hw_queue\n | | | | | __blk_mq_run_hw_queue\n | | | | | blk_mq_sched_dispatch_requests\n | | | | | --9.33%--blk_mq_dispatch_rq_list\n | | | | | nvme_queue_rq\n | | | | | --1.33%--blk_mq_start_request\n\n\n\n\n>The VACUUM case 
is going to be complicated by what's in the visibility\n>map. (..) \n\nAfter observing the ANALYZE readahead behavior benefit I've abandoned\nthe case of testing much more advanced VACUUM processing; clearly Linux \nread-ahead is beneficial even in simple cases.\n\n>> My only idea would be that a lot of those blocks could be read asynchronously in batches (AIO) with POSIX_FADV_RANDOM issued on block-range before, so maybe the the optimization is possible, but not until we'll have AIO ;)\n>\n> (..)AIO is a whole other animal that's been discussed off and on\n>around here but it's a much larger and more invasive change than just\n>calling posix_fadvise().\n\nYes, I'm aware and I'm keeping my fingers crossed that maybe some day.... \n\nANALYZE just seems to be a natural candidate to use it. The only easy chance \nof accelerating stats gathering - at least from my and the end user's point of view - \nis to have more parallel autoanalyze workers running to drive more I/O concurrency \n(by e.g. partitioning the data), both in readahead and non-readahead scenarios. \nWhich is a pity, because 70-80% of such a process sits idle. The readahead might read \n10x more unnecessary data, but pread() doesn't have to wait. <speculation>Once AIO \nis there, it could throw thousands of requests without readahead and probably achieve much \nbetter efficiency</speculation>\n\nI hope the previous simple patch goes into master and helps other people understand\nthe picture more easily.\n\n-J.\n\n",
"msg_date": "Tue, 3 Nov 2020 09:33:31 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": true,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Jakub Wartak (Jakub.Wartak@tomtom.com) wrote:\n> >Interesting that you weren't seeing any benefit to disabling readahead.\n> \n> I've got some free minutes and I have repeated the exercise in more realistic\n> and strict environment that previous one to conclude that the current situation is preferable:\n\nThanks for spending time on this!\n\n> Analyzed table was having 171GB (as reported by \\dt+) + indexes: 35GB, 147GB, 35GB, 65GB (as reported by \\di+)\n> Linux kernel 4.14.x, 2x NVME under dm-0 (it might matter as /dev/dm-0 might is different layer and might have different storage settings), VG on top of dm-0, LV with stripe-size 8kB, ext4.\n> s_b=128MB, RAM=128GB (- ~30GB which were reserved for HugePages), typical output of PgSQL12: \n> INFO: \"x\": scanned 1500000 of 22395442 pages, containing 112410444 live rows and 0 dead rows; 1500000 rows in sample, 1678321053 estimated total rows\n> \n> Hot VFS cache:\n> Run0: Defaults, default RA on dm-1=256 (*512=128kB), most of the time is spent heapam_scan_analyze_next_block() -> .. 
-> pread() which causes ~70..80MB/s as reported by pidstat, maximum 22-25% CPU, ~8k IOPS in iostat with average request size per IO=25 sectors(*512/1024 = ~12kB), readahead on, hot caches, total elapsed ~3m\n> Run1: Defaults, similar as above (hot VFS cache), total elapsed 2m:50s\n> Run2: Defaults, similar as above (hot VFS cache), total elapsed 2m:42s\n> Run3: Defaults, miliaria as above (hot VFS cache), total elapsed 2m:40s\n> \n> No VFS cache:\n> Run4: echo 3 > drop_caches, still with s_b=128MB: maximum 18-23% CPU, ~70MB/s read, ondemand_readahead visible in perf, total elapsed 3m30s\n> Run5: echo 3 > drop_caches, still with s_b=128MB: same as above, total elapsed 3m29s\n> Run6: echo 3 > drop_caches, still with s_b=128MB: same as above, total elapsed 3m28s\n> \n> No VFS cache, readahead off:\n> Run7: echo 3 > drop_caches, still with s_b=128MB, blockdev --setra 0 /dev/dm-0: reads at 33MB/s, ~13% CPU, 8.7k read IOPS @ avgrq-sz = 11 sectors (*512=5.5kB), total elapsed 5m59s\n> Run8: echo 3 > drop_caches, still with s_b=128MB, blockdev --setra 0 /dev/dm-0: as above, double-confirmed no readaheads [ pread()->generic_file_read_iter()->ext4_mpage_readpages()-> bio.. ], total elapsed 5m56s\n> Run9: echo 3 > drop_caches, still with s_b=128MB, blockdev --setra 0 /dev/dm-0: as above, total elapsed 5m55s\n\n[ ... ]\n\n> >The VACUUM case is going to be complicated by what's in the visibility\n> >map. (..) 
\n> \n> After observing the ANALYZE readahead behavior benefit I've abandoned\n> the case of testing much more advanced VACUUM processing, clearly Linux \n> read-ahead is beneficial in even simple cases.\n\nThis seems to be indicating that while the Linux kernel may end up\nreading pages we don't end up needing, it's much more often the case\nthat it's ending up reading *some* pages that we do need, and that's\nhappening often enough that it more than makes up for the extra reads\nbeing done.\n\nInstead of having these guessing games between the kernel and what PG's\ndoing, however, we could potentially do better using posix_fadvise() to\ntell the kernel, up front, exactly what blocks we are going to ask for,\nand perhaps that would end up improving things.\n\nAttached is a very much rough-n-ready patch for doing that, using\neffective_io_concurrency to control how many blocks to pre-fetch for\nANALYZE (0 meaning 'none'). If you've got a chance to test with\ndifferent settings for effective_io_concurrency with the patch applied\nto see what impact posix_fadvise() has on these ANALYZE runs, that would\nbe very cool to see.\n\nGoing between effective_io_concurrency = 0 and effective_io_concurrency = 10\nwith this patch, in some very quick testing on a laptop NVMe, while\nmaking sure to drop caches and restart PG in between to clear out\nshared_buffers, definitely shows that prefetching done this way is an\nimprovement over letting the kernel's normal read ahead handle it.\n\n> >> My only idea would be that a lot of those blocks could be read asynchronously in batches (AIO) with POSIX_FADV_RANDOM issued on block-range before, so maybe the the optimization is possible, but not until we'll have AIO ;)\n> >\n> > (..)AIO is a whole other animal that's been discussed off and on\n> >around here but it's a much larger and more invasive change than just\n> >calling posix_fadvise().\n> \n> Yes, I'm aware and I'm keeping my fingers crossed that maybe some day.... 
\n\nI don't think we should throw out the idea of using PrefetchBuffer()\nhere. \"Real\" AIO would certainly be good to have one of these days, but\nuntil then, posix_fadvise() could net us some of those gains in the\nmeantime.\n\n> The ANALYZE just seem fit to be natural candidate to use it. The only easy chance \n> of acceleration of stats gathering - at least to me and enduser point of view - \n> is to have more parallel autoanalyze workers running to drive more I/O concurrency \n> (by e.g. partitioning the data), both in readahead and non-readahead scenarios. \n> Which is a pity because 70-80% of such process sits idle. The readahead might read \n> 10x more unnecessary data, but pread() doesn't have to wait. <speculation>Once AIO \n> would be it could throw thousands of requests without readahead and achieve much \n> better efficiency probably</speculation>\n\nActual parallel ANALYZE could be interesting, of course, but that's also\nquite a bit more complicated to make happen, and even in that case it\nwould seem like we'd probably want to use posix_fadvise() anyway since\nit'd hopefully help.\n\n> I hope the previous simple patch goes into master and helps other people understand\n> the picture more easily.\n\nI agree that we should also be providing users with more information\nabout what's going on, but maybe we can do that and improve things by\nleveraging posix_fadvise().\n\nThanks,\n\nStephen",
"msg_date": "Tue, 3 Nov 2020 12:47:34 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
}
]
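The prefetch scheme discussed in this thread — issuing posix_fadvise(POSIX_FADV_WILLNEED) hints for the blocks ANALYZE is about to read, with effective_io_concurrency bounding how far ahead the hints run — can be sketched in stand-alone C. Everything here (the BLCKSZ value, the PREFETCH_DISTANCE constant, the function name) is illustrative rather than the patch's actual code, and it assumes a POSIX system where posix_fadvise() is available (e.g. Linux):

```c
#define _XOPEN_SOURCE 600       /* for posix_fadvise(), pread(), mkstemp() */

#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLCKSZ 8192             /* assumed block size, as in PostgreSQL */
#define PREFETCH_DISTANCE 2     /* stands in for effective_io_concurrency */

/*
 * Read the given blocks of fd in the given (random) order, keeping a
 * window of POSIX_FADV_WILLNEED hints PREFETCH_DISTANCE blocks ahead of
 * the reads.  Returns 0 on success, -1 on any failure.
 */
static int
prefetch_and_read(int fd, const long *blocks, int nblocks, char *buf)
{
	/* Hint the first window before reading anything. */
	for (int i = 0; i < nblocks && i < PREFETCH_DISTANCE; i++)
		if (posix_fadvise(fd, (off_t) blocks[i] * BLCKSZ, BLCKSZ,
						  POSIX_FADV_WILLNEED) != 0)
			return -1;

	for (int i = 0; i < nblocks; i++)
	{
		/* Top up the hint window so it stays ahead of the reads. */
		if (i + PREFETCH_DISTANCE < nblocks &&
			posix_fadvise(fd, (off_t) blocks[i + PREFETCH_DISTANCE] * BLCKSZ,
						  BLCKSZ, POSIX_FADV_WILLNEED) != 0)
			return -1;

		if (pread(fd, buf, BLCKSZ, (off_t) blocks[i] * BLCKSZ) != BLCKSZ)
			return -1;
	}
	return 0;
}
```

The point of keeping the hint window ahead of the reads is that the kernel can overlap the random I/O for upcoming blocks with processing of the current one — which is exactly what the patch hopes to gain over relying on the kernel's sequential readahead heuristics.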
[
{
"msg_contents": "Hi,\n\nI would like to propose an updated patch on multi/bulk inserts in CTAS [1]\nthat tries to address the review comments that came up in [1]. One of the\nmain review comments was to calculate/estimate the tuple size to decide on\nwhen to flush. I tried to solve this point with a new function\nGetTupleSize()(see the patch for implementation).\n\nI did some testing with custom configuration[2].\n\nUse case 1- 100mn tuples, 2 integer columns, exec time in sec:\nHEAD: *131.507* when the select part is not parallel, 128.832 when the\nselect part is parallel\nPatch: 98.925 when the select part is not parallel, *52.901* when the\nselect part is parallel\n\nUse case 2- 10mn tuples, 4 integer and 6 text columns, exec time in sec:\nHEAD: *76.801* when the select part is not parallel, 66.074 when the select\npart is parallel\nPatch: 74.083 when the select part is not parallel, *57.739* when the\nselect part is parallel\n\nThoughts?\n\nIf the approach followed in the patch looks okay, I can work on a separate\npatch for multi inserts in refresh materialized view cases.\n\nI thank Simon Riggs for the offlist discussion.\n\nPS: I chose to start a new thread as the previous thread [1] was closed in\nthe CF. I hope that's not a problem.\n\n[1] -\nhttps://www.postgresql.org/message-id/CAEET0ZHRWxbRUgwzUK_tOFDWx7VE2-P%3DxMBT6-N%2BgAa9WQ%3DxxA%40mail.gmail.com\n[2] - The postgresql.conf used:\nshared_buffers = 40GB\nsynchronous_commit = off\ncheckpoint_timeout = 1d\nmax_wal_size = 24GB\nmin_wal_size = 15GB\nautovacuum = off\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 3 Nov 2020 16:54:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 4:54 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Use case 1- 100mn tuples, 2 integer columns, exec time in sec:\n> HEAD: 131.507 when the select part is not parallel, 128.832 when the\nselect part is parallel\n> Patch: 98.925 when the select part is not parallel, 52.901 when the\nselect part is parallel\n>\n> Use case 2- 10mn tuples, 4 integer and 6 text columns, exec time in sec:\n> HEAD: 76.801 when the select part is not parallel, 66.074 when the select\npart is parallel\n> Patch: 74.083 when the select part is not parallel, 57.739 when the\nselect part is parallel\n>\n\nI did some more testing with v1 patch: execution time is in seconds, each\ntest is run 2 times, with custom configuration [1].\n\nUse case 3: 1 int and 1 text column. each row size 129 bytes, size of 1\ntext column 101 bytes, number of rows 100million, size of heap file 12.9GB.\nHEAD: 253.227, 259.575\nPatch: 177.921, 174.196\n\nWe get better performance 1.4X with the patch.\n\nUse case 4: 1 int and 30 text columns. each row size 28108 bytes, size of 1\ntext column 932 bytes, number of rows 10000, size of heap file 281.08MB.\nHEAD: 222.812, 218.837\nPatch: 222.492, 222.295\n\nWe don't see much difference with and without patch. Each time only 2\ntuples(2*28108 = 56216 bytes < MAX_MULTI_INSERT_BUFFERED_BYTES(65535\nbytes)) are buffered and flushed.\n\nUse case 5: 1 int and 75 text columns. each row size 70228 bytes, size of 1\ntext column 932 bytes, number of rows 10000, size of heap file 702.28MB.\nHEAD: 554.709, 562.745\nPatch: 553.378, 560.370\n\nWe don't see much difference with and without patch. Since each row\nsize(70228 bytes) is bigger than the MAX_MULTI_INSERT_BUFFERED_BYTES(65535\nbytes), multi insert code is not picked, each single row is inserted with\ntable_tuple_insert() itself.\n\nUse case 6: 1 int and 1 text column. 
each row size 9205 bytes, size of 1\ntext column 9173 bytes, number of rows 10000, size of heap file 92.05MB.\nHEAD: 70.583, 70.251\nPatch: 72.633, 73.521\n\nWe see 2-3 seconds more with patch. When I intentionally made the computed\ntuple size to 0 (sz = 0) after GetTupleSize(), which means the single inserts\nhappen, the results are 70.364, 70.406. Looks like this 2-3 seconds extra\ntime is due to the multi insert code and happens with this use case\nonly. And I think this should not be a problem as the difference is not\nhuge.\n\n+ sz = GetTupleSize(slot, MAX_MULTI_INSERT_BUFFERED_BYTES);\n+\n+ sz = 0;\n+\n+ /* In case the computed tuple size is 0, we go for single inserts. */\n+ if (sz != 0)\n+ {\n\n[1] - The postgresql.conf used:\nshared_buffers = 40GB\nsynchronous_commit = off\ncheckpoint_timeout = 1d\nmax_wal_size = 24GB\nmin_wal_size = 15GB\nautovacuum = off\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 5 Nov 2020 16:33:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
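The flush behaviour behind use cases 4 and 5 above — tuples buffered until their combined size would exceed MAX_MULTI_INSERT_BUFFERED_BYTES, with over-sized tuples falling back to single inserts — can be illustrated with a minimal sketch. The struct, function name, and the tuple-count cap are invented for illustration; only the 65535-byte threshold and the fallback rule come from the discussion above:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_MULTI_INSERT_BUFFERED_BYTES 65535	/* threshold from the patch */
#define MAX_BUFFERED_TUPLES 1000				/* invented tuple-count cap */

typedef struct MultiInsertBuffer
{
	size_t		bytes;		/* bytes buffered so far */
	int			ntuples;	/* tuples buffered so far */
	int			nflushes;	/* number of flushes performed */
} MultiInsertBuffer;

/*
 * Buffer one tuple of the given size; "flush" (here: just count) when a
 * limit would be exceeded.  Tuples at or above the byte threshold take
 * the single-insert path, mirroring the fallback described in the thread.
 */
static void
buffer_tuple(MultiInsertBuffer *buf, size_t tuplen)
{
	if (tuplen >= MAX_MULTI_INSERT_BUFFERED_BYTES)
	{
		buf->nflushes++;		/* single insert, bypassing the buffer */
		return;
	}

	if (buf->bytes + tuplen > MAX_MULTI_INSERT_BUFFERED_BYTES ||
		buf->ntuples >= MAX_BUFFERED_TUPLES)
	{
		buf->nflushes++;		/* flush what is buffered so far */
		buf->bytes = 0;
		buf->ntuples = 0;
	}

	buf->bytes += tuplen;
	buf->ntuples++;
}
```

Matching use case 4, 28108-byte rows get flushed two at a time (2 × 28108 = 56216 < 65535), while the 70228-byte rows of use case 5 bypass the buffer entirely — which is why neither case shows a multi-insert speedup.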
{
"msg_contents": "On Tue, Nov 3, 2020 at 4:54 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> If the approach followed in the patch looks okay, I can work on a separate patch for multi inserts in refresh materialized view cases.\n>\n\nHi, I'm attaching a v2 patch that has multi inserts for CTAS as well\nas REFRESH MATERIALIZED VIEW.\n\nI did some testing: exec time in seconds.\n\nUse case 1: 1 int and 1 text column. each row size 129 bytes, size of\n1 text column 101 bytes, number of rows 100million, size of heap file\n12.9GB.\nHEAD: 220.733, 220.428\nPatch: 151.923, 152.484\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 9 Nov 2020 16:11:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "> On Nov 9, 2020, at 6:41 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n>\r\n> On Tue, Nov 3, 2020 at 4:54 PM Bharath Rupireddy\r\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n>>\r\n>> If the approach followed in the patch looks okay, I can work on a separate patch for multi inserts in refresh materialized view cases.\r\n>>\r\n>\r\n> Hi, I'm attaching a v2 patch that has multi inserts for CTAS as well\r\n> as REFRESH MATERIALIZED VIEW.\r\n>\r\n> I did some testing: exec time in seconds.\r\n>\r\n> Use case 1: 1 int and 1 text column. each row size 129 bytes, size of\r\n> 1 text column 101 bytes, number of rows 100million, size of heap file\r\n> 12.9GB.\r\n> HEAD: 220.733, 220.428\r\n> Patch: 151.923, 152.484\r\n>\r\n> Thoughts?\r\n>\r\n> With Regards,\r\n> Bharath Rupireddy.\r\n> EnterpriseDB: http://www.enterprisedb.com\r\n> <v2-0001-Multi-Inserts-in-CTAS-Refresh-Materialized-View.patch>\r\n\r\nThanks for doing this. There might be another solution - use raw insert interfaces (i.e. raw_heap_insert()).\r\nAttached is the test (not formal) patch that verifies this idea. raw_heap_insert() writes the page into the\r\ntable files directly and also writes the FPI xlog when the tuples fill up the whole page. This seems to be\r\nmore efficient.\r\n\r\nIn addition, those raw write interfaces call smgrimmedsync() when finishing raw inserting; this is because\r\nthe write bypasses the shared buffer, so a CHECKPOINT plus crash might cause data corruption since\r\nsome FPI xlogs cannot be replayed and those table files are not fsync-ed during crash. 
It seems that a sync\r\nrequest could be forwarded to the checkpointer for each table segment file and then we do not need to call\r\nsmgrimmedsync(). If the theory is correct this should be in a separate patch. Anyway I tested this idea\r\nalso by simply commenting out the smgrimmedsync() call in heap_raw_insert_end() (a new function in\r\nthe attached patch) since forwarding fsync request is light-weight.\r\n\r\nI did a quick and simple testing. The test environment is a centos6 vm with 7G memory on my Mac laptop.\r\n-O3 gcc compiler option; shared_buffers as 2GB. Did not check if parallel scanning is triggered by the test\r\nquery and the data volume is not large so test time is not long.\r\n\r\nHere are the test script.\r\n create table t1 (a int, b int, c int, d int);\r\n insert into t1 select i,i,i,i from generate_series(1,10000000) i;\r\n show shared_buffers;\r\n \\timing on\r\n create table t2 as select * from t1;\r\n \\timing off\r\n\r\nHere are the results:\r\n\r\nHEAD (37d2ff380312):\r\n Time: 5143.041 ms (00:05.143)\r\nMulti insert patch:\r\n Time: 4456.461 ms (00:04.456)\r\nRaw insert (attached):\r\n Time: 2317.453 ms (00:02.317)\r\nRaw insert + no smgrimmedsync():\r\n Time: 2103.610 ms (00:02.104).\r\n\r\nFrom the above data raw insert is better; also forwarding sync should be able to improve further\r\n(Note my laptop is with SSD so on machine with SATA/SAS, I believe forwarding sync should\r\nbe able to help more.)\r\n\r\nI tested removing smgrimmedsync in \"vacuum full” code that uses raw insert also. FYI.\r\nHEAD:\r\n Time: 3567.036 ms (00:03.567)\r\nno smgrimmedsync:\r\n Time: 3023.487 ms (00:03.023)\r\n\r\n\r\nRaw insert could be used on CTAS & Create MatView. For Refresh MatView the code is a bit\r\ndifferent. I did not spend more time on this so not sure raw insert could be used for that.\r\n\r\nBut I think the previous multi insert work could be still used in at least \"INSERT tbl SELECT…” (if the INSERT\r\nis a simple one, e.g. 
no trigger, no index, etc).\r\n\r\n\r\nRegards,\r\nPaul",
"msg_date": "Tue, 10 Nov 2020 10:17:15 +0000",
"msg_from": "Paul Guo <guopa@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 10:17:15AM +0000, Paul Guo wrote:\n> Raw insert could be used on CTAS & Create MatView. For Refresh MatView the code is a bit\n> different. I did not spend more time on this so not sure raw insert could be used for that.\n> \n> But I think the previous multi insert work could be still used in at least \"INSERT tbl SELECT…” (if the INSERT\n> is a simple one, e.g. no trigger, no index, etc).\n\nNote that I've started that idea on another thread:\nhttps://commitfest.postgresql.org/30/2553/\n - should INSERT SELECT use a BulkInsertState? (and multi_insert)\n\nThere's also this one:\nhttps://commitfest.postgresql.org/30/2818/\n - split copy.c, Heikki\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 10 Nov 2020 09:18:57 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 3:47 PM Paul Guo <guopa@vmware.com> wrote:\n>\n> Thanks for doing this. There might be another solution - use raw insert interfaces (i.e. raw_heap_insert()).\n> Attached is the test (not formal) patch that verifies this idea. raw_heap_insert() writes the page into the\n> table files directly and also write the FPI xlog when the tuples filled up the whole page. This seems be\n> more efficient.\n>\n\nThanks. Will the new raw_heap_insert() APIs scale well (i.e. extend\nthe table parallelly) with parallelism? The existing\ntable_multi_insert() API scales well, see, for instance, the benefit\nwith parallel copy[1] and parallel multi inserts in CTAS[2].\n\n[1] - https://www.postgresql.org/message-id/CALj2ACWeQVd-xoQZHGT01_33St4xPoZQibWz46o7jW1PE3XOqQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CALj2ACWFq6Z4_jd9RPByURB8-Y8wccQWzLf%2B0-Jg%2BKYT7ZO-Ug%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 Nov 2020 16:51:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "\n\n> On Nov 13, 2020, at 7:21 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Tue, Nov 10, 2020 at 3:47 PM Paul Guo <guopa@vmware.com> wrote:\n>> \n>> Thanks for doing this. There might be another solution - use raw insert interfaces (i.e. raw_heap_insert()).\n>> Attached is the test (not formal) patch that verifies this idea. raw_heap_insert() writes the page into the\n>> table files directly and also writes the FPI xlog when the tuples fill up the whole page. This seems to be\n>> more efficient.\n>> \n> \n> Thanks. Will the new raw_heap_insert() APIs scale well (i.e. extend\n> the table parallelly) with parallelism? The existing\n> table_multi_insert() API scales well, see, for instance, the benefit\n> with parallel copy[1] and parallel multi inserts in CTAS[2].\n\nYes definitely some work needs to be done to make raw heap insert interfaces fit the parallel work, but\nit seems that there is no hard blocking issues for this?\n\n> \n> [1] - https://www.postgresql.org/message-id/CALj2ACWeQVd-xoQZHGT01_33St4xPoZQibWz46o7jW1PE3XOqQ%40mail.gmail.com\n> [2] - https://www.postgresql.org/message-id/CALj2ACWFq6Z4_jd9RPByURB8-Y8wccQWzLf%2B0-Jg%2BKYT7ZO-Ug%40mail.gmail.com\n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 16 Nov 2020 14:32:30 +0000",
"msg_from": "Paul Guo <guopa@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 8:02 PM Paul Guo <guopa@vmware.com> wrote:\n>\n> > On Nov 13, 2020, at 7:21 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 3:47 PM Paul Guo <guopa@vmware.com> wrote:\n> >>\n> >> Thanks for doing this. There might be another solution - use raw insert interfaces (i.e. raw_heap_insert()).\n> >> Attached is the test (not formal) patch that verifies this idea. raw_heap_insert() writes the page into the\n> >> table files directly and also write the FPI xlog when the tuples filled up the whole page. This seems be\n> >> more efficient.\n> >>\n> >\n> > Thanks. Will the new raw_heap_insert() APIs scale well (i.e. extend\n> > the table parallelly) with parallelism? The existing\n> > table_multi_insert() API scales well, see, for instance, the benefit\n> > with parallel copy[1] and parallel multi inserts in CTAS[2].\n>\n> Yes definitely some work needs to be done to make raw heap insert interfaces fit the parallel work, but\n> it seems that there is no hard blocking issues for this?\n>\n\nI may be wrong here. If we were to allow raw heap insert APIs to\nhandle parallelism, shouldn't we need some sort of shared memory to\nallow coordination among workers? If we do so, at the end, aren't\nthese raw insert APIs equivalent to current table_multi_insert() API\nwhich uses a separate shared ring buffer(bulk insert state) for\ninsertions?\n\nAnd can we think of these raw insert APIs similar to the behaviour of\ntable_multi_insert() API for unlogged tables?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Nov 2020 21:02:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "Attaching v2 patch, rebased on the latest master 17958972.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 23 Nov 2020 14:45:34 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On 23/11/2020 11:15, Bharath Rupireddy wrote:\n> Attaching v2 patch, rebased on the latest master 17958972.\n\nI just broke this again with commit c532d15ddd to split up copy.c. \nHere's another rebased version.\n\n- Heikki",
"msg_date": "Mon, 23 Nov 2020 11:56:41 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 3:26 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 23/11/2020 11:15, Bharath Rupireddy wrote:\n> > Attaching v2 patch, rebased on the latest master 17958972.\n>\n> I just broke this again with commit c532d15ddd to split up copy.c.\n> Here's another rebased version.\n>\n\nThanks! I noticed that and am about to post a new patch. Anyways,\nthanks for the rebased v3 patch. Attaching here v3 again for\nvisibility.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 23 Nov 2020 15:53:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On 23-11-2020 11:23, Bharath Rupireddy wrote:\n> On Mon, Nov 23, 2020 at 3:26 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>> On 23/11/2020 11:15, Bharath Rupireddy wrote:\n>>> Attaching v2 patch, rebased on the latest master 17958972.\n>>\n>> I just broke this again with commit c532d15ddd to split up copy.c.\n>> Here's another rebased version.\n>>\n> \n> Thanks! I noticed that and am about to post a new patch. Anyways,\n> thanks for the rebased v3 patch. Attaching here v3 again for\n> visibility.\n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n> \n\nHi,\n\nThanks for reviving the patch! I did unfortunately have to shift my \npriorities somewhat and did not find much time to work on open source \nthings the last week(s).\n\nI'm wondering about the use of the GetTupleSize function. As far as I \nunderstand the idea is to limit the amount of buffered data, presumably \nto not write too much data at once for intorel_flush_multi_insert.\nIf I understood correctly how it all works, the table slot can however \nbe of different type than the source slot, which makes that the call to \nCopySlot() potentially stores a different amount of data than computed \nby GetTupleSize(). Not sure if this is a big problem as an estimation \nmight be good enough?\n\nSome other solutions/implementations would be:\n- compute the size after doing CopySlot. 
Maybe the relation never wants \na virtual tuple and then you can also simplify GetTupleSize?\n- after CopySlot ask for the memory consumed in the slot using \nMemoryContextMemAllocated.\n\nSome small things to maybe change are:\n===========\n+\t\tif (myState->mi_slots[myState->mi_slots_num] == NULL)\n+\t\t{\n+\t\t\tbatchslot = table_slot_create(myState->rel, NULL);\n+\t\t\tmyState->mi_slots[myState->mi_slots_num] = batchslot;\n+\t\t}\n+\t\telse\n+\t\t\tbatchslot = myState->mi_slots[myState->mi_slots_num];\n\nAlternative:\n+\t\tif (myState->mi_slots[myState->mi_slots_num] == NULL)\n+\t\t\tmyState->mi_slots[myState->mi_slots_num] = \ntable_slot_create(myState->rel, NULL);\n+\t\tbatchslot = myState->mi_slots[myState->mi_slots_num];\n\n==============\n\n+\t\t\tsz = att_align_nominal(sz, att->attalign);\nThis could be moved out of the if statement?\n\n==============\n\nRegards,\nLuc\nSwarm64\n\n\n",
"msg_date": "Wed, 25 Nov 2020 09:41:02 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Wed, Nov 25, 2020 at 2:11 PM Luc Vlaming <luc@swarm64.com> wrote:\n>\n> Thanks for reviving the patch! I did unfortunately have to shift my\n> priorities somewhat and did not find much time to work on open source\n> things the last week(s).\n>\n\nThanks for the comments.\n\n>\n> I'm wondering about the use of the GetTupleSize function. As far as I\n> understand the idea is to limit the amount of buffered data, presumably\n> to not write too much data at once for intorel_flush_multi_insert.\n> If I understood correctly how it all works, the table slot can however\n> be of different type than the source slot, which makes that the call to\n> CopySlot() potentially stores a different amount of data than computed\n> by GetTupleSize(). Not sure if this is a big problem as an estimation\n> might be good enough?\n>\n\nYeah. The tuple size may change after ExecCopySlot(). For instance, create\ntable t2 as select a1 from t1; where t1 has two integer columns a1, b1. I'm\ncreating t2 with single column a1 from t1 which makes the source slot\nvirtual.\n\nSource slot is virtual and the size calculated with GetTupleSize() is 8\nbytes:\n(gdb) p *slot\n$18 = {type = T_TupleTableSlot, tts_flags = 16, tts_nvalid = 1,\n tts_ops = 0x562c592652c0 <TTSOpsVirtual>,\n tts_tupleDescriptor = 0x562c5a0409f0, tts_values = 0x562c5a040b50,\n tts_isnull = 0x562c5a040b58, tts_mcxt = 0x562c5a040320, tts_tid = {\n ip_blkid = {bi_hi = 65535, bi_lo = 65535}, ip_posid = 0}, tts_tableOid\n= 0}\n(gdb) call GetTupleSize(slot, 65535)\n$24 = 8\n\nAfter ExecCopySlot(batchslot, slot), destination slot changes to\nTTSOpsBufferHeapTuple and the GetTupleSize() gives 28 bytes now.\n(gdb) p *batchslot\n$19 = {type = T_TupleTableSlot, tts_flags = 20, tts_nvalid = 0,\n tts_ops = 0x562c592653e0 <TTSOpsBufferHeapTuple>,\n tts_tupleDescriptor = 0x7f063fbeecd0, tts_values = 0x562c5a05daa8,\n tts_isnull = 0x562c5a05dab0, tts_mcxt = 0x562c5a040320, tts_tid = {\n ip_blkid = {bi_hi = 65535, bi_lo = 
65535}, ip_posid = 0}, tts_tableOid\n= 0}\n(gdb) call GetTupleSize(batchslot, 65535)\n$25 = 28\n\nI think your suggestion to call GetTupleSize() on the destination slot\nafter ExecCopySlot() is right. I changed it in the v4 patch.\n\n>\n> Some other solutions/implementations would be:\n> - compute the size after doing CopySlot. Maybe the relation never wants\n> a virtual tuple and then you can also simplify GetTupleSize?\n>\n\nI think we need to have TTSOpsVirtual code in GetTupleSize() because\ntable_slot_create() which gets called before ExecCopySlot() may create\nvirtual slots for cases such as views and partitioned tables. Though we can\nnot insert into views or partitioned tables using CTAS, I want\nGetTupleSize() to be a generic function. Right now, I can not find other\nuse cases where GetTupleSize() can be used.\n\n>\n> - after CopySlot ask for the memory consumed in the slot using\n> MemoryContextMemAllocated.\n>\n\nMemoryContextMemAllocated of the slot's tts_mcxt will always have extra\nbytes and those extra bytes are way more compared to the actual tuple\nbytes. And most of the time, ExecCopySlot() will just point the src slot\ntts_mcxt to dest slot tts_mcxt. For instance, for a single row with a\nsingle integer column of 8 bytes, the mem_allocated is 49232 bytes. 
This is\nthe reason we can not rely on mem_allocated.\n\n(gdb) p slot->tts_mcxt -----> source slot\n$22 = (MemoryContext) 0x562c5a040320\n(gdb) p *slot->tts_mcxt\n$20 = {type = T_AllocSetContext, isReset = false, allowInCritSection =\nfalse,\n *mem_allocated = 49232*, methods = 0x562c5926d560 <AllocSetMethods>,\n parent = 0x562c59f97820, firstchild = 0x562c5a042330, prevchild = 0x0,\n nextchild = 0x0, name = 0x562c590d3554 \"ExecutorState\", ident = 0x0,\n reset_cbs = 0x0}\n\n(gdb) p batchslot->tts_mcxt -----> destination slot after\nExecCopySlot().\n$23 = (MemoryContext) 0x562c5a040320\n(gdb) p *batchslot->tts_mcxt\n$21 = {type = T_AllocSetContext, isReset = false, allowInCritSection =\nfalse,\n *mem_allocated = 49232*, methods = 0x562c5926d560 <AllocSetMethods>,\n parent = 0x562c59f97820, firstchild = 0x562c5a042330, prevchild = 0x0,\n nextchild = 0x0, name = 0x562c590d3554 \"ExecutorState\", ident = 0x0,\n reset_cbs = 0x0}\n\n>\n> Some small things to maybe change are:\n> ===========\n> + if (myState->mi_slots[myState->mi_slots_num] == NULL)\n> + {\n> + batchslot = table_slot_create(myState->rel, NULL);\n> + myState->mi_slots[myState->mi_slots_num] =\nbatchslot;\n> + }\n> + else\n> + batchslot =\nmyState->mi_slots[myState->mi_slots_num];\n>\n> Alternative:\n> + if (myState->mi_slots[myState->mi_slots_num] == NULL)\n> + myState->mi_slots[myState->mi_slots_num] =\n> table_slot_create(myState->rel, NULL);\n> + batchslot = myState->mi_slots[myState->mi_slots_num];\n>\n\nChanged.\n\n> ==============\n>\n> + sz = att_align_nominal(sz, att->attalign);\n> This could be moved out of the if statement?\n>\n> ==============\n\nI don't think we can change it. If we were to move it, then sz =\natt_addlength_datum(sz, att->attlen, val); which takes aligned sz may have\nproblems like below:\nSay att_align_nominal sets sz to 4 bytes, then att_addlength_datum takes\nthis 4 bytes adds attlen to it. 
If we move att_align_nominal(sz,\natt->attalign) out, then att_addlength_datum(sz, att->attlen, val) will not\nconsider the aligned bytes. We might have to add up the aligned bytes\nseparately for the else case. And also note that this code is derived from\nts_virtual_materialize(), where we have the att_align_nominal inside both\nif and else blocks. I may be wrong here.\n\nAttaching v4 patch. Consider it for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 26 Nov 2020 07:24:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
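The reason att_align_nominal() cannot simply be hoisted out of the per-attribute loop, as discussed above, is that each attribute's offset must be padded to that attribute's own alignment before its length is added — and the padding depends on the running total. Below is a stand-alone sketch of that accumulation using the same bit-mask arithmetic as PostgreSQL's TYPEALIGN macro; the FakeAttr descriptors are invented for illustration and stand in for fixed-length pg_attribute entries:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Same arithmetic as PostgreSQL's TYPEALIGN macro in c.h. */
#define TYPEALIGN(ALIGNVAL, LEN) \
	(((uintptr_t) (LEN) + ((ALIGNVAL) - 1)) & ~((uintptr_t) ((ALIGNVAL) - 1)))

typedef struct FakeAttr
{
	size_t		attlen;		/* fixed length of the attribute's datum */
	size_t		attalign;	/* required alignment: 1, 2, 4 or 8 */
} FakeAttr;

/*
 * Accumulate the data size of a tuple: pad the running total to each
 * attribute's alignment, then add its length.  Because the padding
 * depends on the running total at that point, the alignment step has to
 * stay inside the loop, next to the length addition.
 */
static size_t
tuple_data_size(const FakeAttr *attrs, int natts)
{
	size_t		sz = 0;

	for (int i = 0; i < natts; i++)
	{
		sz = TYPEALIGN(attrs[i].attalign, sz);	/* like att_align_nominal */
		sz += attrs[i].attlen;					/* like att_addlength_datum */
	}
	return sz;
}
```

For example, a 1-byte attribute followed by an 8-byte, 8-aligned one sizes to 16 (1 byte, 7 bytes of padding, 8 bytes), which a hoisted alignment step would miscount.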
{
"msg_contents": "On Thu, Nov 26, 2020 at 07:24:01AM +0530, Bharath Rupireddy wrote:\n> Yeah. The tuple size may change after ExecCopySlot(). For instance, create\n> table t2 as select a1 from t1; where t1 has two integer columns a1, b1. I'm\n> creating t2 with single column a1 from t1 which makes the source slot\n> virtual.\n\n+inline Size\n+GetTupleSize(TupleTableSlot *slot, Size maxsize)\n+{\n+ Size sz = 0;\n+ HeapTuple tuple = NULL;\n+\n+ if (TTS_IS_HEAPTUPLE(slot))\n+ tuple = ((HeapTupleTableSlot *) slot)->tuple;\n+ else if(TTS_IS_BUFFERTUPLE(slot))\n+ tuple = ((BufferHeapTupleTableSlot *) slot)->base.tuple;\n+ else if(TTS_IS_MINIMALTUPLE(slot))\n+ tuple = ((MinimalTupleTableSlot *) slot)->tuple;\n\nThere have been various talks about the methods we could use to\nevaluate the threshold in bytes when evaluating that a flush can\nhappen, including the use of memory contexts, or even estimate the\nsize of the number of tuples. This one looks promising because it\nseems exact, however for virtual slots I don't like much the fact that\nyou basically just extracted the parts of tts_virtual_materialize()\nand stuck them in this routine. That's a recipe for future bugs if\nthe materialization logic changes. In short, I am surprised that this\ncalculation is not directly part of TupleTableSlotOps. What we'd want\nis to get this information depending on the slot type dealt with, and\nwith your patch you would miss to handle any new slot type\nintroduced.\n--\nMichael",
"msg_date": "Thu, 26 Nov 2020 13:25:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Thu, Nov 26, 2020 at 9:55 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> +inline Size\n> +GetTupleSize(TupleTableSlot *slot, Size maxsize)\n> +{\n> + Size sz = 0;\n> + HeapTuple tuple = NULL;\n> +\n> + if (TTS_IS_HEAPTUPLE(slot))\n> + tuple = ((HeapTupleTableSlot *) slot)->tuple;\n> + else if(TTS_IS_BUFFERTUPLE(slot))\n> + tuple = ((BufferHeapTupleTableSlot *) slot)->base.tuple;\n> + else if(TTS_IS_MINIMALTUPLE(slot))\n> + tuple = ((MinimalTupleTableSlot *) slot)->tuple;\n>\n> There have been various talks about the methods we could use to\n> evaluate the threshold in bytes when evaluating that a flush can\n> happen, including the use of memory contexts, or even estimate the\n> size of the number of tuples. This one looks promising because it\n> seems exact, however for virtual slots I don't like much the fact that\n> you basically just extracted the parts of tts_virtual_materialize()\n> and stuck them in this routine. That's a recipe for future bugs if\n> the materialization logic changes. In short, I am surprised that this\n> calculation is not directly part of TupleTableSlotOps. What we'd want\n> is to get this information depending on the slot type dealt with, and\n> with your patch you would miss to handle any new slot type\n> introduced.\n>\n\nYes for virtual slots, I reused the code from\ntts_virtual_materialize() in GetTupleSize(). 
I can think of below\noptions:\n\n1) Make the size calculation code for virtual slots, a macro or a\nstatic inline function and use that in tts_virtual_materialize() and\nGetTupleSize().\n2) Add comments in both the places, such as \"if any code is changed\nhere, consider changing it in tts_virtual_materialize() /\nGetTupleSize()\"\n3) Add a size variable to TupleTableSlotOps structure.\n4) Add a new API to TupleTableSlotOps structure say get_slot_size().\n5) For new slot types, maybe we can have comments in tuptable.h to\nconsider having equivalent change in GetTupleSize().\n\nIf we go with 3 and 4, will it be acceptable to add the extra code in\ngeneric structure which gets used in most of the code base and use\nthat new code only in limited places (for multi inserts in CTAS and\nRefresh Mat View)? I think we can go ahead with 2 and 5. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Nov 2020 12:01:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On 26-11-2020 07:31, Bharath Rupireddy wrote:\n> On Thu, Nov 26, 2020 at 9:55 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> +inline Size\n>> +GetTupleSize(TupleTableSlot *slot, Size maxsize)\n>> +{\n>> + Size sz = 0;\n>> + HeapTuple tuple = NULL;\n>> +\n>> + if (TTS_IS_HEAPTUPLE(slot))\n>> + tuple = ((HeapTupleTableSlot *) slot)->tuple;\n>> + else if(TTS_IS_BUFFERTUPLE(slot))\n>> + tuple = ((BufferHeapTupleTableSlot *) slot)->base.tuple;\n>> + else if(TTS_IS_MINIMALTUPLE(slot))\n>> + tuple = ((MinimalTupleTableSlot *) slot)->tuple;\n>>\n>> There have been various talks about the methods we could use to\n>> evaluate the threshold in bytes when evaluating that a flush can\n>> happen, including the use of memory contexts, or even estimate the\n>> size of the number of tuples. This one looks promising because it\n>> seems exact, however for virtual slots I don't like much the fact that\n>> you basically just extracted the parts of tts_virtual_materialize()\n>> and stuck them in this routine. That's a recipe for future bugs if\n>> the materialization logic changes. In short, I am surprised that this\n>> calculation is not directly part of TupleTableSlotOps. What we'd want\n>> is to get this information depending on the slot type dealt with, and\n>> with your patch you would miss to handle any new slot type\n>> introduced.\n>>\n> \n> Yes for virtual slots, I reused the code from\n> tts_virtual_materialize() in GetTupleSize(). 
I can think of below\n> options:\n> \n> 1) Make the size calculation code for virtual slots, a macro or a\n> static inline function and use that in tts_virtual_materialize() and\n> GetTupleSize().\n> 2) Add comments in both the places, such as \"if any code is changed\n> here, consider changing it in tts_virtual_materialize() /\n> GetTupleSize()\"\n> 3) Add a size variable to TupleTableSlotOps structure.\n> 4) Add a new API to TupleTableSlotOps structure say get_slot_size().\n> 5) For new slot types, maybe we can have comments in tuptable.h to\n> consider having equivalent change in GetTupleSize().\n> \n> If we go with 3 and 4, will it be acceptable to add the extra code in\n> generic structure which gets used in most of the code base and use\n> that new code only in limited places (for multi inserts in CTAS and\n> Refresh Mat View)? I think we can go ahead with 2 and 5. Thoughts?\n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n> \n\nWhat I'm wondering about is the reason for wanting a cap on data volume. \nWhen doing some local (highly concurrent) ingest speed tests a few weeks \nago it seemed to mostly matter how many pages were being written and the \nresulting pressure on locks, etc. and not necessarily so much the actual \nmemory usage. I didn't collect proof on that though (yet). There was \nhowever a very clearly observable contention point where with bigger \nbuffers the performance would not only stagnate but actually drop.\n\nSo what I'm kinda wondering is if we should worry more about the amount \nof pages that are going to be written and maybe not so much about the \nmemory usage?\n\nIf this were to be the case then maybe we can consider improving the \ncurrent design, potentially in a follow-up patch? 
The problem I see is \nthat generically each tableam will have different choices to make on how \nto buffer and flush multiple rows, given that a storage engine might \nhave more or less write amplification, a different way of extending a \nrelation, fsm use, etc.\nAssuming we indeed want a per-tableam implementation, we could either:\n- make multi_insert buffer the tuples itself and add a flush_multi_insert.\n- add a new function called create_multi_insert which returns something \nlike a MultiInsertState, which, like a destreceiver, has a set of \ncallbacks to start, shutdown and insert.\n\nWith both solutions one part that to me seems appealing is that we \nbuffer the data in something that likely resembles the disk format very \nmuch. Thoughts?\n\nRegards,\nLuc\nSwarm64\n\n\n",
"msg_date": "Thu, 26 Nov 2020 07:55:17 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Thu, Nov 26, 2020 at 12:25 PM Luc Vlaming <luc@swarm64.com> wrote:\n>\n> What I'm wondering about is the reason for wanting a cap on data volume.\n> When doing some local (highly concurrent) ingest speed tests a few weeks\n> ago it seemed to mostly matter how many pages were being written and the\n> resulting pressure on locks, etc. and not necessarily so much the actual\n> memory usage. I didn't collect proof on that though (yet). There was\n> however a very clearly observable contention point where with bigger\n> buffers the performance would not only stagnate but actually drop.\n>\n> So what I'm kinda wondering is if we should worry more about the amount\n> of pages that are going to be written and maybe not so much about the\n> memory usage?\n>\n> If this were to be the case then maybe we can consider improving the\n> current design, potentially in a follow-up patch? The problem I see is\n> that generically each tableam will have different choices to make on how\n> to buffer and flush multiple rows, given that a storage engine might\n> have more or less write amplification, a different way of extending a\n> relation, fsm use, etc.\n> Assuming we indeed want a per-tableam implementation, we could either:\n> - make multi_insert buffer the tuples itself and add a flush_multi_insert.\n> - add a new function called create_multi_insert which returns something\n> like a MultiInsertState, which, like a destreceiver, has a set of\n> callbacks to start, shutdown and insert.\n>\n> With both solutions one part that to me seems appealing is that we\n> buffer the data in something that likely resembles the disk format very\n> much. Thoughts?\n>\n\nIMHO, I would like to go with your option 1 i.e. add a few APIs to the\nTableAmRoutine structure. Advantage is that we could use these APIs in\nat least 3 places, without much code duplication: 1) COPY 2) CTAS and\n3) Refresh Materialized View. 
I could roughly sketch the APIs in below\nway:\n\ntypedef struct MultiInsertStateData\n{\n MemoryContext micontext; /* A temporary memory context for\nmulti insert. */\n BulkInsertStateData *bistate; /* Bulk insert state. */\n TupleTableSlot **mislots; /* Array of buffered slots. */\n uint32 nslots; /* Total number of buffered slots. */\n uint64 nbytes; /* Flush buffers if the total tuple\nsize >= nbytes. */\n int32 nused; /* Number of current buffered slots for\na multi insert batch. */\n int64 nsize; /* Total tuple size for a multi insert\nbatch. */\n} MultiInsertStateData;\n\n/* Creates a temporary memory context, allocates the\nMultiInsertStateData, BulkInsertStateData and initializes other\nmembers. */\n void (*begin_multi_insert) (Relation rel,\nMultiInsertStateData **mistate, uint32 nslots, uint64 nbytes);\n\n/* Buffers the input slot into mistate slots, computes the size of the\ntuple, and adds it to the total tuple size of the buffered tuples, if\nthis size crosses mistate->nbytes, flush the buffered tuples into\ntable. For heapam, existing heap_multi_insert can be used. Once the\nbuffer is flushed, then micontext can be reset and buffered slots can\nbe cleared. */\n void (*do_multi_insert) (Relation rel, MultiInsertStateData\n*mistate, struct TupleTableSlot *slot, CommandId cid, int options);\n\n/* Flush the buffered tuples if any. For heapam, existing\nheap_multi_insert can be used. Deletes temporary memory context and\ndeallocates mistate. */\n void (*end_multi_insert) (Relation rel,\nMultiInsertStateData *mistate, CommandId cid, int options);\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Nov 2020 16:36:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "Few things:\n\nIIUC Andres mentioned similar kinds of APIs earlier in [1].\n\n[1] -\nhttps://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de\n\nI would like to add some more info to one of the APIs:\n\ntypedef struct MultiInsertStateData\n{\n MemoryContext micontext; /* A temporary memory context for\nmulti insert. */\n BulkInsertStateData *bistate; /* Bulk insert state. */\n TupleTableSlot **mislots; /* Array of buffered slots. */\n uint32 nslots; /* Total number of buffered slots. */\n int64 nbytes; /* Flush buffers if the total tuple size >=\nnbytes. */\n int32 nused; /* Number of current buffered slots for a\nmulti insert batch. */\n int64 nsize; /* Total tuple size for a multi insert batch.\n*/\n} MultiInsertStateData;\n\n/* Creates a temporary memory context, allocates the MultiInsertStateData,\nBulkInsertStateData and initializes other members. */\n void (*begin_multi_insert) (Relation rel, MultiInsertStateData\n**mistate, uint32 nslots, uint64 nbytes);\n\n/* Buffers the input slot into mistate slots, computes the size of the\ntuple, and adds it to the total buffered tuple size; if this size crosses\nmistate->nbytes, flush the buffered tuples into the table. For heapam, existing\nheap_multi_insert can be used. Once the buffer is flushed, then the\nmicontext can be reset and buffered slots can be cleared. *If nbytes i.e.\ntotal tuple size of the batch is not given, tuple size is not calculated,\ntuples are buffered until all the nslots are filled and then flushed.* */\n void (*do_multi_insert) (Relation rel, MultiInsertStateData\n*mistate, struct TupleTableSlot *slot, CommandId cid, int options);\n\n/* Flush the buffered tuples if any. For heapam, existing heap_multi_insert\ncan be used. Deletes temporary memory context and deallocates mistate. 
*/\n\n void (*end_multi_insert) (Relation rel, MultiInsertStateData\n*mistate, CommandId cid, int options);\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 26 Nov 2020 17:06:19 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On 26-11-2020 12:36, Bharath Rupireddy wrote:\n> Few things:\n> \n> IIUC Andres mentioned similar kinds of APIs earlier in [1].\n> \n> [1] - \n> https://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de \n> <https://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de>\n> \n> I would like to add some more info to one of the API:\n> \n> typedef struct MultiInsertStateData\n> {\n> MemoryContext micontext; /* A temporary memory context for \n> multi insert. */\n> BulkInsertStateData *bistate; /* Bulk insert state. */\n> TupleTableSlot **mislots; /* Array of buffered slots. */\n> uint32 nslots; /* Total number of buffered slots. */\n> int64 nbytes; /* Flush buffers if the total tuple size \n> >= nbytes. */\n> int32 nused; /* Number of current buffered slots for a \n> multi insert batch. */\n> int64 nsize; /* Total tuple size for a multi insert \n> batch. */\n> } MultiInsertStateData;\n> \n> /* Creates a temporary memory context, allocates the \n> MultiInsertStateData, BulkInsertStateData and initializes other members. */\n> void (*begin_multi_insert) (Relation rel, \n> MultiInsertStateData **mistate, uint32 nslots, uint64 nbytes);\n> \n> /* Buffers the input slot into mistate slots, computes the size of the \n> tuple, and adds it total buffer tuple size, if this size crosses \n> mistate->nbytes, flush the buffered tuples into table. For heapam, \n> existing heap_multi_insert can be used. Once the buffer is flushed, then \n> the micontext can be reset and buffered slots can be cleared. *If nbytes \n> i.e. total tuple size of the batch is not given, tuple size is not \n> calculated, tuples are buffered until all the nslots are filled and then \n> flushed.* */\n> void (*do_multi_insert) (Relation rel, MultiInsertStateData \n> *mistate, struct TupleTableSlot *slot, CommandId cid, int options);\n> \n> /* Flush the buffered tuples if any. For heapam, existing \n> heap_multi_insert can be used. 
Deletes temporary memory context and \n> deallocates mistate. */\n> void (*end_multi_insert) (Relation rel, MultiInsertStateData \n> *mistate, CommandId cid, int options);\n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n\nLooks all good to me, except for the nbytes part.\nCould you explain to me what use case that supports? IMHO the tableam \ncan best decide itself that its time to flush, based on its \nimplementation that e.g. considers how many pages to flush at a time and \nsuch, etc? This means also that most of the fields of \nMultiInsertStateData can be private as each tableam would return a \nderivative of that struct (like with the destreceivers).\n\nOne thing I'm wondering is in which memory context the slots end up \nbeing allocated. I'd assume we would want to keep the slots around \nbetween flushes. If they are in the temporary context this might prove \nproblematic however?\n\nRegards,\nLuc\n\n\n",
"msg_date": "Thu, 26 Nov 2020 13:04:31 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Thu, Nov 26, 2020 at 5:34 PM Luc Vlaming <luc@swarm64.com> wrote:\n>\n> On 26-11-2020 12:36, Bharath Rupireddy wrote:\n> > Few things:\n> >\n> > IIUC Andres mentioned similar kinds of APIs earlier in [1].\n> >\n> > [1] -\n> > https://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de\n> > <https://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de>\n> >\n> > I would like to add some more info to one of the API:\n> >\n> > typedef struct MultiInsertStateData\n> > {\n> > MemoryContext micontext; /* A temporary memory context for\n> > multi insert. */\n> > BulkInsertStateData *bistate; /* Bulk insert state. */\n> > TupleTableSlot **mislots; /* Array of buffered slots. */\n> > uint32 nslots; /* Total number of buffered slots. */\n> > int64 nbytes; /* Flush buffers if the total tuple size\n> > >= nbytes. */\n> > int32 nused; /* Number of current buffered slots for a\n> > multi insert batch. */\n> > int64 nsize; /* Total tuple size for a multi insert\n> > batch. */\n> > } MultiInsertStateData;\n> >\n> > /* Creates a temporary memory context, allocates the\n> > MultiInsertStateData, BulkInsertStateData and initializes other members. */\n> > void (*begin_multi_insert) (Relation rel,\n> > MultiInsertStateData **mistate, uint32 nslots, uint64 nbytes);\n> >\n> > /* Buffers the input slot into mistate slots, computes the size of the\n> > tuple, and adds it total buffer tuple size, if this size crosses\n> > mistate->nbytes, flush the buffered tuples into table. For heapam,\n> > existing heap_multi_insert can be used. Once the buffer is flushed, then\n> > the micontext can be reset and buffered slots can be cleared. *If nbytes\n> > i.e. 
total tuple size of the batch is not given, tuple size is not\n> > calculated, tuples are buffered until all the nslots are filled and then\n> > flushed.* */\n> > void (*do_multi_insert) (Relation rel, MultiInsertStateData\n> > *mistate, struct TupleTableSlot *slot, CommandId cid, int options);\n> >\n> > /* Flush the buffered tuples if any. For heapam, existing\n> > heap_multi_insert can be used. Deletes temporary memory context and\n> > deallocates mistate. */\n> > void (*end_multi_insert) (Relation rel, MultiInsertStateData\n> > *mistate, CommandId cid, int options);\n>\n> Looks all good to me, except for the nbytes part.\n> Could you explain to me what use case that supports? IMHO the tableam\n> can best decide itself that its time to flush, based on its\n> implementation that e.g. considers how many pages to flush at a time and\n> such, etc? This means also that most of the fields of\n> MultiInsertStateData can be private as each tableam would return a\n> derivative of that struct (like with the destreceivers).\n>\n\nnbytes is basically to support the following case, say the number of\ntuples to buffer is 1000, and if all the tuples are toasted with size\nin few hundred MB or even GB, then do we want to wait until 1000\ntuples are buffered in which case we occupy for one query 1000*toasted\ntuple size in GB. So, if we have a memory limit, then it will give\nflexibility. Whether to use it or not is up to the table AM\nimplementation. And also that existing copy code(since it can know the\ntuple size after parsing input data) uses this mechanism to decide\nwhen to flush.\n\nIf the nbytes is not used in a table am, then the multi insert can\nwait until the total tuples, how much ever large memory they occupy,\nare buffered.\n\nIMO, we can retain nbytes for now to decide on when to flush. Thoughts?\n\nI wonder, how can the do_multi_insert() API decide on when to flush, I\nmean, based on the number of pages to flush? 
Do we need to pass the\nmaximum number of pages the buffered tuples can occupy and track the\npages currently buffered tuples occupy to decide when to flush? Or is\nit something that the existing table AM infrastructure already\nsupports? If we use the number of pages to decide on when to flush,\nhow well does it work with parallel inserts?\n\n>\n> One thing I'm wondering is in which memory context the slots end up\n> being allocated. I'd assume we would want to keep the slots around\n> between flushes. If they are in the temporary context this might prove\n> problematic however?\n>\n\nI should not have used the word temporary, it actually is not\ntemporary. This memory context will be created in begin_multi_insert(),\nall the buffered tuples are copied using this context, it will be\nreset at the end of each flush and reused. It can get destroyed at the\nend in end_multi_insert(). I think we should even do this with the new\nAPIs implementation.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Nov 2020 19:15:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On 26-11-2020 14:45, Bharath Rupireddy wrote:\n> On Thu, Nov 26, 2020 at 5:34 PM Luc Vlaming <luc@swarm64.com> wrote:\n>>\n>> On 26-11-2020 12:36, Bharath Rupireddy wrote:\n>>> Few things:\n>>>\n>>> IIUC Andres mentioned similar kinds of APIs earlier in [1].\n>>>\n>>> [1] -\n>>> https://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de\n>>> <https://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de>\n>>>\n>>> I would like to add some more info to one of the API:\n>>>\n>>> typedef struct MultiInsertStateData\n>>> {\n>>> MemoryContext micontext; /* A temporary memory context for\n>>> multi insert. */\n>>> BulkInsertStateData *bistate; /* Bulk insert state. */\n>>> TupleTableSlot **mislots; /* Array of buffered slots. */\n>>> uint32 nslots; /* Total number of buffered slots. */\n>>> int64 nbytes; /* Flush buffers if the total tuple size\n>>> >= nbytes. */\n>>> int32 nused; /* Number of current buffered slots for a\n>>> multi insert batch. */\n>>> int64 nsize; /* Total tuple size for a multi insert\n>>> batch. */\n>>> } MultiInsertStateData;\n>>>\n>>> /* Creates a temporary memory context, allocates the\n>>> MultiInsertStateData, BulkInsertStateData and initializes other members. */\n>>> void (*begin_multi_insert) (Relation rel,\n>>> MultiInsertStateData **mistate, uint32 nslots, uint64 nbytes);\n>>>\n>>> /* Buffers the input slot into mistate slots, computes the size of the\n>>> tuple, and adds it total buffer tuple size, if this size crosses\n>>> mistate->nbytes, flush the buffered tuples into table. For heapam,\n>>> existing heap_multi_insert can be used. Once the buffer is flushed, then\n>>> the micontext can be reset and buffered slots can be cleared. *If nbytes\n>>> i.e. 
total tuple size of the batch is not given, tuple size is not\n>>> calculated, tuples are buffered until all the nslots are filled and then\n>>> flushed.* */\n>>> void (*do_multi_insert) (Relation rel, MultiInsertStateData\n>>> *mistate, struct TupleTableSlot *slot, CommandId cid, int options);\n>>>\n>>> /* Flush the buffered tuples if any. For heapam, existing\n>>> heap_multi_insert can be used. Deletes temporary memory context and\n>>> deallocates mistate. */\n>>> void (*end_multi_insert) (Relation rel, MultiInsertStateData\n>>> *mistate, CommandId cid, int options);\n>>\n>> Looks all good to me, except for the nbytes part.\n>> Could you explain to me what use case that supports? IMHO the tableam\n>> can best decide itself that its time to flush, based on its\n>> implementation that e.g. considers how many pages to flush at a time and\n>> such, etc? This means also that most of the fields of\n>> MultiInsertStateData can be private as each tableam would return a\n>> derivative of that struct (like with the destreceivers).\n>>\n> \n> nbytes is basically to support the following case, say the number of\n> tuples to buffer is 1000, and if all the tuples are toasted with size\n> in few hundred MB or even GB, then do we want to wait until 1000\n> tuples are buffered in which case we occupy for one query 1000*toasted\n> tuple size in GB. So, if we have a memory limit, then it will give\n> flexibility. Whether to use it or not is up to the table AM\n> implementation. And also that existing copy code(since it can know the\n> tuple size after parsing input data) uses this mechanism to decide\n> when to flush.\n> \n> If the nbytes is not used in a table am, then the multi insert can\n> wait until the total tuples, how much ever large memory they occupy,\n> are buffered.\n> \n> IMO, we can retain nbytes for now to decide on when to flush. Thoughts?\n\nI'm very sorry I had not realized at all that the toasted data would be \nkept in memory until written out. 
I guess I'm not familiar enough with \nthat part yet. I assumed this would be toasted beforehand and be tableam \nagnostic, and that any decision from the tableam to flush would happen \nway before a lot of memory would have accumulated, which is a bit naive in \nhindsight.\n\n> \n> I wonder, how can the do_multi_insert() API decide on when to flush, I\n> mean, based on the number of pages to flush? Do we need to pass the\n> maximum number of pages the buffered tuples can occupy and track the\n> pages currently buffered tuples occupy to decide when to flush? Or is\n> it something that the existing table AM infrastructure already\n> supports? If we use the number of pages to decide on when to flush,\n> how well it works with parallel inserts?\n> \n\nI was assuming each tableam would use its own logic, based on its needs and \nthe tradeoffs a storage engine might want to provide. This does not mean \nit should not consider outside parameters, like the aforementioned \nmemory usage.\nI think it would imply that each tableam implements its own tracking \nmechanism for how much has accumulated, how, and when to flush, because \nthey might track different statistics. IMHO given that each tableam \nanyway would want to implement its own logic on how to store a slot into \na page, the logic for tracking these statistics seemed minor to \nme. Maybe I missed some parts that should be extracted out to a generic \ninterface however?\n\nSome examples of why a tableam could decide on its own on when to flush:\n- the current heap implementation could accumulate a few pages (say up \nto 64) and thereby limit the amount of calls to write() and limit the \naccompanying blocks/context switches. 
This would also then make the \nwrites more sequential wrt the processes which can help with the \nflushing I presume, like how the sequential scan was optimized to \nprocess a consecutive set of blocks per worker (see \ntable_block_parallelscan_nextpage).\n- something like zheap could accumulate data based on the amount of \ncolumns so that a page with column data is completely filled, thereby \nlimiting the write amplification.\n- something that would implement an LSM storage might accumulate a full \nin-memory level before flushing it out.\n\n>>\n>> One thing I'm wondering is in which memory context the slots end up\n>> being allocated. I'd assume we would want to keep the slots around\n>> between flushes. If they are in the temporary context this might prove\n>> problematic however?\n>>\n> \n> I should not have used the word temporary, it actually is not\n> temporary. This memory context will be created in begin_multi_insert(),\n> all the buffered tuples are copied using this context, it will be\n> reset at the end of each flush and reused. It can get destroyed at the\n> end in end_multi_insert(). I think we should even do this with the new\n> APIs implementation.\n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n> \n\nOkay. But in which context are the slots themselves allocated then? \nBecause if we allocate the slots themselves in the context and then \nreset with each flush we would have to also re-allocate the slots every \nflush, which seems wasteful to me?\n\nRegards,\nLuc\nSwarm64\n\n\n",
"msg_date": "Fri, 27 Nov 2020 07:51:59 +0100",
"msg_from": "Luc Vlaming <luc@swarm64.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Fri, Nov 27, 2020 at 12:22 PM Luc Vlaming <luc@swarm64.com> wrote:\n>\n> >\n> > I wonder, how can the do_multi_insert() API decide on when to flush, I\n> > mean, based on the number of pages to flush? Do we need to pass the\n> > maximum number of pages the buffered tuples can occupy and track the\n> > pages currently buffered tuples occupy to decide when to flush? Or is\n> > it something that the existing table AM infrastructure already\n> > supports? If we use the number of pages to decide on when to flush,\n> > how well it works with parallel inserts?\n>\n> I was assuming each tableam to use its own logic, based on its needs and\n> the tradeoffs a storage engine might want to provide. This does not mean\n> it should not consider outside parameters, like the aforementioned\n> memory usage.\n> I think it would imply that each tableam implements its own tracking\n> mechanism for how much has accumulated, how, and when to flush, because\n> they might track different statistics. IMHO given that each tableam\n> anyway would want to implement its own logic on how to store a slot into\n> a page, tracking the logic for tracking these statistics seemed minor to\n> me. Maybe I missed some parts that should be extracted out to a generic\n> interface however?\n>\n\nAgree with you that tracking page level or some other info is\ndependent on table am implementations.\n\n>\n> Some examples of why a tableam could decide on its own on when to flush:\n> - the current heap implementation could accumulate a few pages (say up\n> to 64) and thereby limit the amount of calls to write() and limit the\n> accompanying blocks/context switches. 
This would also then make the\n> writes more sequential wrt the processes which can help with the\n> flushing I presume, like how the sequential scan was optimized to\n> process a consequtive set of blocks per worker (see\n> table_block_parallelscan_nextpage).\n> - something like zheap could accumulate data based on the amount of\n> columns so that a page with column data is completely filled, thereby\n> limiting the write amplification.\n> - something that would implement an lsm storage might accumulate a full\n> in-memory level before flushing it out.\n>\n\nThanks for the details.\n\n>\n> >> One thing I'm wondering is in which memory context the slots end up\n> >> being allocated. I'd assume we would want to keep the slots around\n> >> between flushes. If they are in the temporary context this might prove\n> >> problematic however?\n> >>\n> >\n> > I should not have used the word temporary, it actually is not\n> > temporary. This memory conext will be created in begin_multi_insert(),\n> > all the buffered tuples are copied using this context, it will be\n> > reset at the end of each flush and reused. It can get destroyed at the\n> > end in end_multi_insert(). I think we should even do this with the new\n> > APIs implementation.\n> >\n>\n> Okay. But in which context are the slots themselves allocated then?\n> Because if we allocate the slots themselves in the context and then\n> reset with each flush we would have to also re-allocate the slots every\n> flush, which seems wasteful to me?\n>\n\nBuffer slots are allocated in the memory context in which the new APIs\nget called. We don't have to re-allocate the slots every time after\nflushing, but we have to clear them using ExecClearTuple() and reuse.\n\nAnd the memory context I specified in the MultiInsertStateData\nstructure is for using table_multi_insert() inside the new\ndo_multi_insert API after we decide to flush. 
There's a comment in the\nexisting table_multi_insert() usage in copy code, which says that\ntable_multi_insert() may leak the memory, for the same reason we need\nthat temporary memory context, which gets set just before\ntable_multi_insert(), and reset after that. This happens for each\nbatch of tuples. And in the end this context can be deleted in the\nend_multi_insert API.\n\nHope this helps.\n\nI'm planning to summarize and post the new APIs description here again\nfor other opinions.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Nov 2020 13:16:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "Hi,\n\nCurrently, required logic for multi inserts (such as buffer slots\nallocation, flushing, tuple size calculation to decide when to flush,\ncleanup and so on) is being handled outside of the existing tableam APIs.\nAnd there are a good number of cases where multi inserts can be used, such\nas for existing COPY or for CTAS, CREATE/REFRESH MATERIALIZED VIEW\n[proposed in this thread], and INSERT INTO SELECTs [here\n<https://www.postgresql.org/list/pgsql-hackers/since/202009240000/>] which\nare currently under discussion. Handling the same multi inserts logic in\nmany places is error prone and duplicates most of the code. To avoid this,\nproposing here are generic tableam APIs, that can be used in all the cases\nand which also gives the flexibility to tableam developers in implementing\nmulti inserts logic dependent on the underlying storage engine[1].\n\nI would like to seek thoughts/opinions on the proposed new APIs. Once\nreviewed, I will start implementing them.\n\n[1] -\nhttps://www.postgresql.org/message-id/ca3dd08f-4ce0-01df-ba30-e9981bb0d54e%40swarm64.com\n\nBelow are the proposed structures and APIs:\n\n/* Holds the multi insert related information. */\ntypedef struct MultiInsertStateData\n{\n /* A temporary memory context for multi insert. */\n MemoryContext micontext;\n /* Bulk insert state. */\n BulkInsertStateData *bistate;\n /* Array of buffered slots. */\n TupleTableSlot **mislots;\n /* Maximum number of slots that can be buffered. */\n int32 nslots;\n /* Number of slots that are currently buffered. */\n int32 nused;\n /*\n * Maximum total tuple size that can be buffered in\n * a single batch. Flush the buffered tuples if the\n * current total tuple size, nsize >= nbytes.\n */\n int64 nbytes;\n /*\n * Total tuple size in bytes of the slots that are\n * currently buffered.\n */\n int64 nsize;\n /*\n * Whether to clear the buffered slots content\n * after the flush? 
If the relation has indexes\n * or after row triggers, the buffered slots\n * required outside do_multi_insert() and clean\n * them using ExecClearTuple() outside the\n * do_multi_insert API. If true, do_multi_insert()\n * can clear the slots.\n */\n bool clearslots;\n /*\n * If true, do_multi_insert will flush the buffered\n * slots, if any, bypassing the slot count and total\n * tuple size checks. This can be useful in cases,\n * where one of the partition can not use multi inserts\n * but others can and they have buffered few slots\n * so far, which need to be flushed for visibility,\n * before the partition that doesn't support can\n * proceed with single inserts.\n */\n bool forceflush;\n} MultiInsertStateData;\n\n/*\n * Allocates and initializes the MultiInsertStateData. Creates a temporary\n * memory context for multi inserts, allocates BulkInsertStateData.\n */\nvoid (*begin_multi_insert) (Relation rel,\n MultiInsertStateData **mistate,\n uint32 nslots,\n uint64 nbytes);\n\n/*\n * Buffers the input slot into mistate slots. Computes the size of the\ntuple,\n * and adds it to the total size of the buffered tuples. If this size\ncrosses\n * nbytes, flush the buffered tuples into the table. Clear the buffered\nslots\n * content if clearslots is true. If nbytes i.e. the maximum total tuple\nsize\n * of the buffered tuples is not given, the tuple size is not calculated,\n * tuples are buffered until all the nslots are filled and then flushed.\n *\n * For heapam, existing heap_multi_insert can be called using\n * rel->rd_tableam->multi_insert() for flushing.\n */\nvoid (*do_multi_insert) (Relation rel,\n struct MultiInsertStateData *mistate,\n struct TupleTableSlot *slot,\n CommandId cid,\n int options);\n\n/*\n * Flush the buffered tuples if any. Clear the buffered slots content if\n * clearslots is true. 
Deletes temporary memory context and deallocates\n * mistate.\n */\nvoid (*end_multi_insert) (Relation rel,\n struct MultiInsertStateData *mistate,\n CommandId cid,\n int options);\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 30 Nov 2020 10:48:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 10:49 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> Currently, required logic for multi inserts (such as buffer slots allocation, flushing, tuple size calculation to decide when to flush, cleanup and so on) is being handled outside of the existing tableam APIs. And there are a good number of cases where multi inserts can be used, such as for existing COPY or for CTAS, CREATE/REFRESH MATERIALIZED VIEW [proposed in this thread], and INSERT INTO SELECTs [here] which are currently under discussion. Handling the same multi inserts logic in many places is error prone and duplicates most of the code. To avoid this, proposing here are generic tableam APIs, that can be used in all the cases and which also gives the flexibility to tableam developers in implementing multi inserts logic dependent on the underlying storage engine[1].\n>\n> I would like to seek thoughts/opinions on the proposed new APIs. Once reviewed, I will start implementing them.\n\nIMHO, if we think that something really specific to the tableam then\nit makes sense to move it there. But just to avoid duplicating the\ncode it might not be the best idea. Instead, you can write some\ncommon functions and we can call them from different places. So if\nsomething is very much common and will not vary based on the storage\ntype we can keep it outside the tableam interface however we can move\nthem into some common functions to avoid duplication.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Dec 2020 13:38:15 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Thu, Dec 3, 2020 at 1:38 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 30, 2020 at 10:49 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Currently, required logic for multi inserts (such as buffer slots allocation, flushing, tuple size calculation to decide when to flush, cleanup and so on) is being handled outside of the existing tableam APIs. And there are a good number of cases where multi inserts can be used, such as for existing COPY or for CTAS, CREATE/REFRESH MATERIALIZED VIEW [proposed in this thread], and INSERT INTO SELECTs [here] which are currently under discussion. Handling the same multi inserts logic in many places is error prone and duplicates most of the code. To avoid this, proposing here are generic tableam APIs, that can be used in all the cases and which also gives the flexibility to tableam developers in implementing multi inserts logic dependent on the underlying storage engine[1].\n> >\n> > I would like to seek thoughts/opinions on the proposed new APIs. Once reviewed, I will start implementing them.\n>\n> IMHO, if we think that something really specific to the tableam then\n> it makes sense to move it there. But just to avoid duplicating the\n> code it might not be the best idea. Instead, you can write some\n> common functions and we can call them from different places. So if\n> something is very much common and will not vary based on the storage\n> type we can keep it outside the tableam interface however we can move\n> them into some common functions to avoid duplication.\n>\n\nThanks for the response. Main design goal of the new APIs is to give\nflexibility to tableam developers in implementing multi insert logic\ndependent on the underlying storage engine. 
Currently, for all the\nunderlying storage engines, we follow the same multi insert logic such\nas when and how to flush the buffered tuples, tuple size calculation,\nand this logic doesn't take into account the underlying storage engine\ncapabilities. Please have a look at [1] where this point was brought\nup by @Luc Vlaming. The subsequent discussion went on to some level of\nagreement on the proposed APIs.\n\nI want to clarify that avoiding duplicate multi insert code (for COPY,\nCTAS, CREATE/REFRESH MAT VIEW and INSERT SELECTs) is a byproduct(not a\nmain design goal) if we implement the new APIs for heap AM. I feel\nsorry for projecting the goal as avoiding duplicate code earlier.\n\nI also want to mention that @Andres Freund visualized similar kinds of\nAPIs in [2].\n\nI tried to keep the API as generic as possible, please have a look at\nthe new structure and APIs [3].\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/ca3dd08f-4ce0-01df-ba30-e9981bb0d54e%40swarm64.com\n[2] - https://www.postgresql.org/message-id/20200924024128.kyk3r5g7dnu3fxxx%40alap3.anarazel.de\n[3] - https://www.postgresql.org/message-id/CALj2ACV8_O651C2zUqrVSRFDJkp8%3DTMwSdG9%2BmDGL%2BvF6CD%2BAQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Dec 2020 14:27:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
},
{
"msg_contents": "On Mon, 16 Nov 2020 at 15:32, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Nov 16, 2020 at 8:02 PM Paul Guo <guopa@vmware.com> wrote:\n> >\n> > > On Nov 13, 2020, at 7:21 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 10, 2020 at 3:47 PM Paul Guo <guopa@vmware.com> wrote:\n> > >>\n> > >> Thanks for doing this. There might be another solution - use raw insert interfaces (i.e. raw_heap_insert()).\n> > >> Attached is the test (not formal) patch that verifies this idea. raw_heap_insert() writes the page into the\n> > >> table files directly and also write the FPI xlog when the tuples filled up the whole page. This seems be\n> > >> more efficient.\n> > >>\n> > >\n> > > Thanks. Will the new raw_heap_insert() APIs scale well (i.e. extend\n> > > the table parallelly) with parallelism? The existing\n> > > table_multi_insert() API scales well, see, for instance, the benefit\n> > > with parallel copy[1] and parallel multi inserts in CTAS[2].\n> >\n> > Yes definitely some work needs to be done to make raw heap insert interfaces fit the parallel work, but\n> > it seems that there is no hard blocking issues for this?\n> >\n>\n> I may be wrong here. If we were to allow raw heap insert APIs to\n> handle parallelism, shouldn't we need some sort of shared memory to\n> allow coordination among workers? 
If we do so, at the end, aren't\n> these raw insert APIs equivalent to current table_multi_insert() API\n> which uses a separate shared ring buffer(bulk insert state) for\n> insertions?\n>\n> And can we think of these raw insert APIs similar to the behaviour of\n> table_multi_insert() API for unlogged tables?\n\nI found the additional performance of Paul Guo's work to be compelling\nand the idea workable for very large loads.\n\nSurely LockRelationForExtension() is all the inter-process\ncoordination we need to make this work for parallel loads?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 30 Dec 2020 15:04:12 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi Inserts in CREATE TABLE AS - revived patch"
}
]
[
{
"msg_contents": "to_json is declared as taking \"anyelement\" as input, which means\nyou can't pass it something of unknown type:\n\n postgres=# SELECT to_json('foo');\n ERROR: could not determine polymorphic type because input has type unknown\n\nBut this works fine with the very similar json_build_array function:\n\n postgres=# SELECT json_build_array('foo');\n json_build_array\n ------------------\n [\"foo\"]\n (1 row)\n\nThe difference is that json_build_array takes type \"any\" as input, while\nto_json takes \"anyelement\" as input.\n\nIs there some reason to_json couldn't be switched to take \"any\" as input?\nHacking this together seems to mostly just work:\n\n postgres=# CREATE FUNCTION my_to_json (\"any\") RETURNS json LANGUAGE 'internal' AS 'to_json';\n postgres=# SELECT my_to_json('foo');\n my_to_json\n ------------\n \"foo\"\n (1 row)\n\nIs there something I'm missing?\n\nNikhil\n\n\n",
"msg_date": "Tue, 3 Nov 2020 11:53:55 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why does to_json take \"anyelement\" rather than \"any\"?"
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 11:54 AM Nikhil Benesch <nikhil.benesch@gmail.com>\nwrote:\n\n> to_json is declared as taking \"anyelement\" as input, which means\n> you can't pass it something of unknown type:\n>\n> postgres=# SELECT to_json('foo');\n> ERROR: could not determine polymorphic type because input has type\n> unknown\n>\n> But this works fine with the very similar json_build_array function:\n>\n> postgres=# SELECT json_build_array('foo');\n> json_build_array\n> ------------------\n> [\"foo\"]\n> (1 row)\n>\n> The difference is that json_build_array takes type \"any\" as input, while\n> to_json takes \"anyelement\" as input.\n>\n> Is there some reason to_json couldn't be switched to take \"any\" as input?\n> Hacking this together seems to mostly just work:\n>\n> postgres=# CREATE FUNCTION my_to_json (\"any\") RETURNS json LANGUAGE\n> 'internal' AS 'to_json';\n> postgres=# SELECT my_to_json('foo');\n> my_to_json\n> ------------\n> \"foo\"\n> (1 row)\n>\n> Is there something I'm missing?\n>\n> Nikhil\n>\n>\nHm, good question. I am also curious as to why this happens.\n`json_build_array` ends up casting unknowns to text (from reading the\ncode), which seems like a reasonable (although not completely tight)\nassumption. Not sure why `to_json` can't just do the same. 
You can always\ncast to text yourself, of course, but I am not familiar with the type\nhierarchy enough to tell why `to_json` can't deduce that as text whereas\nthe other function can.",
"msg_date": "Thu, 5 Nov 2020 17:42:47 -0500",
"msg_from": "Mohamed Wael Khobalatte <mkhobalatte@grubhub.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does to_json take \"anyelement\" rather than \"any\"?"
},
{
"msg_contents": "On Thu, Nov 5, 2020 at 3:43 PM Mohamed Wael Khobalatte <\nmkhobalatte@grubhub.com> wrote:\n\n> You can always cast to text yourself, of course, but I am not familiar\n> with the type hierarchy enough to tell why `to_json` can't deduce that as\n> text whereas the other function can.\n>\n\nMy understanding is that \"any\" is defined to accept that behavior -\nallowing any pseudo-type and unknown. The \"anyelement\" polymorphic\npseudo-type is defined such that only concrete known types are allowed to\nmatch - and then the rules of polymorphism apply when performing a lookup.\nMy uninformed conclusion is that since to_json only defines a single\nparameter that changing it from \"anyelement\" to \"any\" would be reasonable\nand the hack describe probably \"just works\" (though I'd test it on a\nwide-range of built-in types first if I was actually going to use the hack).\n\nYou only get to use \"any\" for a C-language function but that is indeed the\ncase here.\n\nDavid J.",
"msg_date": "Thu, 5 Nov 2020 17:38:46 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does to_json take \"anyelement\" rather than \"any\"?"
},
{
"msg_contents": "On 11/5/20 7:38 PM, David G. Johnston wrote:\n> My understanding is that \"any\" is defined to accept that behavior - allowing any pseudo-type and unknown. The \"anyelement\" polymorphic pseudo-type is defined such that only concrete known types are allowed to match - and then the rules of polymorphism apply when performing a lookup. My uninformed conclusion is that since to_json only defines a single parameter that changing it from \"anyelement\" to \"any\" would be reasonable and the hack describe probably \"just works\" (though I'd test it on a wide-range of built-in types first if I was actually going to use the hack).\n> \n> You only get to use \"any\" for a C-language function but that is indeed the case here.\n\nThat exactly matches my understanding as well. I'll put together a patch.\n\nNikhil\n\n\n",
"msg_date": "Thu, 5 Nov 2020 20:45:37 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why does to_json take \"anyelement\" rather than \"any\"?"
},
{
"msg_contents": "Nikhil Benesch <nikhil.benesch@gmail.com> writes:\n> On 11/5/20 7:38 PM, David G. Johnston wrote:\n>> My understanding is that \"any\" is defined to accept that behavior - allowing any pseudo-type and unknown. The \"anyelement\" polymorphic pseudo-type is defined such that only concrete known types are allowed to match - and then the rules of polymorphism apply when performing a lookup. My uninformed conclusion is that since to_json only defines a single parameter that changing it from \"anyelement\" to \"any\" would be reasonable and the hack describe probably \"just works\" (though I'd test it on a wide-range of built-in types first if I was actually going to use the hack).\n>> \n>> You only get to use \"any\" for a C-language function but that is indeed the case here.\n\n> That exactly matches my understanding as well. I'll put together a patch.\n\n\"any\" is a dinosaur IMO. It's definitely lower-level than anyelement;\nfor example the function has to be prepared to deal with raw \"unknown\"\nliterals. So I feel like the proposed solution here is a bit of a hack.\n\nWhat I'm wondering about as I think about this is why we don't allow\nunknown literals to be resolved as text when matching to anyelement.\nMaybe that was intentional, or maybe just overly conservative; or maybe\nthere is a good reason for it. 
I don't recall, but it would be worth\nexcavating in the list archives to see if it was discussed when the\npolymorphic types were being designed.\n\nA relevant data point is that we *do* allow the case with the more\nrecently added \"anycompatible\" polymorphics:\n\nregression=# create function foo(anycompatible) returns anycompatible as 'select $1' language sql;\nCREATE FUNCTION\nregression=# select foo('bar');\n foo \n-----\n bar\n(1 row)\n\nregression=# select pg_typeof(foo('bar'));\n pg_typeof \n-----------\n text\n(1 row)\n\nSo even if we decide that changing the rules for \"anyelement\" is\ntoo scary, I think switching to_json to anycompatible would be\npreferable to switching it to \"any\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Nov 2020 20:58:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why does to_json take \"anyelement\" rather than \"any\"?"
},
{
"msg_contents": "On 11/5/20 8:58 PM, Tom Lane wrote:\n> \"any\" is a dinosaur IMO. It's definitely lower-level than anyelement;\n> for example the function has to be prepared to deal with raw \"unknown\"\n> literals. So I feel like the proposed solution here is a bit of a hack.\n\nI see what you are saying, but since the code for to_jsonb is shared with\nthe code for jsonb_build_array and jsonb_build_object, which already\nhandle the pitfalls of \"any\", the patch seems to be literally this\nsimple:\n\ndiff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat\nindex c01da4bf01..11adf748c9 100644\n--- a/src/include/catalog/pg_proc.dat\n+++ b/src/include/catalog/pg_proc.dat\n@@ -8466,7 +8466,7 @@\n prosrc => 'json_object_two_arg' },\n { oid => '3176', descr => 'map input to json',\n proname => 'to_json', provolatile => 's', prorettype => 'json',\n- proargtypes => 'anyelement', prosrc => 'to_json' },\n+ proargtypes => 'any', prosrc => 'to_json' },\n { oid => '3261', descr => 'remove object fields with null values from json',\n proname => 'json_strip_nulls', prorettype => 'json', proargtypes => 'json',\n prosrc => 'json_strip_nulls' },\n@@ -9289,7 +9289,7 @@\n proargtypes => '_text _text', prosrc => 'jsonb_object_two_arg' },\n { oid => '3787', descr => 'map input to jsonb',\n proname => 'to_jsonb', provolatile => 's', prorettype => 'jsonb',\n- proargtypes => 'anyelement', prosrc => 'to_jsonb' },\n+ proargtypes => 'any', prosrc => 'to_jsonb' },\n { oid => '3265', descr => 'jsonb aggregate transition function',\n proname => 'jsonb_agg_transfn', proisstrict => 'f', provolatile => 's',\n prorettype => 'internal', proargtypes => 'internal anyelement',\n\nI think my argument is that regardless of which of\n{any,anyelement,anycompatible} is best, it seems like to_jsonb,\njsonb_build_array, and jsonb_build_object should all use the same\ntype.\n\n> What I'm wondering about as I think about this is why we don't allow\n> unknown literals to be resolved as 
text when matching to anyelement.\n> Maybe that was intentional, or maybe just overly conservative; or maybe\n> there is a good reason for it. I don't recall, but it would be worth\n> excavating in the list archives to see if it was discussed when the\n> polymorphic types were being designed.\n\nI excavated two separate threads from 2003 from when you and Joe Conway\nwere designing SQL99 array support and the initial polymorphic types:\n\nhttps://www.postgresql.org/message-id/3E701869.4020301%40joeconway.com\nhttps://www.postgresql.org/message-id/1272.1048633920%40sss.pgh.pa.us\n\nI didn't see anything obvious about unknown coercions, though I\ncertainly could have overlooked something. For what it's worth, the\nerror message \"could not determine polymorphic type because input has\ntype unknown\" has existed, with slightly different wording, since\nthe very first commit of the feature:\n\nhttps://github.com/postgres/postgres/commit/730840c9b (parse_coerce.c, L840)\n\n> A relevant data point is that we *do* allow the case with the more\n> recently added \"anycompatible\" polymorphics:\n>\n> <...snipped examples...>\n>\n> So even if we decide that changing the rules for \"anyelement\" is\n> too scary, I think switching to_json to anycompatible would be\n> preferable to switching it to \"any\".\n\nOh these new polymorphic types are interesting. I hadn't seen these.\n\nSo, I don't feel particularly qualified to determine how to proceed.\nThese are the options that I would be excited about:\n\n 1. Switch to_jsonb to take \"any\", as in the above patch.\n 2. Convert all of to_jsonb, jsonb_build_array, and jsonb_build_object\n to use the new \"anycompatible\" type.\n 3. Switch to_jsonb to take \"anyelement\", but change \"anyelement\" and\n friends so that \"unknown\" arguments are coereced to text.\n\nWould someone care to offer guidance on which path to choose?\n\nNikhil\n\n\n",
"msg_date": "Thu, 5 Nov 2020 22:51:36 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why does to_json take \"anyelement\" rather than \"any\"?"
},
{
"msg_contents": "On Fri, Nov 6, 2020 at 1:39 AM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Thu, Nov 5, 2020 at 3:43 PM Mohamed Wael Khobalatte <\n> mkhobalatte@grubhub.com> wrote:\n>\n>> You can always cast to text yourself, of course, but I am not familiar\n>> with the type hierarchy enough to tell why `to_json` can't deduce that as\n>> text whereas the other function can.\n>>\n>\n> My understanding is that \"any\" is defined to accept that behavior -\n> allowing any pseudo-type and unknown. The \"anyelement\" polymorphic\n> pseudo-type is defined such that only concrete known types are allowed to\n> match - and then the rules of polymorphism apply when performing a lookup.\n> My uninformed conclusion is that since to_json only defines a single\n> parameter that changing it from \"anyelement\" to \"any\" would be reasonable\n> and the hack describe probably \"just works\" (though I'd test it on a\n> wide-range of built-in types first if I was actually going to use the hack).\n>\n> You only get to use \"any\" for a C-language function but that is indeed the\n> case here.\n>\n\nType \"anyelement\" can force the function's result type directly. But there\ncannot be function that returns UNKNOWN.\n\nType \"any\" just accept any argument without any impact on result type.\nUnfortunately, inside a function is necessary to do much more work related\nto casting types, and the execution can be slower.\n\nI checked the source code of to_json and this function can use \"any\"\nwithout any change.\n\nRegards\n\nPavel\n\n\n\n> David J.\n>\n>",
"msg_date": "Fri, 6 Nov 2020 06:25:26 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does to_json take \"anyelement\" rather than \"any\"?"
}
] |
[
{
"msg_contents": "Hi,\n\nAs pointed out in [1], BRIN is not properly handling toasted data, which\nmay easily lead to index tuples referencing TOAST-ed values. Which is\nclearly wrong - it's trivial to trigger failues after a DELETE.\n\nAttached is a patch that aims to fix this - AFAIK the brin_form_tuple\nwas simply missing the TOAST_INDEX_HACK stuff from index_form_tuple,\nwhich ensures the data is detoasted and (possibly) re-compressed. The\ncode is mostly the same, with some BRIN-specific tweaks (looking at\noi_typecache instead of the index descriptor, etc.).\n\nI also attach a simple SQL script that I used to trigger the issue. This\nneeds to be turned into a regression test, I'll work on that tomorrow.\n\n\nA separate question is what to do about existing indexes - ISTM the only\nthing we can do is to tell the users to reindex all BRIN indexes on\nvarlena values. Something like this:\n\n select * from pg_class\n where relam = (select oid from pg_am where amname = 'brin')\n and oid in (select attrelid from pg_attribute where attlen = -1\n and attstorage in ('e', 'x'));\n\n\nregards\n\n\n[1] https://www.postgresql.org/message-id/20201001184133.oq5uq75sb45pu3aw%40development\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 4 Nov 2020 02:05:44 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Fix brin_form_tuple to properly detoast data"
},
{
"msg_contents": "On 11/4/20 2:05 AM, Tomas Vondra wrote:\n> Hi,\n> \n> As pointed out in [1], BRIN is not properly handling toasted data, which\n> may easily lead to index tuples referencing TOAST-ed values. Which is\n> clearly wrong - it's trivial to trigger failues after a DELETE.\n> \n> Attached is a patch that aims to fix this - AFAIK the brin_form_tuple\n> was simply missing the TOAST_INDEX_HACK stuff from index_form_tuple,\n> which ensures the data is detoasted and (possibly) re-compressed. The\n> code is mostly the same, with some BRIN-specific tweaks (looking at\n> oi_typecache instead of the index descriptor, etc.).\n> \n> I also attach a simple SQL script that I used to trigger the issue. This\n> needs to be turned into a regression test, I'll work on that tomorrow.\n> \n\nOK, so here's an improved version of the fix - aside from the code (same \nas in v1), there are two patches with regression tests. Ultimately those \nshould be merged with the fix, but this way it's possible to apply the \nregression tests to trigger the issue.\n\nThe first test is fairly trivial - it simply builds index on toasted \ndata and then shows how an insert and select fail. There's a caveat, \nthat this requires a DELETE + VACUUM, and the VACUUM actually has to \ncleanup the rows. So there must be no concurrent transactions that might \nneed the rows, which is unlikely in regression tests. So this requires \nwaiting for all running transactions to finish - I did that by building \nan index concurrently. It's a bit strange, but it's better than any \nother solution I could think of (timeout or some custom wait for xacts).\n\nThe second test is a bit redundant - it merely checks that both CREATE \nINDEX and INSERT INTO fail the same way when the index tuple gets too \nlarge. Before the fix there were some inconsistencies - the CREATE INDEX \nsucceeded because it used TOASTed data. 
So ultimately this tests the \nsame thing, but from a different perspective.\n\n\nregards\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 4 Nov 2020 23:25:22 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix brin_form_tuple to properly detoast data"
},
{
"msg_contents": "On 2020-Nov-04, Tomas Vondra wrote:\n\n> The first test is fairly trivial - it simply builds index on toasted data\n> and then shows how an insert and select fail. There's a caveat, that this\n> requires a DELETE + VACUUM, and the VACUUM actually has to cleanup the rows.\n> So there must be no concurrent transactions that might need the rows, which\n> is unlikely in regression tests. So this requires waiting for all running\n> transactions to finish - I did that by building an index concurrently. It's\n> a bit strange, but it's better than any other solution I could think of\n> (timeout or some custom wait for xacts).\n\nThere are recent changes in vacuum for temp tables (commit 94bc27b57680?)\nthat would maybe make this stable enough, without having to have the CIC\nthere. At least, I tried it locally a few times and it appears to work well.\nThis won't work for older releases though, just master. This is patch\n0001 attached here.\n\n> The second test is a bit redundant - it merely checks that both CREATE INDEX\n> and INSERT INTO fail the same way when the index tuple gets too large.\n> Before the fix there were some inconsistencies - the CREATE INDEX succeeded\n> because it used TOASTed data. So ultimately this tests the same thing, but\n> from a different perspective.\n\nHmm. This one shows page size in the error messages, so it'll fail on\nnonstandard builds. I think we try to stay away from introducing those,\nso I'd leave this test out.\n\nThe code fix looks all right -- I'd just move the #include lines to\ntheir place. Patch 0002.\n\nYou add this comment:\n\n> +\t\t\t/*\n> +\t\t\t * Do nothing if value is not of varlena type. We don't need to\n> +\t\t\t * care about NULL values here, thanks to bv_allnulls above.\n> +\t\t\t *\n> +\t\t\t * If value is stored EXTERNAL, must fetch it so we are not\n> +\t\t\t * depending on outside storage.\n> +\t\t\t *\n> +\t\t\t * XXX Is this actually true? 
Could it be that the summary is\n> +\t\t\t * NULL even for range with non-NULL data? E.g. degenerate bloom\n> +\t\t\t * filter may be thrown away, etc.\n> +\t\t\t */\n\nI think the XXX comment points to a bug that we don't have right now,\nsince neither minmax nor inclusion can end up with a NULL summary\nstarting from non-NULL data. But if the comment about bloom is correct,\nthen surely it'll become a bug when bloom is added.\n\nI don't think we need the second part of this comment:\n\n> +/*\n> + * This enables de-toasting of index entries. Needed until VACUUM is\n> + * smart enough to rebuild indexes from scratch.\n> + */\n\n... because, surely, we're now never working on having VACUUM rebuild\nindexes from scratch. In fact, I wonder if we need the #define at\nall. I propose to remove all those #ifdef lines in your patch.\n\nThe fix looks good to me. I just added a comment in 0003.",
"msg_date": "Thu, 5 Nov 2020 14:17:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix brin_form_tuple to properly detoast data"
},
{
"msg_contents": "\n\nOn 11/5/20 6:17 PM, Alvaro Herrera wrote:\n> On 2020-Nov-04, Tomas Vondra wrote:\n> \n>> The first test is fairly trivial - it simply builds index on toasted data\n>> and then shows how an insert and select fail. There's a caveat, that this\n>> requires a DELETE + VACUUM, and the VACUUM actually has to cleanup the rows.\n>> So there must be no concurrent transactions that might need the rows, which\n>> is unlikely in regression tests. So this requires waiting for all running\n>> transactions to finish - I did that by building an index concurrently. It's\n>> a bit strange, but it's better than any other solution I could think of\n>> (timeout or some custom wait for xacts).\n> \n> There are recent changes in vacuum for temp tables (commit 94bc27b57680?)\n> that would maybe make this stable enough, without having to have the CIC\n> there. At least, I tried it locally a few times and it appears to work well.\n> This won't work for older releases though, just master. This is patch\n> 0001 attached here.\n> \n\nIIUC you're suggesting to use a temporary table in the test? \nUnfortunately, this does not work on older releases, and IMHO the test \nshould be backpatched too. IMHO the CIC \"hack\" is acceptable, unless \nthere's a better solution that I'm not aware of.\n\n>> The second test is a bit redundant - it merely checks that both CREATE INDEX\n>> and INSERT INTO fail the same way when the index tuple gets too large.\n>> Before the fix there were some inconsistencies - the CREATE INDEX succeeded\n>> because it used TOASTed data. So ultimately this tests the same thing, but\n>> from a different perspective.\n> \n> Hmm. This one shows page size in the error messages, so it'll fail on\n> nonstandard builds. I think we try to stay away from introducing those,\n> so I'd leave this test out.\n\nHmm, OK. I don't think having \"redundant\" test is a big deal, but I \nhaven't thought about builds with different block sizes. 
I'll leave this \nout.\n\n> \n> The code fix looks all right -- I'd just move the #include lines to\n> their place. Patch 0002.\n> \n\nOK\n\n> You add this comment:\n> \n>> +\t\t\t/*\n>> +\t\t\t * Do nothing if value is not of varlena type. We don't need to\n>> +\t\t\t * care about NULL values here, thanks to bv_allnulls above.\n>> +\t\t\t *\n>> +\t\t\t * If value is stored EXTERNAL, must fetch it so we are not\n>> +\t\t\t * depending on outside storage.\n>> +\t\t\t *\n>> +\t\t\t * XXX Is this actually true? Could it be that the summary is\n>> +\t\t\t * NULL even for range with non-NULL data? E.g. degenerate bloom\n>> +\t\t\t * filter may be thrown away, etc.\n>> +\t\t\t */\n> \n> I think the XXX comment points to a bug that we don't have right now,\n> since neither minmax nor inclusion can end up with a NULL summary\n> starting from non-NULL data. But if the comment about bloom is correct,\n> then surely it'll become a bug when bloom is added.\n> \n\nYeah, but that'd be something for the bloom patch to fix, I think.\n\nThis got me thinking though - wouldn't it be better to handle too large \nvalues by treating the range as \"degenerate\" (i.e. store NULL and \nconsider it as matching all queries), instead of failing the CREATE \nINDEX or DML? I find the current behavior rather annoying, because it \ndepends on the other rows in the page range, not just on the one row the \nuser deals with. Perhaps this might even be considered an information \nleak about the other data. Of course, not something this patch should \ndeal with.\n\n> I don't think we need the second part of this comment:\n> \n>> +/*\n>> + * This enables de-toasting of index entries. Needed until VACUUM is\n>> + * smart enough to rebuild indexes from scratch.\n>> + */\n> \n> ... because, surely, we're now never working on having VACUUM rebuild\n> indexes from scratch. In fact, I wonder if we need the #define at\n> all. 
I propose to remove all those #ifdef lines in your patch.\n> \n\nThat's a verbatim copy of a comment from indextuple.c. IMHO we should \nkeep it the same in both places.\n\n> The fix looks good to me. I just added a comment in 0003.\n> \n\nThanks. Any opinions on fixing this in existing clusters? Any better \nideas than just giving users the SQL query to list possibly-affected \nindexes?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Nov 2020 18:42:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix brin_form_tuple to properly detoast data"
},
{
"msg_contents": "On 2020-Nov-05, Tomas Vondra wrote:\n\n> On 11/5/20 6:17 PM, Alvaro Herrera wrote:\n\n> > There are recent changes in vacuum for temp tables (commit 94bc27b57680?)\n> > that would maybe make this stable enough, without having to have the CIC\n> > there. At least, I tried it locally a few times and it appears to work well.\n> > This won't work for older releases though, just master. This is patch\n> > 0001 attached here.\n> \n> IIUC you're suggesting to use a temporary table in the test? Unfortunately,\n> this does not work on older releases, and IMHO the test should be\n> backpatched too. IMHO the CIC \"hack\" is acceptable, unless there's a better\n> solution that I'm not aware of.\n\nOh, sure, the CIC hack is acceptable for the older branches. I'm just\nsaying that you can use a temp table everywhere, and keep CIC in the old\nbranches and no CIC in master.\n\n> This got me thinking though - wouldn't it be better to handle too large\n> values by treating the range as \"degenerate\" (i.e. store NULL and consider\n> it as matching all queries), instead of failing the CREATE INDEX or DML? I\n> find the current behavior rather annoying, because it depends on the other\n> rows in the page range, not just on the one row the user deals with. Perhaps\n> this might even be considered an information leak about the other data. Of\n> course, not something this patch should deal with.\n\nHmm. Regarding text I remember thinking we could just truncate values\n(as we do for LIKE, as I recall). I suppose that strategy would work\neven for bytea.\n\n\n> > > +/*\n> > > + * This enables de-toasting of index entries. Needed until VACUUM is\n> > > + * smart enough to rebuild indexes from scratch.\n> > > + */\n> > \n> > ... because, surely, we're now never working on having VACUUM rebuild\n> > indexes from scratch. In fact, I wonder if we need the #define at\n> > all. 
I propose to remove all those #ifdef lines in your patch.\n> \n> That's a verbatim copy of a comment from indextuple.c. IMHO we should keep\n> it the same in both places.\n\nSure, if you want to.\n\n> > The fix looks good to me. I just added a comment in 0003.\n> \n> Thanks. Any opinions on fixing this in existing clusters? Any better ideas\n> than just giving users the SQL query to list possibly-affected indexes?\n\nNone here.\n\n\n",
"msg_date": "Thu, 5 Nov 2020 18:16:04 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix brin_form_tuple to properly detoast data"
},
{
"msg_contents": "On Thu, 5 Nov 2020 18:16:04 -0300\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2020-Nov-05, Tomas Vondra wrote:\n> \n> > On 11/5/20 6:17 PM, Alvaro Herrera wrote: \n> \n> > > There are recent changes in vacuum for temp tables (commit\n> > > 94bc27b57680?) that would maybe make this stable enough, without\n> > > having to have the CIC there. At least, I tried it locally a few\n> > > times and it appears to work well. This won't work for older\n> > > releases though, just master. This is patch 0001 attached here. \n> > \n> > IIUC you're suggesting to use a temporary table in the test?\n> > Unfortunately, this does not work on older releases, and IMHO the\n> > test should be backpatched too. IMHO the CIC \"hack\" is acceptable,\n> > unless there's a better solution that I'm not aware of. \n> \n> Oh, sure, the CIC hack is acceptable for the older branches. I'm just\n> saying that you can use a temp table everywhere, and keep CIC in the\n> old branches and no CIC in master.\n> \n> > This got me thinking though - wouldn't it be better to handle too\n> > large values by treating the range as \"degenerate\" (i.e. store NULL\n> > and consider it as matching all queries), instead of failing the\n> > CREATE INDEX or DML? I find the current behavior rather annoying,\n> > because it depends on the other rows in the page range, not just on\n> > the one row the user deals with. Perhaps this might even be\n> > considered an information leak about the other data. Of course, not\n> > something this patch should deal with. \n> \n> Hmm. Regarding text I remember thinking we could just truncate values\n> (as we do for LIKE, as I recall). I suppose that strategy would work\n> even for bytea.\n> \n> \n> > > > +/*\n> > > > + * This enables de-toasting of index entries. Needed until\n> > > > VACUUM is\n> > > > + * smart enough to rebuild indexes from scratch.\n> > > > + */ \n> > > \n> > > ... 
because, surely, we're now never working on having VACUUM\n> > > rebuild indexes from scratch. In fact, I wonder if we need the\n> > > #define at all. I propose to remove all those #ifdef lines in\n> > > your patch. \n> > \n> > That's a verbatim copy of a comment from indextuple.c. IMHO we\n> > should keep it the same in both places. \n> \n> Sure, if you want to.\n> \n> > > The fix looks good to me. I just added a comment in 0003. \n> > \n> > Thanks. Any opinions on fixing this in existing clusters? Any\n> > better ideas than just giving users the SQL query to list\n> > possibly-affected indexes? \n> \n> None here.\n> \n\nOK, pushed and backpatched all the way back to 9.5. I decided not to\nuse the temporary table - I'd still need to use the CIC trick on older\nreleases, and there were enough differences already.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 7 Nov 2020 00:45:29 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix brin_form_tuple to properly detoast data"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile running some multixact-oriented stress tests, I noticed that\ncommit dee663f7843:\n\n Defer flushing of SLRU files.\n\n Previously, we called fsync() after writing out individual pg_xact,\n pg_multixact and pg_commit_ts pages due to cache pressure, leading to\n regular I/O stalls in user backends and recovery. Collapse requests for\n the same file into a single system call as part of the next checkpoint,\n as we already did for relation files, using the infrastructure developed\n by commit 3eb77eba. This can cause a significant improvement to\n recovery performance, especially when it's otherwise CPU-bound.\n\n ...\n\nseems to trigger this issue:\n\n [17820] LOG: checkpoint starting: wal\n [17820] PANIC: could not fsync file \"pg_multixact/offsets/06E0\": No such file or directory\n [17818] LOG: checkpointer process (PID 17820) was terminated by signal 6: Aborted\n [17818] LOG: terminating any other active server processes\n\nwhich is then followed by this during recovery:\n\n [18599] LOG: redo starts at 1F/FF098138\n [18599] LOG: file \"pg_multixact/offsets/0635\" doesn't exist, reading as zeroes\n [18599] CONTEXT: WAL redo at 1F/FF09A218 for MultiXact/CREATE_ID: 104201060 offset 1687158668 nmembers 3: 2128819 (keysh) 2128823 (keysh) 2128827 (keysh)\n [18599] LOG: file \"pg_multixact/members/7DE3\" doesn't exist, reading as zeroes\n [18599] CONTEXT: WAL redo at 1F/FF09A218 for MultiXact/CREATE_ID: 104201060 offset 1687158668 nmembers 3: 2128819 (keysh) 2128823 (keysh) 2128827 (keysh)\n [18599] LOG: redo done at 2A/D4D8BFB0 system usage: CPU: user: 265.57 s, system: 12.43 s, elapsed: 278.06 s\n [18599] LOG: checkpoint starting: end-of-recovery immediate\n [18599] PANIC: could not fsync file \"pg_multixact/offsets/06E0\": No such file or directory\n [17818] LOG: startup process (PID 18599) was terminated by signal 6: Aborted\n [17818] LOG: aborting startup due to startup process failure\n [17818] LOG: database system is shut down\n\nat 
which point the cluster is kaput, of course.\n\nIt's clearly the fault of dee663f7843 - 4 failures out of 4 attempts on\nthat commit, and after switching to ca7f8e2b86 it goes away.\n\nReproducing it is pretty simple, but it takes a bit of time. Essentially\ndo this:\n\n create table t (a int primary key);\n insert into t select i from generate_series(1,1000) s(i);\n\nand then run\n\n SELECT * FROM t FOR KEY SHARE;\n\nfrom pgbench with many concurrent clients. I do this:\n\n pgbench -n -c 32 -j 8 -f select.sql -T 86400 test\n\nAfter a while (~1h on my machine) the pg_multixact gets over 10GB, which\ntriggers a more aggressive cleanup (per MultiXactMemberFreezeThreshold).\nMy guess is that this discards some of the files, but checkpointer is\nnot aware of that, or something like that. Not sure.\n\nAttached are backtraces from the two crashes - regular and during\nrecovery. Not sure how interesting / helpful that is, it probably does\nnot say much about how we got there.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 4 Nov 2020 02:32:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "PANIC: could not fsync file \"pg_multixact/...\" since commit\n dee663f7843"
},
{
"msg_contents": "On Wed, Nov 4, 2020 at 2:32 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> After a while (~1h on my machine) the pg_multixact gets over 10GB, which\n> triggers a more aggressive cleanup (per MultiXactMemberFreezeThreshold).\n> My guess is that this discards some of the files, but checkpointer is\n> not aware of that, or something like that. Not sure.\n\nUrgh. Thanks. Looks like perhaps the problem is that I have\nRegisterSyncRequest(&tag, SYNC_FORGET_REQUEST, true) in one codepath\nthat unlinks files, but not another. Looking.\n\n\n",
"msg_date": "Wed, 4 Nov 2020 14:49:24 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not fsync file \"pg_multixact/...\" since commit\n dee663f7843"
},
{
"msg_contents": "On Wed, Nov 04, 2020 at 02:49:24PM +1300, Thomas Munro wrote:\n>On Wed, Nov 4, 2020 at 2:32 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> After a while (~1h on my machine) the pg_multixact gets over 10GB, which\n>> triggers a more aggressive cleanup (per MultiXactMemberFreezeThreshold).\n>> My guess is that this discards some of the files, but checkpointer is\n>> not aware of that, or something like that. Not sure.\n>\n>Urgh. Thanks. Looks like perhaps the problem is that I have\n>RegisterSyncRequest(&tag, SYNC_FORGET_REQUEST, true) in one codepath\n>that unlinks files, but not another. Looking.\n\nMaybe. I didn't have time to investigate this more deeply, and it takes\nquite a bit of time to reproduce. I can try again with extra logging or\ntest some proposed fixes, if you give me a patch.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 4 Nov 2020 02:57:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: PANIC: could not fsync file \"pg_multixact/...\" since commit\n dee663f7843"
},
{
"msg_contents": "On Wed, Nov 4, 2020 at 2:57 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Wed, Nov 04, 2020 at 02:49:24PM +1300, Thomas Munro wrote:\n> >On Wed, Nov 4, 2020 at 2:32 PM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >> After a while (~1h on my machine) the pg_multixact gets over 10GB, which\n> >> triggers a more aggressive cleanup (per MultiXactMemberFreezeThreshold).\n> >> My guess is that this discards some of the files, but checkpointer is\n> >> not aware of that, or something like that. Not sure.\n> >\n> >Urgh. Thanks. Looks like perhaps the problem is that I have\n> >RegisterSyncRequest(&tag, SYNC_FORGET_REQUEST, true) in one codepath\n> >that unlinks files, but not another. Looking.\n>\n> Maybe. I didn't have time to investigate this more deeply, and it takes\n> quite a bit of time to reproduce. I can try again with extra logging or\n> test some proposed fixes, if you give me a patch.\n\nI think this should be fixed by doing all unlinking through a common\ncode path. Does this pass your test?",
"msg_date": "Wed, 4 Nov 2020 17:36:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not fsync file \"pg_multixact/...\" since commit\n dee663f7843"
},
{
"msg_contents": "On Wed, Nov 04, 2020 at 05:36:46PM +1300, Thomas Munro wrote:\n>On Wed, Nov 4, 2020 at 2:57 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> On Wed, Nov 04, 2020 at 02:49:24PM +1300, Thomas Munro wrote:\n>> >On Wed, Nov 4, 2020 at 2:32 PM Tomas Vondra\n>> ><tomas.vondra@2ndquadrant.com> wrote:\n>> >> After a while (~1h on my machine) the pg_multixact gets over 10GB, which\n>> >> triggers a more aggressive cleanup (per MultiXactMemberFreezeThreshold).\n>> >> My guess is that this discards some of the files, but checkpointer is\n>> >> not aware of that, or something like that. Not sure.\n>> >\n>> >Urgh. Thanks. Looks like perhaps the problem is that I have\n>> >RegisterSyncRequest(&tag, SYNC_FORGET_REQUEST, true) in one codepath\n>> >that unlinks files, but not another. Looking.\n>>\n>> Maybe. I didn't have time to investigate this more deeply, and it takes\n>> quite a bit of time to reproduce. I can try again with extra logging or\n>> test some proposed fixes, if you give me a patch.\n>\n>I think this should be fixed by doing all unlinking through a common\n>code path. Does this pass your test?\n\nSeems to be working - without the patch it failed after ~1h, now it's\nrunning for more than 2h without a crash. I'll let it run for a few more\nhours (on both machines).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 4 Nov 2020 14:50:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: PANIC: could not fsync file \"pg_multixact/...\" since commit\n dee663f7843"
},
{
"msg_contents": "On 11/4/20 2:50 PM, Tomas Vondra wrote:\n> On Wed, Nov 04, 2020 at 05:36:46PM +1300, Thomas Munro wrote:\n>> On Wed, Nov 4, 2020 at 2:57 PM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com> wrote:\n>>> On Wed, Nov 04, 2020 at 02:49:24PM +1300, Thomas Munro wrote:\n>>> >On Wed, Nov 4, 2020 at 2:32 PM Tomas Vondra\n>>> ><tomas.vondra@2ndquadrant.com> wrote:\n>>> >> After a while (~1h on my machine) the pg_multixact gets over 10GB, \n>>> which\n>>> >> triggers a more aggressive cleanup (per \n>>> MultiXactMemberFreezeThreshold).\n>>> >> My guess is that this discards some of the files, but checkpointer is\n>>> >> not aware of that, or something like that. Not sure.\n>>> >\n>>> >Urgh. Thanks. Looks like perhaps the problem is that I have\n>>> >RegisterSyncRequest(&tag, SYNC_FORGET_REQUEST, true) in one codepath\n>>> >that unlinks files, but not another. Looking.\n>>>\n>>> Maybe. I didn't have time to investigate this more deeply, and it takes\n>>> quite a bit of time to reproduce. I can try again with extra logging or\n>>> test some proposed fixes, if you give me a patch.\n>>\n>> I think this should be fixed by doing all unlinking through a common\n>> code path. Does this pass your test?\n> \n> Seems to be working - without the patch it failed after ~1h, now it's\n> running for more than 2h without a crash. I'll let it run for a few more\n> hours (on both machines).\n> \n\nIt's been running for hours on both machines, without any crashes etc. \nWhile that's not a definitive proof the fix is correct, it certainly \nbehaves differently.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Nov 2020 00:07:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not fsync file \"pg_multixact/...\" since commit\n dee663f7843"
},
{
"msg_contents": "On Thu, Nov 5, 2020 at 12:07 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> It's been running for hours on both machines, without any crashes etc.\n> While that's not a definitive proof the fix is correct, it certainly\n> behaves differently.\n\nThanks! Embarrassed to have missed that. Pushed.\n\n\n",
"msg_date": "Thu, 5 Nov 2020 14:21:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: could not fsync file \"pg_multixact/...\" since commit\n dee663f7843"
}
] |
[
{
"msg_contents": "Hello.\n\nWhile updating a patch, I noticed that the replication slot stats\npatch (9868167500) put some somewhat doubious codes.\n\nIn pgstat_recv_replslot, an assertion like the following exists:\n\n>\tidx = pgstat_replslot_index(msg->m_slotname, !msg->m_drop);\n..\n>\tAssert(idx >= 0 && idx < max_replication_slots);\n\nBut the idx should be 0..(max_replication_slots - 1).\n\n\nIn the same function the following code assumes that the given \"char\n*name\" has the length of NAMEDATALEN. It actually is, but that\nassumption seems a bit bogus. I think it should use strlcpy instead.\n\n\n>pgstat_replslot_index(const char *name, bool create_it)\n...\n>\tmemcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 04 Nov 2020 17:55:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Some doubious code in pgstat.c"
},
{
"msg_contents": "On Wed, Nov 4, 2020 at 2:25 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> While updating a patch, I noticed that the replication slot stats\n> patch (9868167500) put some somewhat doubious codes.\n>\n> In pgstat_recv_replslot, an assertion like the following exists:\n>\n> > idx = pgstat_replslot_index(msg->m_slotname, !msg->m_drop);\n> ..\n> > Assert(idx >= 0 && idx < max_replication_slots);\n>\n> But the idx should be 0..(max_replication_slots - 1).\n>\n\nRight.\n\n>\n> In the same function the following code assumes that the given \"char\n> *name\" has the length of NAMEDATALEN. It actually is, but that\n> assumption seems a bit bogus. I think it should use strlcpy instead.\n>\n\nAgreed.\n\nYour patch looks good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 4 Nov 2020 15:20:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some doubious code in pgstat.c"
},
{
"msg_contents": "On Wed, Nov 4, 2020 at 6:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 4, 2020 at 2:25 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > Hello.\n> >\n> > While updating a patch, I noticed that the replication slot stats\n> > patch (9868167500) put some somewhat doubious codes.\n> >\n> > In pgstat_recv_replslot, an assertion like the following exists:\n> >\n> > > idx = pgstat_replslot_index(msg->m_slotname, !msg->m_drop);\n> > ..\n> > > Assert(idx >= 0 && idx < max_replication_slots);\n> >\n> > But the idx should be 0..(max_replication_slots - 1).\n> >\n>\n> Right.\n>\n> >\n> > In the same function the following code assumes that the given \"char\n> > *name\" has the length of NAMEDATALEN. It actually is, but that\n> > assumption seems a bit bogus. I think it should use strlcpy instead.\n> >\n>\n> Agreed.\n\n+1\n\nThe commit uses memcpy in the same way in other places too, for\ninstance in pgstat_report_replslot_drop(). Should we fix all of them?\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 4 Nov 2020 22:49:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some doubious code in pgstat.c"
},
{
"msg_contents": "At Wed, 4 Nov 2020 22:49:57 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> On Wed, Nov 4, 2020 at 6:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Nov 4, 2020 at 2:25 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > Hello.\n> > >\n> > > While updating a patch, I noticed that the replication slot stats\n> > > patch (9868167500) put some somewhat doubious codes.\n> > >\n> > > In pgstat_recv_replslot, an assertion like the following exists:\n> > >\n> > > > idx = pgstat_replslot_index(msg->m_slotname, !msg->m_drop);\n> > > ..\n> > > > Assert(idx >= 0 && idx < max_replication_slots);\n> > >\n> > > But the idx should be 0..(max_replication_slots - 1).\n> > >\n> >\n> > Right.\n> >\n> > >\n> > > In the same function the following code assumes that the given \"char\n> > > *name\" has the length of NAMEDATALEN. It actually is, but that\n> > > assumption seems a bit bogus. I think it should use strlcpy instead.\n> > >\n> >\n> > Agreed.\n> \n> +1\n> \n> The commit uses memcpy in the same way in other places too, for\n> instance in pgstat_report_replslot_drop(). Should we fix all of them?\n\nAbsolutely. the same is seen at several places. Please find the\nattached.\n\nAs another issue, just replace memcpy with strlcpy makes compiler\ncomplain of type mismatch, as the first paramter to memcpy had an\nneedless \"&\" operator. I removed it in this patch.\n\n(&msg.m_slotname is a \"char (*)[NAMEDATALEN]\", not a \"char *\".)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 05 Nov 2020 11:18:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some doubious code in pgstat.c"
},
{
"msg_contents": "On Thu, Nov 5, 2020 at 11:18 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 4 Nov 2020 22:49:57 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > On Wed, Nov 4, 2020 at 6:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Nov 4, 2020 at 2:25 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > Hello.\n> > > >\n> > > > While updating a patch, I noticed that the replication slot stats\n> > > > patch (9868167500) put some somewhat doubious codes.\n> > > >\n> > > > In pgstat_recv_replslot, an assertion like the following exists:\n> > > >\n> > > > > idx = pgstat_replslot_index(msg->m_slotname, !msg->m_drop);\n> > > > ..\n> > > > > Assert(idx >= 0 && idx < max_replication_slots);\n> > > >\n> > > > But the idx should be 0..(max_replication_slots - 1).\n> > > >\n> > >\n> > > Right.\n> > >\n> > > >\n> > > > In the same function the following code assumes that the given \"char\n> > > > *name\" has the length of NAMEDATALEN. It actually is, but that\n> > > > assumption seems a bit bogus. I think it should use strlcpy instead.\n> > > >\n> > >\n> > > Agreed.\n> >\n> > +1\n> >\n> > The commit uses memcpy in the same way in other places too, for\n> > instance in pgstat_report_replslot_drop(). Should we fix all of them?\n>\n> Absolutely. the same is seen at several places. Please find the\n> attached.\n>\n> As another issue, just replace memcpy with strlcpy makes compiler\n> complain of type mismatch, as the first paramter to memcpy had an\n> needless \"&\" operator. I removed it in this patch.\n>\n> (&msg.m_slotname is a \"char (*)[NAMEDATALEN]\", not a \"char *\".)\n>\n\nThe patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 5 Nov 2020 13:14:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some doubious code in pgstat.c"
},
{
"msg_contents": "On Thu, Nov 5, 2020 at 9:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Nov 5, 2020 at 11:18 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > As another issue, just replace memcpy with strlcpy makes compiler\n> > complain of type mismatch, as the first paramter to memcpy had an\n> > needless \"&\" operator. I removed it in this patch.\n> >\n> > (&msg.m_slotname is a \"char (*)[NAMEDATALEN]\", not a \"char *\".)\n> >\n>\n> The patch looks good to me.\n>\n\nLGTM as well but the proposed commit message seems to be a bit\nunclear. How about something like this:\n\"Use strlcpy instead of memcpy for copying the slot name in pgstat.c.\n\nThere is no outright bug here but it is better to be consistent with\nthe usage at other places in the same file. In the passing, fix a wrong\nAssertion in pgstat_recv_replslot.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 5 Nov 2020 11:48:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some doubious code in pgstat.c"
},
{
"msg_contents": "At Thu, 5 Nov 2020 11:48:24 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, Nov 5, 2020 at 9:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Nov 5, 2020 at 11:18 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > As another issue, just replace memcpy with strlcpy makes compiler\n> > > complain of type mismatch, as the first paramter to memcpy had an\n> > > needless \"&\" operator. I removed it in this patch.\n> > >\n> > > (&msg.m_slotname is a \"char (*)[NAMEDATALEN]\", not a \"char *\".)\n> > >\n> >\n> > The patch looks good to me.\n> >\n> \n> LGTM as well but the proposed commit message seems to be a bit\n> unclear. How about something like this:\n> \"Use strlcpy instead of memcpy for copying the slot name in pgstat.c.\n> \n> There is no outright bug here but it is better to be consistent with\n> the usage at other places in the same file. In the passing, fix a wrong\n> Assertion in pgstat_recv_replslot.\"\n\nLooks better, thanks.\n\nBy the way, I noticed the following sequence.\n\npgstat.c: 3204\n3204>\tlbeentry.st_appname[0] = '\\0';\n3205>\tif (MyProcPort && MyProcPort->remote_hostname)\n3206>\t\tstrlcpy(lbeentry.st_clienthostname, MyProcPort->remote_hostname,\n3207>\t\t\t\tNAMEDATALEN);\n3208>\telse\n3209>\t\tlbeentry.st_clienthostname[0] = '\\0';\n3210>\tlbeentry.st_activity_raw[0] = '\\0';\n3211>\t/* Also make sure the last byte in each string area is always 0 */\n3212>\tlbeentry.st_appname[NAMEDATALEN - 1] = '\\0';\n3213>\tlbeentry.st_clienthostname[NAMEDATALEN - 1] = '\\0';\n3214>\tlbeentry.st_activity_raw[pgstat_track_activity_query_size - 1] = '\\0';\n\n\nThe strlcpy at the line 3206 makes sure that st_clienthostname is\nnull-terminated so it's nonsense to do line 3213. st_appname and\nst_activity_raw are set to zero-length string.\n\nIs there any point in setting terminating nul to them?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Thu, 05 Nov 2020 17:43:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some doubious code in pgstat.c"
},
{
"msg_contents": "On Thu, Nov 5, 2020 at 2:13 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 5 Nov 2020 11:48:24 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Thu, Nov 5, 2020 at 9:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Nov 5, 2020 at 11:18 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > As another issue, just replace memcpy with strlcpy makes compiler\n> > > > complain of type mismatch, as the first paramter to memcpy had an\n> > > > needless \"&\" operator. I removed it in this patch.\n> > > >\n> > > > (&msg.m_slotname is a \"char (*)[NAMEDATALEN]\", not a \"char *\".)\n> > > >\n> > >\n> > > The patch looks good to me.\n> > >\n> >\n> > LGTM as well but the proposed commit message seems to be a bit\n> > unclear. How about something like this:\n> > \"Use strlcpy instead of memcpy for copying the slot name in pgstat.c.\n> >\n> > There is no outright bug here but it is better to be consistent with\n> > the usage at other places in the same file. In the passing, fix a wrong\n> > Assertion in pgstat_recv_replslot.\"\n>\n> Looks better, thanks.\n>\n> By the way, I noticed the following sequence.\n>\n> pgstat.c: 3204\n> 3204> lbeentry.st_appname[0] = '\\0';\n> 3205> if (MyProcPort && MyProcPort->remote_hostname)\n> 3206> strlcpy(lbeentry.st_clienthostname, MyProcPort->remote_hostname,\n> 3207> NAMEDATALEN);\n> 3208> else\n> 3209> lbeentry.st_clienthostname[0] = '\\0';\n> 3210> lbeentry.st_activity_raw[0] = '\\0';\n> 3211> /* Also make sure the last byte in each string area is always 0 */\n> 3212> lbeentry.st_appname[NAMEDATALEN - 1] = '\\0';\n> 3213> lbeentry.st_clienthostname[NAMEDATALEN - 1] = '\\0';\n> 3214> lbeentry.st_activity_raw[pgstat_track_activity_query_size - 1] = '\\0';\n>\n>\n> The strlcpy at the line 3206 makes sure that st_clienthostname is\n> null-terminated so it's nonsense to do line 3213. 
st_appname and\n> st_activity_raw are set to zero-length string.\n>\n> Is there any point in setting terminating nul to them?\n>\n\nI also don't see any reason for the same except being extra careful.\nThis is not directly related to this patch so I think we can leave\nthis or if you want you can discuss this in a separate thread. It\nseems to be introduced in commit 85ccb689.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 5 Nov 2020 14:50:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some doubious code in pgstat.c"
},
{
"msg_contents": "On Thu, Nov 5, 2020 at 2:13 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 5 Nov 2020 11:48:24 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Thu, Nov 5, 2020 at 9:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Nov 5, 2020 at 11:18 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > As another issue, just replace memcpy with strlcpy makes compiler\n> > > > complain of type mismatch, as the first paramter to memcpy had an\n> > > > needless \"&\" operator. I removed it in this patch.\n> > > >\n> > > > (&msg.m_slotname is a \"char (*)[NAMEDATALEN]\", not a \"char *\".)\n> > > >\n> > >\n> > > The patch looks good to me.\n> > >\n> >\n> > LGTM as well but the proposed commit message seems to be a bit\n> > unclear. How about something like this:\n> > \"Use strlcpy instead of memcpy for copying the slot name in pgstat.c.\n> >\n> > There is no outright bug here but it is better to be consistent with\n> > the usage at other places in the same file. In the passing, fix a wrong\n> > Assertion in pgstat_recv_replslot.\"\n>\n> Looks better, thanks.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 6 Nov 2020 16:40:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some doubious code in pgstat.c"
},
{
"msg_contents": "At Fri, 6 Nov 2020 16:40:39 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, Nov 5, 2020 at 2:13 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 5 Nov 2020 11:48:24 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > On Thu, Nov 5, 2020 at 9:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, Nov 5, 2020 at 11:18 AM Kyotaro Horiguchi\n> > > > <horikyota.ntt@gmail.com> wrote:\n> > > > > As another issue, just replace memcpy with strlcpy makes compiler\n> > > > > complain of type mismatch, as the first paramter to memcpy had an\n> > > > > needless \"&\" operator. I removed it in this patch.\n> > > > >\n> > > > > (&msg.m_slotname is a \"char (*)[NAMEDATALEN]\", not a \"char *\".)\n> > > > >\n> > > >\n> > > > The patch looks good to me.\n> > > >\n> > >\n> > > LGTM as well but the proposed commit message seems to be a bit\n> > > unclear. How about something like this:\n> > > \"Use strlcpy instead of memcpy for copying the slot name in pgstat.c.\n> > >\n> > > There is no outright bug here but it is better to be consistent with\n> > > the usage at other places in the same file. In the passing, fix a wrong\n> > > Assertion in pgstat_recv_replslot.\"\n> >\n> > Looks better, thanks.\n> >\n> \n> Pushed!\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 09 Nov 2020 11:48:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some doubious code in pgstat.c"
}
]
[
{
"msg_contents": "\nGreetings Stephen,\n\nI saw up 410MB/s for a few seconds with this patch on NVMe, and that's \nhuge ~5.2x improvement which is amazing for a such simple patch.\n\nThe system and data was identical like last time, so results are directly comparable \nto the previous post. The only change is that I've applied Yours patch on top of \nREL12_STABLE as this is cluster where I was having original data. I think it deserves\n it's own commitfest entry, right? Or do we combine those two?\n\nEach test was conducted with \n1. alter system set effective_io_concurrency=$N; \n2. echo 3 > /proc/sys/vm/drop_caches; 3x sync; /etc/init.d/postgresql-12 restart;\n3. analyze verbose t;\n4. the measurements were observed by pidstat -d and iostat -dxm.\n\nno readahead (blockdev --setra 0), cold cache:\neffective_io_concurrency=64; => ~26MB/s\neffective_io_concurrency=32; => ~29MB/s\neffective_io_concurrency=8; => ~31MB/s\neffective_io_concurrency=4; => ~32MB/s\neffective_io_concurrency=1; => ~32MB/s\neffective_io_concurrency=0; => ~32MB/s\n\nreadahead=256 (128kB, Linux defaults), cold cache:\neffective_io_concurrency=0; => ~60..68MB/s (analyze total elapsed time 3min 22s , baseline)\neffective_io_concurrency=1; => ~62MB/s\neffective_io_concurrency=4; => ~370MB/s\neffective_io_concurrency=8; => ~380..410MB/s <-- wow! 
up to 52k read IOPS, CPU util 100%\neffective_io_concurrency=16; => ~320..390MB/s (analyze total elapsed time ~39s)\n\nI think one needs to be sure to restart, so that autovacuum \nworkers get the new effective_io_concurrency, maybe that's a \ndocumentation thing only.\n\nI have just one question, with this patch the strace I/O of analyze \nlook like below:\n\npread(72, ..., 8192, 738181120) = 8192\nfadvise64(72, 738500608, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 738607104, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 738754560, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\npread(72, \"..., 8192, 738500608) = 8192\nfadvise64(72, 738607104, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 738754560, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739934208, 8192, POSIX_FADV_WILLNEED) = 0\npread(72, .., 8192, 738607104) = 8192\nfadvise64(72, 738754560, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739934208, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 740122624, 8192, POSIX_FADV_WILLNEED) = 0\npread(72, ..., 8192, 738754560) = 8192\nfadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739115008, 8192, 
POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 739934208, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(72, 740122624, 8192, POSIX_FADV_WILLNEED) = 0\n\nIf you highlight \"738754560\" in the output it appears to duplicate the \nsyscalls issued until it preads() - in case of \"738754560\" offset it was \nasked for 3 times. Also I wouldn't imagine in wildest dreams that \nposix_fadvise(POSIX_FADV_WILLNEED) is such a cheap syscall.\n\n-J.\n\n----------------------------------------------------------------------------------------------------------------\n\nFrom: Stephen Frost\nSent: Tuesday, November 3, 2020 6:47 PM\nTo: Jakub Wartak\nCc: pgsql-hackers\nSubject: Re: automatic analyze: readahead - add \"IO read time\" log message \n\nGreetings,\n\n* Jakub Wartak (Jakub.Wartak@tomtom.com) wrote:\n> >Interesting that you weren't seeing any benefit to disabling readahead.\n> \n> I've got some free minutes and I have repeated the exercise in more realistic\n> and strict environment that previous one to conclude that the current situation is preferable:\n\nThanks for spending time on this!\n\n> Analyzed table was having 171GB (as reported by \\dt+) + indexes: 35GB, 147GB, 35GB, 65GB (as reported by \\di+)\n> Linux kernel 4.14.x, 2x NVME under dm-0 (it might matter as /dev/dm-0 might is different layer and might have different storage settings), VG on top of dm-0, LV with stripe-size 8kB, ext4.\n> s_b=128MB, RAM=128GB (- ~30GB which were reserved for HugePages), typical output of PgSQL12: \n> INFO: \"x\": scanned 1500000 of 22395442 pages, containing 112410444 live rows and 0 dead rows; 1500000 rows in sample, 1678321053 estimated total rows\n> \n> Hot VFS cache:\n> Run0: Defaults, default RA on dm-1=256 (*512=128kB), most of the time is spent heapam_scan_analyze_next_block() -> .. 
-> pread() which causes ~70..80MB/s as reported by pidstat, maximum 22-25% CPU, ~8k IOPS in iostat with average request size per IO=25 sectors(*512/1024 = ~12kB), readahead on, hot caches, total elapsed ~3m\n> Run1: Defaults, similar as above (hot VFS cache), total elapsed 2m:50s\n> Run2: Defaults, similar as above (hot VFS cache), total elapsed 2m:42s\n> Run3: Defaults, miliaria as above (hot VFS cache), total elapsed 2m:40s\n> \n> No VFS cache:\n> Run4: echo 3 > drop_caches, still with s_b=128MB: maximum 18-23% CPU, ~70MB/s read, ondemand_readahead visible in perf, total elapsed 3m30s\n> Run5: echo 3 > drop_caches, still with s_b=128MB: same as above, total elapsed 3m29s\n> Run6: echo 3 > drop_caches, still with s_b=128MB: same as above, total elapsed 3m28s\n> \n> No VFS cache, readahead off:\n> Run7: echo 3 > drop_caches, still with s_b=128MB, blockdev --setra 0 /dev/dm-0: reads at 33MB/s, ~13% CPU, 8.7k read IOPS @ avgrq-sz = 11 sectors (*512=5.5kB), total elapsed 5m59s\n> Run8: echo 3 > drop_caches, still with s_b=128MB, blockdev --setra 0 /dev/dm-0: as above, double-confirmed no readaheads [ pread()->generic_file_read_iter()->ext4_mpage_readpages()-> bio.. ], total elapsed 5m56s\n> Run9: echo 3 > drop_caches, still with s_b=128MB, blockdev --setra 0 /dev/dm-0: as above, total elapsed 5m55s\n\n[ ... ]\n\n> >The VACUUM case is going to be complicated by what's in the visibility\n> >map. (..) 
\n> \n> After observing the ANALYZE readahead behavior benefit I've abandoned\n> the case of testing much more advanced VACUUM processing, clearly Linux \n> read-ahead is beneficial in even simple cases.\n\nThis seems to be indicating that while the Linux kernel may end up\nreading pages we don't end up needing, it's much more often the case\nthat it's ending up reading *some* pages that we do need, and that's\nhappening often enough that it more than makes up for the extra reads\nbeing done.\n\nInstead of having these guessing games between the kernel and what PG's\ndoing, however, we could potentially do better using posix_fadvise() to\ntell the kernel, up front, exactly what blocks we are going to ask for,\nand perhaps that would end up improving things.\n\nAttached is a very much rough-n-ready patch for doing that, using\neffective_io_concurrency to control how many blocks to pre-fetch for\nANALYZE (0 meaning 'none'). If you've got a chance to test with\ndifferent settings for effective_io_concurrency with the patch applied\nto see what impact posix_fadvise() has on these ANALYZE runs, that would\nbe very cool to see.\n\nGoing between effective_cache_size = 0 and effective_cache_size = 10\nwith this patch, in some very quick testing on a laptop NVMe, while\nmaking sure to drop caches and restart PG in between to clear out\nshared_buffers, definitely shows that prefetching done this way is an\nimprovement over letting the kernel's normal read ahead handle it.\n\n> >> My only idea would be that a lot of those blocks could be read asynchronously in batches (AIO) with POSIX_FADV_RANDOM issued on block-range before, so maybe the the optimization is possible, but not until we'll have AIO ;)\n> >\n> > (..)AIO is a whole other animal that's been discussed off and on\n> >around here but it's a much larger and more invasive change than just\n> >calling posix_fadvise().\n> \n> Yes, I'm aware and I'm keeping my fingers crossed that maybe some day.... 
\n\nI don't think we should throw out the idea of using PrefetchBuffer()\nhere. \"Real\" AIO would certainly be good to have one of these days, but\nuntil then, posix_fadvise() could net us some of those gains in the\nmeantime.\n\n> The ANALYZE just seem fit to be natural candidate to use it. The only easy chance \n> of acceleration of stats gathering - at least to me and enduser point of view - \n> is to have more parallel autoanalyze workers running to drive more I/O concurrency \n> (by e.g. partitioning the data), both in readahead and non-readahead scenarios. \n> Which is a pity because 70-80% of such process sits idle. The readahead might read \n> 10x more unnecessary data, but pread() doesn't have to wait. <speculation>Once AIO \n> would be it could throw thousands of requests without readahead and achieve much \n> better efficiency probably</speculation>\n\nActual parallel ANALYZE could be interesting, of course, but that's also\nquite a bit more complicated to make happen, and even in that case it\nwould seem like we'd probably want to use posix_fadvise() anyway since\nit'd hopefully help.\n\n> I hope the previous simple patch goes into master and helps other people understand\n> the picture more easily.\n\nI agree that we should also be providing users with more information\nabout what's going on, but maybe we can do that and improve things by\nleveraging posix_fadvise().\n\nThanks,\n\nStephen\n\n",
"msg_date": "Wed, 4 Nov 2020 09:07:59 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": true,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "On Wed, Nov 04, 2020 at 09:07:59AM +0000, Jakub Wartak wrote:\n>\n>Greetings Stephen,\n>\n>I saw up 410MB/s for a few seconds with this patch on NVMe, and that's\n>huge ~5.2x improvement which is amazing for a such simple patch.\n>\n>The system and data was identical like last time, so results are directly comparable\n>to the previous post. The only change is that I've applied Yours patch on top of\n>REL12_STABLE as this is cluster where I was having original data. I think it deserves\n> it's own commitfest entry, right? Or do we combine those two?\n>\n>Each test was conducted with\n>1. alter system set effective_io_concurrency=$N;\n>2. echo 3 > /proc/sys/vm/drop_caches; 3x sync; /etc/init.d/postgresql-12 restart;\n>3. analyze verbose t;\n>4. the measurements were observed by pidstat -d and iostat -dxm.\n>\n>no readahead (blockdev --setra 0), cold cache:\n>effective_io_concurrency=64; => ~26MB/s\n>effective_io_concurrency=32; => ~29MB/s\n>effective_io_concurrency=8; => ~31MB/s\n>effective_io_concurrency=4; => ~32MB/s\n>effective_io_concurrency=1; => ~32MB/s\n>effective_io_concurrency=0; => ~32MB/s\n>\n>readahead=256 (128kB, Linux defaults), cold cache:\n>effective_io_concurrency=0; => ~60..68MB/s (analyze total elapsed time 3min 22s , baseline)\n>effective_io_concurrency=1; => ~62MB/s\n>effective_io_concurrency=4; => ~370MB/s\n>effective_io_concurrency=8; => ~380..410MB/s <-- wow! up to 52k read IOPS, CPU util 100%\n>effective_io_concurrency=16; => ~320..390MB/s (analyze total elapsed time ~39s)\n>\n\nThanks for the testing, those are indeed quite promising numbers for\nsuch a simple patch. 
I wonder how would this behave on a different\nstorage - I'll see if I can do some testing, but I guess it should be\nthe same as for other places that do prefetching.\n\n>I think one needs to be sure to restart, so that autovacuum\n>workers get the new effective_io_concurrency, maybe that's a\n>documentation thing only.\n>\n>I have just one question, with this patch the strace I/O of analyze\n>look like below:\n>\n>pread(72, ..., 8192, 738181120) = 8192\n>fadvise64(72, 738500608, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 738607104, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 738754560, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\n>pread(72, \"..., 8192, 738500608) = 8192\n>fadvise64(72, 738607104, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 738754560, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739934208, 8192, POSIX_FADV_WILLNEED) = 0\n>pread(72, .., 8192, 738607104) = 8192\n>fadvise64(72, 738754560, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739934208, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 740122624, 8192, POSIX_FADV_WILLNEED) = 0\n>pread(72, ..., 8192, 738754560) = 8192\n>fadvise64(72, 
738762752, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 739934208, 8192, POSIX_FADV_WILLNEED) = 0\n>fadvise64(72, 740122624, 8192, POSIX_FADV_WILLNEED) = 0\n>\n>If you highlight \"738754560\" in the output it appears to duplicate the\n>syscalls issued until it preads() - in case of \"738754560\" offset it was\n>asked for 3 times. Also I wouldn't imagine in wildest dreams that\n>posix_fadvise(POSIX_FADV_WILLNEED) is such a cheap syscall.\n>\n\nIMHO that'a a bug in the patch, which always tries to prefetch all\n\"future\" blocks, including those that were already prefetched. It\nprobably needs to do something like bitmap heap scan where we track\nwhat was already prefetched and only issue the new blocks.\n\nBTW it seems your e-mail client does something funny, stripping the\n\"references\" headers, which breaks threading and makes following the\ndiscussion very hard (and will confuse the CF app too). If you look at\npgsql-hackers archives, each of your responses starts a new thread.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 4 Nov 2020 15:19:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Wed, Nov 04, 2020 at 09:07:59AM +0000, Jakub Wartak wrote:\n> >I saw up 410MB/s for a few seconds with this patch on NVMe, and that's\n> >huge ~5.2x improvement which is amazing for a such simple patch.\n\nNice!\n\n> >The system and data was identical like last time, so results are directly comparable\n> >to the previous post. The only change is that I've applied Yours patch on top of\n> >REL12_STABLE as this is cluster where I was having original data. I think it deserves\n> >it's own commitfest entry, right? Or do we combine those two?\n> >\n> >Each test was conducted with\n> >1. alter system set effective_io_concurrency=$N;\n> >2. echo 3 > /proc/sys/vm/drop_caches; 3x sync; /etc/init.d/postgresql-12 restart;\n> >3. analyze verbose t;\n> >4. the measurements were observed by pidstat -d and iostat -dxm.\n> >\n> >no readahead (blockdev --setra 0), cold cache:\n> >effective_io_concurrency=64; => ~26MB/s\n> >effective_io_concurrency=32; => ~29MB/s\n> >effective_io_concurrency=8; => ~31MB/s\n> >effective_io_concurrency=4; => ~32MB/s\n> >effective_io_concurrency=1; => ~32MB/s\n> >effective_io_concurrency=0; => ~32MB/s\n\nI'm speculating here, but it seems likely that by disabling read-ahead,\nthe posix_fadvise() calls are basically getting ignored and that's why\nthere's not much difference here...?\n\n> >readahead=256 (128kB, Linux defaults), cold cache:\n> >effective_io_concurrency=0; => ~60..68MB/s (analyze total elapsed time 3min 22s , baseline)\n> >effective_io_concurrency=1; => ~62MB/s\n> >effective_io_concurrency=4; => ~370MB/s\n> >effective_io_concurrency=8; => ~380..410MB/s <-- wow! up to 52k read IOPS, CPU util 100%\n> >effective_io_concurrency=16; => ~320..390MB/s (analyze total elapsed time ~39s)\n\nVery nice.\n\n> Thanks for the testing, those are indeed quite promising numbers for\n> such a simple patch. 
I wonder how would this behave on a different\n> storage - I'll see if I can do some testing, but I guess it should be\n> the same as for other places that do prefetching.\n\nThat would certainly be cool.\n\n> >I think one needs to be sure to restart, so that autovacuum\n> >workers get the new effective_io_concurrency, maybe that's a\n> >documentation thing only.\n> >\n> >I have just one question, with this patch the strace I/O of analyze\n> >look like below:\n> >\n> >pread(72, ..., 8192, 738181120) = 8192\n> >fadvise64(72, 738500608, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 738607104, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 738754560, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\n> >pread(72, \"..., 8192, 738500608) = 8192\n> >fadvise64(72, 738607104, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 738754560, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739934208, 8192, POSIX_FADV_WILLNEED) = 0\n> >pread(72, .., 8192, 738607104) = 8192\n> >fadvise64(72, 738754560, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739934208, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 
740122624, 8192, POSIX_FADV_WILLNEED) = 0\n> >pread(72, ..., 8192, 738754560) = 8192\n> >fadvise64(72, 738762752, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739115008, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739155968, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739385344, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739696640, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 739934208, 8192, POSIX_FADV_WILLNEED) = 0\n> >fadvise64(72, 740122624, 8192, POSIX_FADV_WILLNEED) = 0\n> >\n> >If you highlight \"738754560\" in the output it appears to duplicate the\n> >syscalls issued until it preads() - in case of \"738754560\" offset it was\n> >asked for 3 times. Also I wouldn't imagine in wildest dreams that\n> >posix_fadvise(POSIX_FADV_WILLNEED) is such a cheap syscall.\n> \n> IMHO that'a a bug in the patch, which always tries to prefetch all\n> \"future\" blocks, including those that were already prefetched. It\n> probably needs to do something like bitmap heap scan where we track\n> what was already prefetched and only issue the new blocks.\n\nYeah, this was just something quickly thrown together to see if it'd\nhelp. I'll clean it up by, as you say, doing something similar to what\nwe do with bitmap heap scan, by having a separate BlockSampler that's\nset up early on and then have the pre-fetching keep pace with the actual\nscan instead of posix_fadvise()'ing the same pages over and over.\n\nI'll also look at doing something for VACUUM too, as that could also\nbenefit from posix_fadvise() calls, when we're skipping over all-frozen\npages.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 4 Nov 2020 09:30:41 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> >If you highlight \"738754560\" in the output it appears to duplicate the\n> >syscalls issued until it preads() - in case of \"738754560\" offset it was\n> >asked for 3 times. Also I wouldn't imagine in wildest dreams that\n> >posix_fadvise(POSIX_FADV_WILLNEED) is such a cheap syscall.\n> \n> IMHO that'a a bug in the patch, which always tries to prefetch all\n> \"future\" blocks, including those that were already prefetched. It\n> probably needs to do something like bitmap heap scan where we track\n> what was already prefetched and only issue the new blocks.\n\nUpdated patch attached which:\n\n- Starts out by pre-fetching the first effective_io_concurrency number\n of blocks we are going to want, hopefully making it so the kernel will\n trust our fadvise's over its own read-ahead, right from the start.\n- Makes sure the prefetch iterator is pushed forward whenever the\n regular interator is moved forward.\n- After each page read, issues a prefetch, similar to BitmapHeapScan, to\n hopefully avoiding having the prefetching get in the way of the\n regular i/o.\n- Added some comments, ran pgindent, added a commit message.\n\nI do think we should also include patch that Jakub wrote previously\nwhich adds information about the read rate of ANALYZE.\n\nI'll look at integrating that into this patch and then look at a new\npatch to do something similar for VACUUM in a bit.\n\nIf you're doing further benchmarking of ANALYZE though, this would\nprobably be the better patch to use. Certainly improved performance\nhere quite a bit with effective_io_concurrency set to 16.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 4 Nov 2020 11:02:14 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Hi,\n\nOn 11/4/20 5:02 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>>> If you highlight \"738754560\" in the output it appears to duplicate the\n>>> syscalls issued until it preads() - in case of \"738754560\" offset it was\n>>> asked for 3 times. Also I wouldn't imagine in wildest dreams that\n>>> posix_fadvise(POSIX_FADV_WILLNEED) is such a cheap syscall.\n>>\n>> IMHO that'a a bug in the patch, which always tries to prefetch all\n>> \"future\" blocks, including those that were already prefetched. It\n>> probably needs to do something like bitmap heap scan where we track\n>> what was already prefetched and only issue the new blocks.\n> \n> Updated patch attached which:\n> \n> - Starts out by pre-fetching the first effective_io_concurrency number\n> of blocks we are going to want, hopefully making it so the kernel will\n> trust our fadvise's over its own read-ahead, right from the start.\n> - Makes sure the prefetch iterator is pushed forward whenever the\n> regular interator is moved forward.\n> - After each page read, issues a prefetch, similar to BitmapHeapScan, to\n> hopefully avoiding having the prefetching get in the way of the\n> regular i/o.\n> - Added some comments, ran pgindent, added a commit message.\n> \n\nNice, that was quick ;-)\n\n> I do think we should also include patch that Jakub wrote previously\n> which adds information about the read rate of ANALYZE.\n> \n\n+1\n\n> I'll look at integrating that into this patch and then look at a new\n> patch to do something similar for VACUUM in a bit.\n> \n\n+1\n\n> If you're doing further benchmarking of ANALYZE though, this would\n> probably be the better patch to use. Certainly improved performance\n> here quite a bit with effective_io_concurrency set to 16.\n> \n\nYeah. 
I'd expect this to be heavily dependent on hardware.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Nov 2020 17:46:32 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On 11/4/20 5:02 PM, Stephen Frost wrote:\n> >* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> >>>If you highlight \"738754560\" in the output it appears to duplicate the\n> >>>syscalls issued until it preads() - in case of \"738754560\" offset it was\n> >>>asked for 3 times. Also I wouldn't imagine in wildest dreams that\n> >>>posix_fadvise(POSIX_FADV_WILLNEED) is such a cheap syscall.\n> >>\n> >>IMHO that'a a bug in the patch, which always tries to prefetch all\n> >>\"future\" blocks, including those that were already prefetched. It\n> >>probably needs to do something like bitmap heap scan where we track\n> >>what was already prefetched and only issue the new blocks.\n> >\n> >Updated patch attached which:\n> >\n> >- Starts out by pre-fetching the first effective_io_concurrency number\n> > of blocks we are going to want, hopefully making it so the kernel will\n> > trust our fadvise's over its own read-ahead, right from the start.\n> >- Makes sure the prefetch iterator is pushed forward whenever the\n> > regular interator is moved forward.\n> >- After each page read, issues a prefetch, similar to BitmapHeapScan, to\n> > hopefully avoiding having the prefetching get in the way of the\n> > regular i/o.\n> >- Added some comments, ran pgindent, added a commit message.\n> \n> Nice, that was quick ;-)\n\n:)\n\n> >I do think we should also include patch that Jakub wrote previously\n> >which adds information about the read rate of ANALYZE.\n> \n> +1\n\nAttached is an updated patch which updates the documentation and\nintegrates Jakub's initial work on improving the logging around\nauto-analyze (and I made the logging in auto-vacuum more-or-less match\nit).\n\n> >I'll look at integrating that into this patch and then look at a new\n> >patch to do something similar for VACUUM in a bit.\n> \n> +1\n\nI spent some time looking into this but it's a bit complicated.. 
For\nsome sound reasons, VACUUM will avoid skipping through a table when\nthere's only a few pages that it could skip (not skipping allows us to\nmove forward the relfrozenxid). That said, perhaps we could start doing\nprefetching once we've decided that we're skipping. We'd need to think\nabout if we have to worry about the VM changing between the pre-fetching\nand the time when we're actually going to ask for the page.. I don't\n*think* that's an issue because only VACUUM would be changing the pages\nto be all-frozen or all-visible, and so if we see a page that isn't one\nof those then we're going to want to visit that page and that's not\ngoing to change, and we probably don't need to care about a page that\nused to be all-frozen and now isn't during this run- but if the prefetch\nwent ahead and got page 10, and now page 8 is not-all-frozen and the\nactual scan is at page 5, then maybe it wants page 8 next and that isn't\nwhat we pre-fetched...\n\nAnyhow, all-in-all, definitely more complicated and probably best\nconsidered and discussed independently.\n\n> >If you're doing further benchmarking of ANALYZE though, this would\n> >probably be the better patch to use. Certainly improved performance\n> >here quite a bit with effective_io_concurrency set to 16.\n> \n> Yeah. I'd expect this to be heavily dependent on hardware.\n\nSure, I agree with that too.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 9 Nov 2020 13:06:44 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "On 11/9/20 7:06 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> On 11/4/20 5:02 PM, Stephen Frost wrote:\n>>> * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>>>>> If you highlight \"738754560\" in the output it appears to duplicate the\n>>>>> syscalls issued until it preads() - in case of \"738754560\" offset it was\n>>>>> asked for 3 times. Also I wouldn't imagine in wildest dreams that\n>>>>> posix_fadvise(POSIX_FADV_WILLNEED) is such a cheap syscall.\n>>>>\n>>>> IMHO that'a a bug in the patch, which always tries to prefetch all\n>>>> \"future\" blocks, including those that were already prefetched. It\n>>>> probably needs to do something like bitmap heap scan where we track\n>>>> what was already prefetched and only issue the new blocks.\n>>>\n>>> Updated patch attached which:\n>>>\n>>> - Starts out by pre-fetching the first effective_io_concurrency number\n>>> of blocks we are going to want, hopefully making it so the kernel will\n>>> trust our fadvise's over its own read-ahead, right from the start.\n>>> - Makes sure the prefetch iterator is pushed forward whenever the\n>>> regular interator is moved forward.\n>>> - After each page read, issues a prefetch, similar to BitmapHeapScan, to\n>>> hopefully avoiding having the prefetching get in the way of the\n>>> regular i/o.\n>>> - Added some comments, ran pgindent, added a commit message.\n>>\n>> Nice, that was quick ;-)\n> \n> :)\n> \n>>> I do think we should also include patch that Jakub wrote previously\n>>> which adds information about the read rate of ANALYZE.\n>>\n>> +1\n> \n> Attached is an updated patch which updates the documentation and\n> integrates Jakub's initial work on improving the logging around\n> auto-analyze (and I made the logging in auto-vacuum more-or-less match\n> it).\n> \n\nThanks. I'll do some testing/benchmarking once my machines are free, in\na couple days perhaps. 
But as I said before, I don't expect this to\nbehave very differently from other places that already do prefetching.\n\n>>> I'll look at integrating that into this patch and then look at a new\n>>> patch to do something similar for VACUUM in a bit.\n>>\n>> +1\n> \n> I spent some time looking into this but it's a bit complicated.. For\n> some sound reasons, VACUUM will avoid skipping through a table when\n> there's only a few pages that it could skip (not skipping allows us to\n> move forward the relfrozenxid). That said, perhaps we could start doing\n> prefetching once we've decided that we're skipping. We'd need to think\n> about if we have to worry about the VM changing between the pre-fetching\n> and the time when we're actually going to ask for the page.. I don't\n> *think* that's an issue because only VACUUM would be changing the pages\n> to be all-frozen or all-visible, and so if we see a page that isn't one\n> of those then we're going to want to visit that page and that's not\n> going to change, and we probably don't need to care about a page that\n> used to be all-frozen and now isn't during this run- but if the prefetch\n> went ahead and got page 10, and now page 8 is not-all-frozen and the\n> actual scan is at page 5, then maybe it wants page 8 next and that isn't\n> what we pre-fetched...\n> \n> Anyhow, all-in-all, definitely more complicated and probably best\n> considered and discussed independently>\n\n+1\n\nFWIW I wonder if this should be tracked separately in the CF app, as\nit's very different from the original \"add some logging\" patch, which\nmakes the CF entry rather misleading.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 9 Nov 2020 19:19:59 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> Thanks. I'll do some testing/benchmarking once my machines are free, in\n> a couple days perhaps. But as I said before, I don't expect this to\n> behave very differently from other places that already do prefetching.\n\nAgreed, but would still be nice to see test results beyond just what\nI've done.\n\n> FWIW I wonder if this should be tracked separately in the CF app, as\n> it's very different from the original \"add some logging\" patch, which\n> makes the CF entry rather misleading.\n\nI've gone ahead and updated the CF entry for this to hopefully make it\nclearer for those interested in looking at it. I'll try to come back to\nthis in the next CF, ideally we'd at least get someone else to take a\nlook at the code beyond me. :) (Obviously, you looked at it some, but\nwasn't really clear if you were alright with it or if you felt it needed\nmore review.)\n\nThanks!\n\nStephen",
"msg_date": "Sat, 5 Dec 2020 11:39:26 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> > Thanks. I'll do some testing/benchmarking once my machines are free, in\n> > a couple days perhaps. But as I said before, I don't expect this to\n> > behave very differently from other places that already do prefetching.\n> \n> Agreed, but would still be nice to see test results beyond just what\n> I've done.\n\nAny chance you were able to run those tests..?\n\n> > FWIW I wonder if this should be tracked separately in the CF app, as\n> > it's very different from the original \"add some logging\" patch, which\n> > makes the CF entry rather misleading.\n> \n> I've gone ahead and updated the CF entry for this to hopefully make it\n> clearer for those interested in looking at it. I'll try to come back to\n> this in the next CF, ideally we'd at least get someone else to take a\n> look at the code beyond me. :) (Obviously, you looked at it some, but\n> wasn't really clear if you were alright with it or if you felt it needed\n> more review.)\n\nI've updated the patch to leverage Tom's introduction of\nTimestatmpDifferenceMilliseconds, which simplifies things a bit (and I\ndon't think we need to worry about an analyze taking over 25 days...)\nand generally rebased this up to current HEAD.\n\nWould be great to get a review / comments from others as to if there's\nany concerns. I'll admit that it seems reasonably straight-forward to\nme, but hey, I wrote most of it, so that's not really a fair\nassessment... ;)\n\nThanks,\n\nStephen",
"msg_date": "Wed, 13 Jan 2021 16:17:24 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "On 13/01/2021 23:17, Stephen Frost wrote:\n> Would be great to get a review / comments from others as to if there's\n> any concerns. I'll admit that it seems reasonably straight-forward to\n> me, but hey, I wrote most of it, so that's not really a fair\n> assessment... ;)\n\nLook good overall. A few minor comments:\n\nThe patch consists of two part: add stats to the log for auto-analyze, \nand implement prefetching. They seem like independent features, consider \nsplitting into two patches.\n\nIt's a bit weird that you get more stats in the log for \nautovacuum/autoanalyze than you get with VACUUM/ANALYZE VERBOSE. Not \nreally this patch's fault though.\n\nThis conflicts with the patch at \nhttps://commitfest.postgresql.org/31/2907/, to refactor the table AM \nanalyze API. That's OK, it's straightforward to resolve regardless of \nwhich patch is committed first.\n\n> \t/* Outer loop over blocks to sample */\n> \twhile (BlockSampler_HasMore(&bs))\n> \t{\n> #ifdef USE_PREFETCH\n> \t\tBlockNumber prefetch_targblock = InvalidBlockNumber;\n> #endif\n> \t\tBlockNumber targblock = BlockSampler_Next(&bs);\n> \n> #ifdef USE_PREFETCH\n> \n> \t\t/*\n> \t\t * Make sure that every time the main BlockSampler is moved forward\n> \t\t * that our prefetch BlockSampler also gets moved forward, so that we\n> \t\t * always stay out ahead.\n> \t\t */\n> \t\tif (BlockSampler_HasMore(&prefetch_bs))\n> \t\t\tprefetch_targblock = BlockSampler_Next(&prefetch_bs);\n> #endif\n> \n> \t\tvacuum_delay_point();\n> \n> \t\tif (!table_scan_analyze_next_block(scan, targblock, vac_strategy))\n> \t\t\tcontinue;\n> \n> #ifdef USE_PREFETCH\n> \n> \t\t/*\n> \t\t * When pre-fetching, after we get a block, tell the kernel about the\n> \t\t * next one we will want, if there's any left.\n> \t\t */\n> \t\tif (effective_io_concurrency && prefetch_targblock != InvalidBlockNumber)\n> \t\t\tPrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, prefetch_targblock);\n> #endif\n> \n> \t\twhile 
(table_scan_analyze_next_tuple(scan, OldestXmin, &liverows, &deadrows, slot))\n> \t\t{\n> \t\t\t...\n> \t\t}\n> \n> \t\tpgstat_progress_update_param(PROGRESS_ANALYZE_BLOCKS_DONE,\n> \t\t\t\t\t\t\t\t\t ++blksdone);\n> \t}\n\nIf effective_io_concurrency == 0, this calls \nBlockSampler_Next(&prefetch_bs) anyway, which is a waste of cycles.\n\nIf table_scan_analyze_next_block() returns false, we skip the \nPrefetchBuffer() call. That seems wrong.\n\nIs there any potential harm from calling PrefetchBuffer() on a page that \ntable_scan_analyze_next_block() later deems unsuitable for sampling \nand returns false? That's theoretical at the moment, because \nheapam_scan_analyze_next_block() always returns true. (The tableam \nANALYZE API refactor patch will make this moot, as it moves this logic \ninto the tableam's implementation, so the implementation can do whatever \nmakes sense for the particular AM.)\n\n- Heikki\n\n\n",
"msg_date": "Tue, 26 Jan 2021 12:23:29 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> On 13/01/2021 23:17, Stephen Frost wrote:\n> >Would be great to get a review / comments from others as to if there's\n> >any concerns. I'll admit that it seems reasonably straight-forward to\n> >me, but hey, I wrote most of it, so that's not really a fair\n> >assessment... ;)\n> \n> Look good overall. A few minor comments:\n\nThanks a lot for the review!\n\n> The patch consists of two part: add stats to the log for auto-analyze, and\n> implement prefetching. They seem like independent features, consider\n> splitting into two patches.\n\nYeah, that's a good point. I had anticipated that there would be\noverlap but in the end there really wasn't. Done in the attached.\n\n> It's a bit weird that you get more stats in the log for\n> autovacuum/autoanalyze than you get with VACUUM/ANALYZE VERBOSE. Not really\n> this patch's fault though.\n\nAgreed.\n\n> This conflicts with the patch at https://commitfest.postgresql.org/31/2907/,\n> to refactor the table AM analyze API. 
That's OK, it's straightforward to\n> resolve regardless of which patch is committed first.\n\nAgreed.\n\n> >\t/* Outer loop over blocks to sample */\n> >\twhile (BlockSampler_HasMore(&bs))\n> >\t{\n> >#ifdef USE_PREFETCH\n> >\t\tBlockNumber prefetch_targblock = InvalidBlockNumber;\n> >#endif\n> >\t\tBlockNumber targblock = BlockSampler_Next(&bs);\n> >\n> >#ifdef USE_PREFETCH\n> >\n> >\t\t/*\n> >\t\t * Make sure that every time the main BlockSampler is moved forward\n> >\t\t * that our prefetch BlockSampler also gets moved forward, so that we\n> >\t\t * always stay out ahead.\n> >\t\t */\n> >\t\tif (BlockSampler_HasMore(&prefetch_bs))\n> >\t\t\tprefetch_targblock = BlockSampler_Next(&prefetch_bs);\n> >#endif\n> >\n> >\t\tvacuum_delay_point();\n> >\n> >\t\tif (!table_scan_analyze_next_block(scan, targblock, vac_strategy))\n> >\t\t\tcontinue;\n> >\n> >#ifdef USE_PREFETCH\n> >\n> >\t\t/*\n> >\t\t * When pre-fetching, after we get a block, tell the kernel about the\n> >\t\t * next one we will want, if there's any left.\n> >\t\t */\n> >\t\tif (effective_io_concurrency && prefetch_targblock != InvalidBlockNumber)\n> >\t\t\tPrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, prefetch_targblock);\n> >#endif\n> >\n> >\t\twhile (table_scan_analyze_next_tuple(scan, OldestXmin, &liverows, &deadrows, slot))\n> >\t\t{\n> >\t\t\t...\n> >\t\t}\n> >\n> >\t\tpgstat_progress_update_param(PROGRESS_ANALYZE_BLOCKS_DONE,\n> >\t\t\t\t\t\t\t\t\t ++blksdone);\n> >\t}\n> \n> If effective_io_concurrency == 0, this calls BlockSampler_Next(&prefetch_bs)\n> anyway, which is a waste of cycles.\n\nGood point, fixed.\n\n> If table_scan_analyze_next_block() returns false, we skip the\n> PrefetchBuffer() call. That seem wrong.\n\nAgreed, fixed.\n\n> Is there any potential harm from calling PrefetchBuffer() on a page that\n> table_scan_analyze_next_block() later deems as unsuitable for smapling and\n> returns false? 
That's theoretical at the moment, because\n> heapam_scan_analyze_next_block() always returns true. (The tableam ANALYZE\n> API refactor patch will make this moot, as it moves this logic into the\n> tableam's implementation, so the implementation can do whatever makes sense\n> for the particular AM.)\n\nI can't see any potential harm, and it seems pretty likely that if a\nheapam_scan_analyze_next_block()-equivalent were to decide that a block\nisn't appropriate to analyze it'd have to do so after reading that block\nanyway, making the prefetch still useful.\n\nPerhaps there'll be a case in the future where a given AM would know\nbased on just the block number that it isn't useful to analyze, in which\ncase it'd make sense to adjust the code to skip that block for both\nprefetching and actually reading, but I don't think that would be too\nhard to do. Doesn't seem sensible to invent that in advance of actually\nhaving that case though - it's certainly not the case for heap AM today,\nat least, as you say.\n\nUnless there's anything else on this, I'll commit these sometime next\nweek.\n\nThanks again for the review!\n\nStephen",
"msg_date": "Fri, 5 Feb 2021 16:22:34 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "On 05/02/2021 23:22, Stephen Frost wrote:\n> Unless there's anything else on this, I'll commit these sometime next\n> week.\n\nOne more thing: Instead of using 'effective_io_concurrency' GUC \ndirectly, should call get_tablespace_maintenance_io_concurrency().\n\n- Heikki\n\n\n",
"msg_date": "Mon, 8 Feb 2021 13:24:37 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> On 05/02/2021 23:22, Stephen Frost wrote:\n> >Unless there's anything else on this, I'll commit these sometime next\n> >week.\n> \n> One more thing: Instead of using 'effective_io_concurrency' GUC directly,\n> should call get_tablespace_maintenance_io_concurrency().\n\nAh, yeah, of course.\n\nUpdated patch attached.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 10 Feb 2021 17:10:08 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Hi,\n\nOn 2/10/21 11:10 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n>> On 05/02/2021 23:22, Stephen Frost wrote:\n>>> Unless there's anything else on this, I'll commit these sometime next\n>>> week.\n>>\n>> One more thing: Instead of using 'effective_io_concurrency' GUC directly,\n>> should call get_tablespace_maintenance_io_concurrency().\n> \n> Ah, yeah, of course.\n> \n> Updated patch attached.\n> \n\nA couple minor comments:\n\n1) I think the patch should be split into two parts, one adding the\ntrack_io_timing, one adding the prefetching.\n\n2) I haven't tried but I'm pretty sure there'll be a compiler warning\nabout 'prefetch_maximum' being unused without USE_PREFETCH defined.\n\n3) Is there a way to reduce the amount of #ifdef in acquire_sample_rows?\n\nThis makes the code rather hard to read, IMHO. It seems to me we can\nmove the code around a bit and merge some of the #ifdef blocks - see the\nattached patch. Most of this is fairly trivial, with the exception of\nmoving PrefetchBuffer before table_scan_analyze_next_block - AFAIK this\ndoes not materially change the behavior, but perhaps I'm wrong.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 8 Mar 2021 04:26:18 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 2/10/21 11:10 PM, Stephen Frost wrote:\n> > * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> >> On 05/02/2021 23:22, Stephen Frost wrote:\n> >>> Unless there's anything else on this, I'll commit these sometime next\n> >>> week.\n> >>\n> >> One more thing: Instead of using 'effective_io_concurrency' GUC directly,\n> >> should call get_tablespace_maintenance_io_concurrency().\n> > \n> > Ah, yeah, of course.\n> > \n> > Updated patch attached.\n> \n> A couple minor comments:\n> \n> 1) I think the patch should be split into two parts, one adding the\n> track_io_timing, one adding the prefetching.\n\nThis was already done..\n\n> 2) I haven't tried but I'm pretty sure there'll be a compiler warning\n> about 'prefetch_maximum' being unused without USE_PREFETCH defined.\n\nAh, that part is likely true, moved down into the #ifdef block to\naddress that, which also is good since it should avoid mistakenly using\nit outside of the #ifdef's later on by mistake too.\n\n> 3) Is there a way to reduce the amount of #ifdef in acquire_sample_rows?\n\nPerhaps..\n\n> This makes the code rather hard to read, IMHO. It seems to me we can\n> move the code around a bit and merge some of the #ifdef blocks - see the\n> attached patch. Most of this is fairly trivial, with the exception of\n> moving PrefetchBuffer before table_scan_analyze_next_block - AFAIK this\n> does not materially change the behavior, but perhaps I'm wrong.\n\nbut I don't particularly like doing the prefetch right before we run\nvacuum_delay_point() and potentially sleep.\n\nRebased and updated patch attached.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 8 Mar 2021 14:42:53 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "On 3/8/21 8:42 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n>> On 2/10/21 11:10 PM, Stephen Frost wrote:\n>>> * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n>>>> On 05/02/2021 23:22, Stephen Frost wrote:\n>>>>> Unless there's anything else on this, I'll commit these sometime next\n>>>>> week.\n>>>>\n>>>> One more thing: Instead of using 'effective_io_concurrency' GUC directly,\n>>>> should call get_tablespace_maintenance_io_concurrency().\n>>>\n>>> Ah, yeah, of course.\n>>>\n>>> Updated patch attached.\n>>\n>> A couple minor comments:\n>>\n>> 1) I think the patch should be split into two parts, one adding the\n>> track_io_timing, one adding the prefetching.\n> \n> This was already done..\n> \n\nNot sure what you mean by \"done\"? I see the patch still does both\nchanges related to track_io_timing and prefetching.\n\n>> 2) I haven't tried but I'm pretty sure there'll be a compiler warning\n>> about 'prefetch_maximum' being unused without USE_PREFETCH defined.\n> \n> Ah, that part is likely true, moved down into the #ifdef block to\n> address that, which also is good since it should avoid mistakenly using\n> it outside of the #ifdef's later on by mistake too.\n> \n\nOK\n\n>> 3) Is there a way to reduce the amount of #ifdef in acquire_sample_rows?\n> \n> Perhaps..\n> \n>> This makes the code rather hard to read, IMHO. It seems to me we can\n>> move the code around a bit and merge some of the #ifdef blocks - see the\n>> attached patch. Most of this is fairly trivial, with the exception of\n>> moving PrefetchBuffer before table_scan_analyze_next_block - AFAIK this\n>> does not materially change the behavior, but perhaps I'm wrong.\n> \n> but I don't particularly like doing the prefetch right before we run\n> vacuum_delay_point() and potentially sleep.\n> \n\nWhy? Is that just a matter of personal preference (fair enough) or is\nthere a reason why that would be wrong/harmful?\n\nI think e.g. 
prefetch_targblock could be moved to the next #ifdef, which\nwill eliminate the one-line ifdef.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 10 Mar 2021 23:29:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 3/8/21 8:42 PM, Stephen Frost wrote:\n> > * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> >> On 2/10/21 11:10 PM, Stephen Frost wrote:\n> >>> * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> >>>> On 05/02/2021 23:22, Stephen Frost wrote:\n> >>>>> Unless there's anything else on this, I'll commit these sometime next\n> >>>>> week.\n> >>>>\n> >>>> One more thing: Instead of using 'effective_io_concurrency' GUC directly,\n> >>>> should call get_tablespace_maintenance_io_concurrency().\n> >>>\n> >>> Ah, yeah, of course.\n> >>>\n> >>> Updated patch attached.\n> >>\n> >> A couple minor comments:\n> >>\n> >> 1) I think the patch should be split into two parts, one adding the\n> >> track_io_timing, one adding the prefetching.\n> > \n> > This was already done..\n> \n> Not sure what you mean by \"done\"? I see the patch still does both\n> changes related to track_io_timing and prefetching.\n\nThey are two patches..\n\n➜ ~ grep Subject analyze_prefetch_v8.patch \nSubject: [PATCH 1/2] Improve logging of auto-vacuum and auto-analyze\nSubject: [PATCH 2/2] Use pre-fetching for ANALYZE\n\nThe first doesn't have any prefetch-related things, the second doesn't\nhave any track_io_timing things..\n\n> >> 3) Is there a way to reduce the amount of #ifdef in acquire_sample_rows?\n> > \n> > Perhaps..\n> > \n> >> This makes the code rather hard to read, IMHO. It seems to me we can\n> >> move the code around a bit and merge some of the #ifdef blocks - see the\n> >> attached patch. Most of this is fairly trivial, with the exception of\n> >> moving PrefetchBuffer before table_scan_analyze_next_block - AFAIK this\n> >> does not materially change the behavior, but perhaps I'm wrong.\n> > \n> > but I don't particularly like doing the prefetch right before we run\n> > vacuum_delay_point() and potentially sleep.\n> \n> Why? 
Is that just a matter of personal preference (fair enough) or is\n> there a reason why that would be wrong/harmful?\n\nTelling the kernel \"hey, we're about to need this, please go get it\" and\nthen immediately going to sleep just strikes me as a bit off. We should\nbe trying to minimize the time between prefetch and actual request for\nthe page. Of course, there's going to be some times that we will issue\na prefetch and then come around again and end up hitting the limit and\nsleeping before we actually request the page and maybe it doesn't\nultimately matter much but it just seems better to sleep first before\nissuing the prefetch to minimize the amount 'outstanding' when we do end\nup sleeping. One thing about prefetching is that the kernel is\ncertainly within its rights to decide to drop the page before we\nactually go to read it, if it's under pressure and we're just sleeping.\n\n> I think e.g. prefetch_targblock could be moved to the next #ifdef, which\n> will eliminate the one-line ifdef.\n\nSure, done in the attached.\n\nThanks for the review! Unless there's other comments, I'll plan to push\nthis over the weekend or early next week.\n\nStephen",
"msg_date": "Thu, 11 Mar 2021 19:11:56 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "\n\nOn 3/12/21 1:11 AM, Stephen Frost wrote:\n> Greetings,\n> \n> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n>> On 3/8/21 8:42 PM, Stephen Frost wrote:\n>>> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n>>>> On 2/10/21 11:10 PM, Stephen Frost wrote:\n>>>>> * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n>>>>>> On 05/02/2021 23:22, Stephen Frost wrote:\n>>>>>>> Unless there's anything else on this, I'll commit these sometime next\n>>>>>>> week.\n>>>>>>\n>>>>>> One more thing: Instead of using 'effective_io_concurrency' GUC directly,\n>>>>>> should call get_tablespace_maintenance_io_concurrency().\n>>>>>\n>>>>> Ah, yeah, of course.\n>>>>>\n>>>>> Updated patch attached.\n>>>>\n>>>> A couple minor comments:\n>>>>\n>>>> 1) I think the patch should be split into two parts, one adding the\n>>>> track_io_timing, one adding the prefetching.\n>>>\n>>> This was already done..\n>>\n>> Not sure what you mean by \"done\"? I see the patch still does both\n>> changes related to track_io_timing and prefetching.\n> \n> They are two patches..\n> \n> ➜ ~ grep Subject analyze_prefetch_v8.patch \n> Subject: [PATCH 1/2] Improve logging of auto-vacuum and auto-analyze\n> Subject: [PATCH 2/2] Use pre-fetching for ANALYZE\n> \n> The first doesn't have any prefetch-related things, the second doesn't\n> have any track_io_timing things..\n> \n\nOh! I didn't realize a single file can contain multiple separate\npatches, so I saw a single file and assumed it's one patch. How do you\nproduce a single file, and is that better than just generating multiple\nfiles using git format-patch?\n\n>>>> 3) Is there a way to reduce the amount of #ifdef in acquire_sample_rows?\n>>>\n>>> Perhaps..\n>>>\n>>>> This makes the code rather hard to read, IMHO. It seems to me we can\n>>>> move the code around a bit and merge some of the #ifdef blocks - see the\n>>>> attached patch. 
Most of this is fairly trivial, with the exception of\n>>>> moving PrefetchBuffer before table_scan_analyze_next_block - AFAIK this\n>>>> does not materially change the behavior, but perhaps I'm wrong.\n>>>\n>>> but I don't particularly like doing the prefetch right before we run\n>>> vacuum_delay_point() and potentially sleep.\n>>\n>> Why? Is that just a matter of personal preference (fair enough) or is\n>> there a reason why that would be wrong/harmful?\n> \n> Telling the kernel \"hey, we're about to need this, please go get it\" and\n> then immediately going to sleep just strikes me as a bit off. We should\n> be trying to minimize the time between prefetch and actual request for\n> the page. Of course, there's going to be some times that we will issue\n> a prefetch and then come around again and end up hitting the limit and\n> sleeping before we actually request the page and maybe it doesn't\n> ultimately matter much but it just seems better to sleep first before\n> issuing the prefetch to minimize the amount 'outstanding' when we do end\n> up sleeping. One thing about prefetching is that the kernel is\n> certainly within its rights to decide to drop the page before we\n> actually go to read it, if it's under pressure and we're just sleeping.\n> \n\nI don't know, but this argument seems rather hand-wavy to me, TBH.\n\nFirstly, we're prefetching quite far ahead anyway, so there's almost\nalways going to be a vacuum_delay_point() call between you issue the\nprefetch and actually using the page. 
Secondly, vacuum_delay_point won't\nreally sleep for most of the calls (assuming the cost limit is not\nunreasonably small).\n\nThe way I see it (a) it's unlikely a particular vacuum_delay_point()\ncall will sleep, while at the same time (b) it's quite probable one of\nthe calls between prefetching and using the page will sleep.\n\nSo I think the exact ordering of those two calls does not really matter,\nat least not for this particular reason.\n\n\nThat being said, I don't feel strongly about this - if you prefer it\nlike this, so be it. I can live with two ifdefs instead of one.\n\n>> I think e.g. prefetch_targblock could be moved to the next #ifdef, which\n>> will eliminate the one-line ifdef.\n> \n> Sure, done in the attached.\n> \n> Thanks for the review! Unless there's other comments, I'll plan to push\n> this over the weekend or early next week.\n> \n\n+1\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Mar 2021 01:29:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 3/12/21 1:11 AM, Stephen Frost wrote:\n> > * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> >> On 3/8/21 8:42 PM, Stephen Frost wrote:\n> >>> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> >>>> On 2/10/21 11:10 PM, Stephen Frost wrote:\n> >>>>> * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> >>>>>> On 05/02/2021 23:22, Stephen Frost wrote:\n> >>>>>>> Unless there's anything else on this, I'll commit these sometime next\n> >>>>>>> week.\n> >>>>>>\n> >>>>>> One more thing: Instead of using 'effective_io_concurrency' GUC directly,\n> >>>>>> should call get_tablespace_maintenance_io_concurrency().\n> >>>>>\n> >>>>> Ah, yeah, of course.\n> >>>>>\n> >>>>> Updated patch attached.\n> >>>>\n> >>>> A couple minor comments:\n> >>>>\n> >>>> 1) I think the patch should be split into two parts, one adding the\n> >>>> track_io_timing, one adding the prefetching.\n> >>>\n> >>> This was already done..\n> >>\n> >> Not sure what you mean by \"done\"? I see the patch still does both\n> >> changes related to track_io_timing and prefetching.\n> > \n> > They are two patches..\n> > \n> > ➜ ~ grep Subject analyze_prefetch_v8.patch \n> > Subject: [PATCH 1/2] Improve logging of auto-vacuum and auto-analyze\n> > Subject: [PATCH 2/2] Use pre-fetching for ANALYZE\n> > \n> > The first doesn't have any prefetch-related things, the second doesn't\n> > have any track_io_timing things..\n> \n> Oh! I didn't realize a single file can contain multiple separate\n> patches, so I saw a single file and assumed it's one patch. How do you\n> produce a single file, and is that better than just generating multiple\n> files using git format-patch?\n\nI use my gfp alias, which sends to stdout:\n\ngfp='git format-patch @{u} --stdout'\n\nSo I only have one file to move around and there's no question about the\nordering of applying them and such.. 
Not sure if everyone would agree\nthat it's better or worse but I do find it works a bit better for me\nthan dealing with multiple files.\n\n> >>>> 3) Is there a way to reduce the amount of #ifdef in acquire_sample_rows?\n> >>>\n> >>> Perhaps..\n> >>>\n> >>>> This makes the code rather hard to read, IMHO. It seems to me we can\n> >>>> move the code around a bit and merge some of the #ifdef blocks - see the\n> >>>> attached patch. Most of this is fairly trivial, with the exception of\n> >>>> moving PrefetchBuffer before table_scan_analyze_next_block - AFAIK this\n> >>>> does not materially change the behavior, but perhaps I'm wrong.\n> >>>\n> >>> but I don't particularly like doing the prefetch right before we run\n> >>> vacuum_delay_point() and potentially sleep.\n> >>\n> >> Why? Is that just a matter of personal preference (fair enough) or is\n> >> there a reason why that would be wrong/harmful?\n> > \n> > Telling the kernel \"hey, we're about to need this, please go get it\" and\n> > then immediately going to sleep just strikes me as a bit off. We should\n> > be trying to minimize the time between prefetch and actual request for\n> > the page. Of course, there's going to be some times that we will issue\n> > a prefetch and then come around again and end up hitting the limit and\n> > sleeping before we actually request the page and maybe it doesn't\n> > ultimately matter much but it just seems better to sleep first before\n> > issuing the prefetch to minimize the amount 'outstanding' when we do end\n> > up sleeping. 
One thing about prefetching is that the kernel is\n> > certainly within its rights to decide to drop the page before we\n> > actually go to read it, if it's under pressure and we're just sleeping.\n> \n> I don't know, but this argument seems rather hand-wavy to me, TBH.\n\nI agree it is, but then again, the only argument raised against doing it\nthis way is that it might reduce the number of #ifdef's and, at least\nfor my part, that doesn't seem like a terribly strong argument in this\ncase.\n\n> Firstly, we're prefetching quite far ahead anyway, so there's almost\n> always going to be a vacuum_delay_point() call between you issue the\n> prefetch and actually using the page. Secondly, vacuum_delay_point won't\n> really sleep for most of the calls (assuming the cost limit is not\n> unreasonably small).\n> \n> The way I see it (a) it's unlikely a particular vacuum_delay_point()\n> call will sleep, while at the same time (b) it's quite probable one of\n> the calls between prefetching and using the page will sleep.\n\nDoing the prefetch before the sleep sets us up in a position where\nthere'll always be one more page that's been prefetched while we're\nsleeping, no matter when that sleep actually happens, than if we do the\nprefetch after the sleep. Considering the default's only 10, that's 10%\nof the pages prefetched.\n\n> So I think the exact ordering of those two calls does not really matter,\n> at least not for this particular reason.\n\nI don't see it as a huge deal, but..\n\n> That being said, I don't feel strongly about this - if you prefer it\n> like this, so be it. I can live with two ifdefs instead of one.\n\nYeah, I do think I prefer it.\n\n> >> I think e.g. prefetch_targblock could be moved to the next #ifdef, which\n> >> will eliminate the one-line ifdef.\n> > \n> > Sure, done in the attached.\n> > \n> > Thanks for the review! 
Unless there's other comments, I'll plan to push\n> > this over the weekend or early next week.\n> \n> +1\n\nThanks again for the review!\n\nStephen",
"msg_date": "Thu, 11 Mar 2021 19:41:04 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> > >> I think e.g. prefetch_targblock could be moved to the next #ifdef, which\n> > >> will eliminate the one-line ifdef.\n> > > \n> > > Sure, done in the attached.\n> > > \n> > > Thanks for the review! Unless there's other comments, I'll plan to push\n> > > this over the weekend or early next week.\n> > \n> > +1\n> \n> Thanks again for the review!\n\nand pushed.\n\nThanks everyone for the suggestions and reviews, and to Jakub in\nparticular for starting us down this path towards improving things in\nANALYZE.\n\nStephen",
"msg_date": "Tue, 16 Mar 2021 15:19:25 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Hi,\n\nOn 11.02.2021 01:10, Stephen Frost wrote:\n\n> Greetings,\n>\n> * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n>> On 05/02/2021 23:22, Stephen Frost wrote:\n>>> Unless there's anything else on this, I'll commit these sometime next\n>>> week.\n>> One more thing: Instead of using 'effective_io_concurrency' GUC directly,\n>> should call get_tablespace_maintenance_io_concurrency().\n> Ah, yeah, of course.\n>\n> Updated patch attached.\n\n\nI happened to notice that get_tablespace_io_concurrency() is called \ninstead of get_tablespace_maintenance_io_concurrency(). It doesn't look \nright, no?\n\n\nRegards,\nEgor Rogov.\n\n\n\n",
"msg_date": "Tue, 25 May 2021 19:00:46 +0300",
"msg_from": "Egor Rogov <e.rogov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Egor Rogov (e.rogov@postgrespro.ru) wrote:\n> On 11.02.2021 01:10, Stephen Frost wrote:\n> >* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> >>On 05/02/2021 23:22, Stephen Frost wrote:\n> >>>Unless there's anything else on this, I'll commit these sometime next\n> >>>week.\n> >>One more thing: Instead of using 'effective_io_concurrency' GUC directly,\n> >>should call get_tablespace_maintenance_io_concurrency().\n> >Ah, yeah, of course.\n> >\n> >Updated patch attached.\n> \n> I happened to notice that get_tablespace_io_concurrency() is called instead\n> of get_tablespace_maintenance_io_concurrency(). It doesn't look right, no?\n\nHah, indeed you're right. Will fix.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 25 May 2021 20:36:52 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "This patch adds hits/misses/dirtied, but explain says hit/read/dirtied/written.\n\nShould it say \"read\" instead of \"misses\" ?\n\nsrc/backend/access/heap/vacuumlazy.c: _(\"buffer usage: %lld hits, %lld misses, %lld dirtied\\n\"),\n\nsrc/backend/commands/explain.c- appendStringInfo(es->str, \" hit=%lld\",\nsrc/backend/commands/explain.c- (long long) usage->shared_blks_hit);\nsrc/backend/commands/explain.c- if (usage->shared_blks_read > 0)\nsrc/backend/commands/explain.c- appendStringInfo(es->str, \" read=%lld\",\nsrc/backend/commands/explain.c- (long long) usage->shared_blks_read);\nsrc/backend/commands/explain.c- if (usage->shared_blks_dirtied > 0)\nsrc/backend/commands/explain.c: appendStringInfo(es->str, \" dirtied=%lld\",\nsrc/backend/commands/explain.c- (long long) usage->shared_blks_dirtied);\nsrc/backend/commands/explain.c- if (usage->shared_blks_written > 0)\nsrc/backend/commands/explain.c- appendStringInfo(es->str, \" written=%lld\",\nsrc/backend/commands/explain.c- (long long) usage->shared_blks_written);\nsrc/backend/commands/explain.c- if (has_local || has_temp)\n\n\n",
"msg_date": "Tue, 25 May 2021 19:49:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Justin Pryzby (pryzby@telsasoft.com) wrote:\n> This patch adds hits/misses/dirtied, but explain says hit/read/dirtied/written.\n> \n> Should it say \"read\" instead of \"misses\" ?\n> \n> src/backend/access/heap/vacuumlazy.c: _(\"buffer usage: %lld hits, %lld misses, %lld dirtied\\n\"),\n\nErrr, the analyze readahead patch set didn't add the above use of\n'misses', that's been around for an age or more. For giggles, I hunted\nit back to be this 2011 commit: 9d3b50244357ef4c4e3b6e01f91de599077179c8\n\nAs for the \"misses\" usage which was actually added by the patch here:\n\nsrc/backend/commands/analyze.c: appendStringInfo(&buf, _(\"buffer usage: %lld hits, %lld misses, %lld dirtied\\n\"),\n\nThat was specifically done to match the 'vacuumlazy.c' usage of 'misses'\nabove.\n\nAs for which to use, I don't have any particular dog in that fight, but\nI do think we should be consistent between how VACUUM reports this\ninformation and how ANALYZE does.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 25 May 2021 20:57:31 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Egor Rogov (e.rogov@postgrespro.ru) wrote:\n> > On 11.02.2021 01:10, Stephen Frost wrote:\n> > >* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> > >>On 05/02/2021 23:22, Stephen Frost wrote:\n> > >>>Unless there's anything else on this, I'll commit these sometime next\n> > >>>week.\n> > >>One more thing: Instead of using 'effective_io_concurrency' GUC directly,\n> > >>should call get_tablespace_maintenance_io_concurrency().\n> > >Ah, yeah, of course.\n> > >\n> > >Updated patch attached.\n> > \n> > I happened to notice that get_tablespace_io_concurrency() is called instead\n> > of get_tablespace_maintenance_io_concurrency(). It doesn't look right, no?\n> \n> Hah, indeed you're right. Will fix.\n\nFound this under a bit of a pile in my todo list. :)\n\nFix pushed.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 27 Aug 2021 19:24:55 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: automatic analyze: readahead - add \"IO read time\" log message"
}
] |
[
{
"msg_contents": "Hi all.\n\nBack in 2016 I started a thread about making cancellations safer[1], I'd\nlike to try to pick this up again. Here a summary of the previous\nconversation:\n\nThe main ask here is to allow clients to specify which command to cancel,\nto avoid various race conditions where the wrong command is accidentally\ncancelled. The implementation proposal by Tom was for the client and server\nto simply count messages on each side, and to modify the CancelRequest\nmessage for the client to integrate the requested sequence number to be\ncancelled.\n\nThe CancelRequest format change can probably happen in a non-breaking way,\ne.g. by defining a new cancel request code (the 1234/5678 at the beginning\nof the message). The client could know that the server is able to accept\nthe new CancelRequest format via a minor protocol bump, or some other means.\n\nThere was also discussion of allowing \"cancel up to\" semantics, where all\nmessages up to the provided sequence number are cancelled. While this could\nbe useful, it's important to allow setting a lower bound as well. A new\nexperimental mode in the Npgsql driver \"multiplexes\" queries from unrelated\nthreads into the same physical connection, possibly pipelining two\nunrelated queries go into a single outgoing TCP packet. In this case, the\nsecond query producer may issue a cancellation, and unintentionally cancel\nthe first query which is still in progress. So ideally the CancelRequest\nwould contain both a lower and an upper bound (with the lower being\noptional).\n\nSince cancellation requests may arrive before their target queries do, the\nbackend should be able to track unfulfilled requests and apply them as soon\nas the target query is received.\n\nFinally, there was also an orthogonal discussion about widening the cancel\nkey (currently 32-bit). 
However, that seems like it would be a protocol\nbreaking change, so possibly better treated separately.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CADT4RqDh1CEgz7QgKwYSLT9TMCk7O%3DncauUaSQKVt_nPNTE9wQ%40mail.gmail.com#00832c72f4b93d57d6b0ac59de8eca85\n\nHi all.Back in 2016 I started a thread about making cancellations safer[1], I'd like to try to pick this up again. Here a summary of the previous conversation:The main ask here is to allow clients to specify which command to cancel, to avoid various race conditions where the wrong command is accidentally cancelled. The implementation proposal by Tom was for the client and server to simply count messages on each side, and to modify the CancelRequest message for the client to integrate the requested sequence number to be cancelled.The CancelRequest format change can probably happen in a non-breaking way, e.g. by defining a new cancel request code (the 1234/5678 at the beginning of the message). The client could know that the server is able to accept the new CancelRequest format via a minor protocol bump, or some other means.There was also discussion of allowing \"cancel up to\" semantics, where all messages up to the provided sequence number are cancelled. While this could be useful, it's important to allow setting a lower bound as well. A new experimental mode in the Npgsql driver \"multiplexes\" queries from unrelated threads into the same physical connection, possibly pipelining two unrelated queries go into a single outgoing TCP packet. In this case, the second query producer may issue a cancellation, and unintentionally cancel the first query which is still in progress. 
So ideally the CancelRequest would contain both a lower and an upper bound (with the lower being optional).Since cancellation requests may arrive before their target queries do, the backend should be able to track unfulfilled requests and apply them as soon as the target query is received.Finally, there was also an orthogonal discussion about widening the cancel key (currently 32-bit). However, that seems like it would be a protocol breaking change, so possibly better treated separately.[1] https://www.postgresql.org/message-id/flat/CADT4RqDh1CEgz7QgKwYSLT9TMCk7O%3DncauUaSQKVt_nPNTE9wQ%40mail.gmail.com#00832c72f4b93d57d6b0ac59de8eca85",
"msg_date": "Wed, 4 Nov 2020 17:50:08 +0200",
"msg_from": "Shay Rojansky <roji@roji.org>",
"msg_from_op": true,
"msg_subject": "Making cancellations safe"
},
{
"msg_contents": "On Wed, 4 Nov 2020 at 10:50, Shay Rojansky <roji@roji.org> wrote:\n\n> Hi all.\n>\n> Back in 2016 I started a thread about making cancellations safer[1], I'd\n> like to try to pick this up again. Here a summary of the previous\n> conversation:\n>\n> The main ask here is to allow clients to specify which command to cancel,\n> to avoid various race conditions where the wrong command is accidentally\n> cancelled. The implementation proposal by Tom was for the client and server\n> to simply count messages on each side, and to modify the CancelRequest\n> message for the client to integrate the requested sequence number to be\n> cancelled.\n>\n> The CancelRequest format change can probably happen in a non-breaking way,\n> e.g. by defining a new cancel request code (the 1234/5678 at the beginning\n> of the message). The client could know that the server is able to accept\n> the new CancelRequest format via a minor protocol bump, or some other means.\n>\n> There was also discussion of allowing \"cancel up to\" semantics, where all\n> messages up to the provided sequence number are cancelled. While this could\n> be useful, it's important to allow setting a lower bound as well. A new\n> experimental mode in the Npgsql driver \"multiplexes\" queries from unrelated\n> threads into the same physical connection, possibly pipelining two\n> unrelated queries go into a single outgoing TCP packet. In this case, the\n> second query producer may issue a cancellation, and unintentionally cancel\n> the first query which is still in progress. So ideally the CancelRequest\n> would contain both a lower and an upper bound (with the lower being\n> optional).\n>\n> Since cancellation requests may arrive before their target queries do, the\n> backend should be able to track unfulfilled requests and apply them as soon\n> as the target query is received.\n>\n> Finally, there was also an orthogonal discussion about widening the cancel\n> key (currently 32-bit). 
However, that seems like it would be a protocol\n> breaking change, so possibly better treated separately.\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CADT4RqDh1CEgz7QgKwYSLT9TMCk7O%3DncauUaSQKVt_nPNTE9wQ%40mail.gmail.com#00832c72f4b93d57d6b0ac59de8eca85\n>\n\n\nI think this is going to become more relevant as libraries push the\nprotocol. Currently we have a number of implementations of pipelining where\nthis would be useful.\n\nIn general I think we need to have a way to cancel a specific query.\n\nRegards,\n\nDave\n\nOn Wed, 4 Nov 2020 at 10:50, Shay Rojansky <roji@roji.org> wrote:Hi all.Back in 2016 I started a thread about making cancellations safer[1], I'd like to try to pick this up again. Here a summary of the previous conversation:The main ask here is to allow clients to specify which command to cancel, to avoid various race conditions where the wrong command is accidentally cancelled. The implementation proposal by Tom was for the client and server to simply count messages on each side, and to modify the CancelRequest message for the client to integrate the requested sequence number to be cancelled.The CancelRequest format change can probably happen in a non-breaking way, e.g. by defining a new cancel request code (the 1234/5678 at the beginning of the message). The client could know that the server is able to accept the new CancelRequest format via a minor protocol bump, or some other means.There was also discussion of allowing \"cancel up to\" semantics, where all messages up to the provided sequence number are cancelled. While this could be useful, it's important to allow setting a lower bound as well. A new experimental mode in the Npgsql driver \"multiplexes\" queries from unrelated threads into the same physical connection, possibly pipelining two unrelated queries go into a single outgoing TCP packet. In this case, the second query producer may issue a cancellation, and unintentionally cancel the first query which is still in progress. 
So ideally the CancelRequest would contain both a lower and an upper bound (with the lower being optional).Since cancellation requests may arrive before their target queries do, the backend should be able to track unfulfilled requests and apply them as soon as the target query is received.Finally, there was also an orthogonal discussion about widening the cancel key (currently 32-bit). However, that seems like it would be a protocol breaking change, so possibly better treated separately.[1] https://www.postgresql.org/message-id/flat/CADT4RqDh1CEgz7QgKwYSLT9TMCk7O%3DncauUaSQKVt_nPNTE9wQ%40mail.gmail.com#00832c72f4b93d57d6b0ac59de8eca85I think this is going to become more relevant as libraries push the protocol. Currently we have a number of implementations of pipelining where this would be useful.In general I think we need to have a way to cancel a specific query.Regards,Dave",
"msg_date": "Mon, 9 Nov 2020 08:56:42 -0500",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: Making cancellations safe"
}
] |
[
{
"msg_contents": "crake is showing xversion upgrade failures since 9e38c2bb50:\n\npg_restore: error: could not execute query: ERROR: function array_cat(anyarray, anyarray) does not exist\nCommand was: CREATE AGGREGATE \"public\".\"array_cat_accum\"(\"anyarray\") (\n SFUNC = \"array_cat\",\n STYPE = \"anyarray\",\n INITCOND = '{}'\n);\n\nAs was discussed in the thread leading up to that commit, modifying the\nsignature of array_cat and friends could break user-defined operators\nand aggregates based on those functions. It seems to me that the\nusability gain from this change is worth that cost, but it is causing\nan issue for xversion tests.\n\nI think the most plausible response is to add this aggregate to the filter\nlogic that already exists in the xversion tests. Perhaps we could\nalternatively change this test case so that it relies on some other\npolymorphic function, but I'm not quite sure what a good candidate\nwould be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Nov 2020 19:43:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "array_cat anycompatible change is breaking xversion upgrade tests"
},
{
"msg_contents": "I wrote:\n> I think the most plausible response is to add this aggregate to the filter\n> logic that already exists in the xversion tests. Perhaps we could\n> alternatively change this test case so that it relies on some other\n> polymorphic function, but I'm not quite sure what a good candidate\n> would be.\n\nAfter looking at the commit that added array_cat_accum() (65d9aedb1),\nI decided that it's fine to replace this aggregate with one using another\nanyarray function, such as array_larger(). The point of that test is just\nto show that the array argument can be array-of-record, so we don't need\nthe operation to be array_cat() specifically. So I propose the attached\npatches to un-break the xversion tests.\n\nI'll hold off pushing this till after this week's wraps, though.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 08 Nov 2020 18:47:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests"
},
{
"msg_contents": "On Wed, Nov 04, 2020 at 07:43:51PM -0500, Tom Lane wrote:\n> crake is showing xversion upgrade failures since 9e38c2bb50:\n> \n> pg_restore: error: could not execute query: ERROR: function array_cat(anyarray, anyarray) does not exist\n> Command was: CREATE AGGREGATE \"public\".\"array_cat_accum\"(\"anyarray\") (\n> SFUNC = \"array_cat\",\n> STYPE = \"anyarray\",\n> INITCOND = '{}'\n> );\n> \n> As was discussed in the thread leading up to that commit, modifying the\n> signature of array_cat and friends could break user-defined operators\n> and aggregates based on those functions. It seems to me that the\n> usability gain from this change is worth that cost, but it is causing\n> an issue for xversion tests.\n\nI upgraded an internal DB to v14b1, but it took several tries, since there were\nerrors during pg_restore regarding aggregates using polymorphic functions\nanyarray, which are now anycompatiblearray.\n\nI succeeded in upgrading after dropping our aggregates.\n\nI have a backup from the v13 DB, and it restores okay on v13.\nHowever it fails with the same errors when restoring into v14.\n\nI think this was all known, so I'm just adding a data point.\n\nIt's be easy enough to replace our \"anyarrays\" with \"anycompatiblearrays\".\n\nBut I think this should be called out as an incompatible change in the release\nnotes.\n\npg_restore: error: could not execute query: ERROR: function array_append(anyarray, anyelement) does not exist\nCommand was: CREATE AGGREGATE public.array_accum(anyelement) (\n SFUNC = array_append,\n STYPE = anyarray,\n INITCOND = '{}',\n PARALLEL = safe\n);\n\npg_restore: error: could not execute query: ERROR: function array_append(anyarray, anyelement) does not exist\nCommand was: CREATE AGGREGATE public.pdp_context_count(anyelement) (\n SFUNC = array_append,\n STYPE = anyarray,\n INITCOND = '{}',\n FINALFUNC = public._final_pdp_context_count,\n PARALLEL = safe\n);\n\npg_restore: error: could not execute query: ERROR: function 
array_append(anyarray, anyelement) does not exist\nCommand was: CREATE AGGREGATE public.ts_mode(anyelement) (\n SFUNC = array_append,\n STYPE = anyarray,\n INITCOND = '{}',\n FINALFUNC = public._final_mode\n);\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 20 May 2021 18:29:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Nov 04, 2020 at 07:43:51PM -0500, Tom Lane wrote:\n>> As was discussed in the thread leading up to that commit, modifying the\n>> signature of array_cat and friends could break user-defined operators\n>> and aggregates based on those functions. It seems to me that the\n>> usability gain from this change is worth that cost, but it is causing\n>> an issue for xversion tests.\n\n> But I think this should be called out as an incompatible change in the release\n> notes.\n\nIf it was not, yes it should be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 May 2021 19:35:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests"
},
{
"msg_contents": "On Thu, May 20, 2021 at 07:35:10PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Wed, Nov 04, 2020 at 07:43:51PM -0500, Tom Lane wrote:\n> >> As was discussed in the thread leading up to that commit, modifying the\n> >> signature of array_cat and friends could break user-defined operators\n> >> and aggregates based on those functions. It seems to me that the\n> >> usability gain from this change is worth that cost, but it is causing\n> >> an issue for xversion tests.\n> \n> > But I think this should be called out as an incompatible change in the release\n> > notes.\n> \n> If it was not, yes it should be.\n\n@Bruce, I propose:\n\nSome system functions are changed to accept \"anycompatiblearray\" arguments.\nThis causes failures when restoring a database backup or running pg_restore if\nthere were aggregate functions defined using those functions with their\noriginal argument types.\n\nSuch aggregate functions should be dropped before upgrade/restore and then\nre-created afterwards using the \"anycompatible\" functions. The affected\nfunctions are: array_append, array_prepend, array_cat, array_position,\narray_positions, array_remove, array_replace, and width_bucket.\n\n(Re-defining the function before upgrading is possible when upgrading from v13,\nonly).\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 25 May 2021 11:14:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests"
},
{
"msg_contents": "@Bruce: Would you add something about this to the release notes before beta2?\n\nI added it as an OpenItem.\n\nOn Tue, May 25, 2021 at 11:14:58AM -0500, Justin Pryzby wrote:\n> On Thu, May 20, 2021 at 07:35:10PM -0400, Tom Lane wrote:\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > On Wed, Nov 04, 2020 at 07:43:51PM -0500, Tom Lane wrote:\n> > >> As was discussed in the thread leading up to that commit, modifying the\n> > >> signature of array_cat and friends could break user-defined operators\n> > >> and aggregates based on those functions. It seems to me that the\n> > >> usability gain from this change is worth that cost, but it is causing\n> > >> an issue for xversion tests.\n> > \n> > > But I think this should be called out as an incompatible change in the release\n> > > notes.\n> > \n> > If it was not, yes it should be.\n> \n> @Bruce, I propose:\n> \n> Some system functions are changed to accept \"anycompatiblearray\" arguments.\n> This causes failures when restoring a database backup or running pg_restore if\n> there were aggregate functions defined using those functions with their\n> original argument types.\n> \n> Such aggregate functions should be dropped before upgrade/restore and then\n> re-created afterwards using the \"anycompatible\" functions. The affected\n> functions are: array_append, array_prepend, array_cat, array_position,\n> array_positions, array_remove, array_replace, and width_bucket.\n\n\n",
"msg_date": "Tue, 8 Jun 2021 17:56:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade\n tests (v14 release notes)"
},
{
"msg_contents": "On Tue, Jun 8, 2021 at 05:56:18PM -0500, Justin Pryzby wrote:\n> @Bruce: Would you add something about this to the release notes before beta2?\n> \n> I added it as an OpenItem.\n\nOK, see below.\n\n> On Tue, May 25, 2021 at 11:14:58AM -0500, Justin Pryzby wrote:\n> > On Thu, May 20, 2021 at 07:35:10PM -0400, Tom Lane wrote:\n> > > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > > On Wed, Nov 04, 2020 at 07:43:51PM -0500, Tom Lane wrote:\n> > > >> As was discussed in the thread leading up to that commit, modifying the\n> > > >> signature of array_cat and friends could break user-defined operators\n> > > >> and aggregates based on those functions. It seems to me that the\n> > > >> usability gain from this change is worth that cost, but it is causing\n> > > >> an issue for xversion tests.\n\nUh, this is _using_ these functions in aggregates, or changing the\nsystem functions' argument types, right? I didn't think we supported\ndump/restore of modified system tables.\n\n> > > \n> > > > But I think this should be called out as an incompatible change in the release\n> > > > notes.\n> > > \n> > > If it was not, yes it should be.\n> > \n> > @Bruce, I propose:\n> > \n> > Some system functions are changed to accept \"anycompatiblearray\" arguments.\n> > This causes failures when restoring a database backup or running pg_restore if\n> > there were aggregate functions defined using those functions with their\n> > original argument types.\n> > \n> > Such aggregate functions should be dropped before upgrade/restore and then\n> > re-created afterwards using the \"anycompatible\" functions. 
The affected\n> > functions are: array_append, array_prepend, array_cat, array_position,\n> > array_positions, array_remove, array_replace, and width_bucket.\n\nI read the entire thread and I see:\n\n\tpg_restore: error: could not execute query: ERROR: function\n\t\tarray_append(anyarray, anyelement) does not exist\n\tCommand was: CREATE AGGREGATE public.array_accum(anyelement) (\n\t SFUNC = array_append,\n\t STYPE = anyarray,\n\t INITCOND = '{}',\n\t PARALLEL = safe\n\t);\n\nThis involves creating an aggreate that _uses_ these array functions as\ntheir state transition function?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 8 Jun 2021 20:02:46 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade\n tests (v14 release notes)"
},
{
"msg_contents": "On Tue, Jun 08, 2021 at 08:02:46PM -0400, Bruce Momjian wrote:\n> This involves creating an aggreate that _uses_ these array functions as\n> their state transition function?\n\nYes\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 8 Jun 2021 19:10:00 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade\n tests (v14 release notes)"
},
{
"msg_contents": "On Tue, Jun 8, 2021 at 07:10:00PM -0500, Justin Pryzby wrote:\n> On Tue, Jun 08, 2021 at 08:02:46PM -0400, Bruce Momjian wrote:\n> > This involves creating an aggreate that _uses_ these array functions as\n> > their state transition function?\n> \n> Yes\n\nOK, I came up with the attached patch. This is one of the few cases\nwhere the incompatibility is not clearly related to the feature, so I\nleft the existing item alone and just created a new one with the same\ncommit message in the incompatibilities section.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 11 Jun 2021 20:32:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade\n tests (v14 release notes)"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> OK, I came up with the attached patch. This is one of the few cases\n> where the incompatibility is not clearly related to the feature, so I\n> left the existing item alone and just created a new one with the same\n> commit message in the incompatibilities section.\n\nI think phrasing this as though user-defined aggregates are the only\npain point is incorrect. For example, a custom operator based on\narray_cat would have issues too.\n\nI suggest a treatment more like\n\n Some built-in array-related functions changed signature (Tom Lane)\n\n Certain functions were redefined to take anycompatiblearray instead\n of anyarray. While this does not affect ordinary calls, it does\n affect code that directly names these functions along with their\n argument types; for example, custom aggregates and operators based\n on these functions. The affected functions are [ blah, blah ]\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Jun 2021 20:56:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests\n (v14 release notes)"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 08:56:19PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > OK, I came up with the attached patch. This is one of the few cases\n> > where the incompatibility is not clearly related to the feature, so I\n> > left the existing item alone and just created a new one with the same\n> > commit message in the incompatibilities section.\n> \n> I think phrasing this as though user-defined aggregates are the only\n> pain point is incorrect. For example, a custom operator based on\n> array_cat would have issues too.\n> \n> I suggest a treatment more like\n> \n> Some built-in array-related functions changed signature (Tom Lane)\n> \n> Certain functions were redefined to take anycompatiblearray instead\n> of anyarray. While this does not affect ordinary calls, it does\n> affect code that directly names these functions along with their\n> argument types; for example, custom aggregates and operators based\n> on these functions. The affected functions are [ blah, blah ]\n\nOK, I used some of your ideas and tried for something more general; \npatch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 11 Jun 2021 21:12:55 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade\n tests (v14 release notes)"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> OK, I used some of your ideas and tried for something more general; \n> patch attached.\n\nI think it's a good idea to mention custom aggregates and operators\nspecifically, as otherwise people will look at this and have little\nidea what you're on about. I just want wording like \"such as custom\naggregates and operators\", in case somebody has done some other creative\nthing that breaks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Jun 2021 21:17:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests\n (v14 release notes)"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 09:12:55PM -0400, Bruce Momjian wrote:\n> OK, I used some of your ideas and tried for something more general; \n> patch attached.\n\nThis is good.\n\nBut I wonder if \"dropped before upgrading\" is too specific to pg_upgrade?\n\nDropping the aggregate before starting a backup to be restored into a new\nversion seems like a bad way to do it. More likely, I would restore whatever\nbackup I had, get errors, and then eventually recreate the aggregates.\n\n> + <para>\n> + User-defined objects that reference some built-in array functions\n> + along with their argument types must be recreated (Tom Lane)\n> + </para>\n> +\n> + <para>\n> + Specifically, <link\n> + linkend=\"functions-array\"><function>array_append()</function></link>,\n...\n> + used to take <type>anyarray</type> arguments but now take\n> + <type>anycompatiblearray</type>. Therefore, user-defined objects\n> + that reference the old array function signature must be dropped\n> + before upgrading and recreated once the upgrade completes.\n> + </para>\n> + </listitem>\n\n\n",
"msg_date": "Fri, 11 Jun 2021 20:19:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade\n tests (v14 release notes)"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 09:17:46PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > OK, I used some of your ideas and tried for something more general; \n> > patch attached.\n> \n> I think it's a good idea to mention custom aggregates and operators\n> specifically, as otherwise people will look at this and have little\n> idea what you're on about. I just want wording like \"such as custom\n> aggregates and operators\", in case somebody has done some other creative\n> thing that breaks.\n\nAgreed, updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Fri, 11 Jun 2021 21:40:07 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade\n tests (v14 release notes)"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 08:19:48PM -0500, Justin Pryzby wrote:\n> On Fri, Jun 11, 2021 at 09:12:55PM -0400, Bruce Momjian wrote:\n> > OK, I used some of your ideas and tried for something more general; \n> > patch attached.\n> \n> This is good.\n> \n> But I wonder if \"dropped before upgrading\" is too specific to pg_upgrade?\n> \n> Dropping the aggregate before starting a backup to be restored into a new\n> version seems like a bad way to do it. More likely, I would restore whatever\n> backup I had, get errors, and then eventually recreate the aggregates.\n\nI am actually unclear on that. Do people really restore a dump and just\nignore errors, or somehow track them and go back and try to fix them. \nIsn't there a cascading effect if other things depend on it? How do\nthey get the object definitions from a huge dump file? What process\nshould we recommend? I have just never seen good documentation on how\nthis handled.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 11 Jun 2021 21:42:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade\n tests (v14 release notes)"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 09:40:07PM -0400, Bruce Momjian wrote:\n> On Fri, Jun 11, 2021 at 09:17:46PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > OK, I used some of your ideas and tried for something more general; \n> > > patch attached.\n> > \n> > I think it's a good idea to mention custom aggregates and operators\n> > specifically, as otherwise people will look at this and have little\n> > idea what you're on about. I just want wording like \"such as custom\n> > aggregates and operators\", in case somebody has done some other creative\n> > thing that breaks.\n> \n> Agreed, updated patch attached.\n\nPatch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 14 Jun 2021 12:49:19 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade\n tests (v14 release notes)"
},
{
"msg_contents": "Hi everyone!\n\nSorry for bumping old thread.\n\n> On 25 May 2021, at 21:14, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> Such aggregate functions should be dropped before upgrade/restore and then\n> re-created afterwards using the \"anycompatible\" functions. The affected\n> functions are: array_append, array_prepend, array_cat, array_position,\n> array_positions, array_remove, array_replace, and width_bucket.\n\nWe've just stumbled upon the problem in our service. Would it be backpatchable to add this check to pg_upgrade?\n\nI want to check something like\n\nselect * from pg_aggregate join pg_proc on (aggtransfn = pg_proc.oid)\nwhere proname in ('array_append', 'array_prepend','array_cat', 'array_position','array_positions', 'array_remove', 'array_replace', 'width_bucket') ;\n\nselect * from pg_operator join pg_proc on (oprcode = pg_proc.oid)\nwhere proname in ('array_append', 'array_prepend','array_cat', 'array_position','array_positions', 'array_remove', 'array_replace', 'width_bucket') and pg_operator.oid >= 16384;\n\nif pg_upgrade is executed with --check option.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 24 Jun 2022 16:09:46 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests"
},
{
"msg_contents": "> On 24 Jun 2022, at 16:09, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Would it be backpatchable to add this check to pg_upgrade?\n\nJust to be clear of what exactly I propose I drafted a patch. PFA.\nI've tested it with PG13 and\nCREATE AGGREGATE public.array_accum(anyelement) (\n SFUNC = array_append,\n STYPE = anyarray,\n INITCOND = '{}',\n PARALLEL = safe\n);\nCREATE OPERATOR ##% (leftarg=anyarray, rightarg=anyelement,function=array_append);\n\nOperator output currently look a bit strage, but does it's job. pg_upgrade_output.d/operators.txt:\nIn database: postgres\n (oid=16385) ##% in public\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 24 Jun 2022 17:46:19 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 04:09:46PM +0500, Andrey Borodin wrote:\n> Hi everyone!\n> \n> Sorry for bumping old thread.\n\nPlease find this newer thread+patch here ;)\nhttps://www.postgresql.org/message-id/20220614230949.GX29853@telsasoft.com\n\n> On 25 May 2021, at 21:14, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > \n> > Such aggregate functions should be dropped before upgrade/restore and then\n> > re-created afterwards using the \"anycompatible\" functions. The affected\n> > functions are: array_append, array_prepend, array_cat, array_position,\n> > array_positions, array_remove, array_replace, and width_bucket.\n> \n> We've just stumbled upon the problem in our service. Would it be backpatchable to add this check to pg_upgrade?\n\nI guess you mean to backpatch to v14 for people upgrading from v13.\n\nI realized that my latest patch would break upgrades from old servers, which do\nnot have array_position/s nor width_bucket, so ::reprocedure would fail. Maybe\nAndrey's way is better (checking proname rather than its OID).\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 24 Jun 2022 08:30:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I realized that my latest patch would break upgrades from old servers, which do\n> not have array_position/s nor width_bucket, so ::reprocedure would fail. Maybe\n> Andrey's way is better (checking proname rather than its OID).\n\nproname is dangerous, because there's nothing stopping users from\nadding more functions with the same name.\n\nJust use a server-version-dependent list of regprocedure OIDs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Jun 2022 10:06:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests"
},
{
"msg_contents": "> On 24 Jun 2022, at 18:30, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Fri, Jun 24, 2022 at 04:09:46PM +0500, Andrey Borodin wrote:\n>> Hi everyone!\n>> \n>> Sorry for bumping old thread.\n> \n> Please find this newer thread+patch here ;)\n> https://www.postgresql.org/message-id/20220614230949.GX29853@telsasoft.com\n\nOops. Let's discard my patch and I'll review yours. Many thanks for fixing this stuff :)\n\n> On 24 Jun 2022, at 19:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> I realized that my latest patch would break upgrades from old servers, which do\n>> not have array_position/s nor width_bucket, so ::reprocedure would fail. Maybe\n>> Andrey's way is better (checking proname rather than its OID).\n> \n> proname is dangerous, because there's nothing stopping users from\n> adding more functions with the same name.\n> \n> Just use a server-version-dependent list of regprocedure OIDs.\n\nServer-version-dependent list of oids seems more error prone. I think we can just check proname by the list and oid < 16384.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 24 Jun 2022 23:28:31 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: array_cat anycompatible change is breaking xversion upgrade tests"
}
] |
[
{
"msg_contents": "A few client tools duplicate error messages already provided by libpq, \nsuch as\n\npg_rewind: fatal: could not connect to server: could not connect to \nserver: No such file or directory\n\npg_basebackup: error: could not connect to server: could not connect to \nserver: No such file or directory\n\npsql: error: could not connect to server: could not connect to server: \nNo such file or directory\n\nThe psql case is actually a regression introduced in PG12, but the other \ntwo appear to be ancient.\n\nOther client tools provide a different error message so in aggregate it \nlooks like this:\n\ncreatedb: error: could not connect to database template1: could not \nconnect to server: No such file or directory\n\nThe attached patch removes the redundant message from the client tools. \nI suppose it's a bit dubious because there is no guarantee what the \nlevel of detail the message supplied by libpq has. But I think these \nfew cases are not particularly hard to keep in sync.",
"msg_date": "Thu, 5 Nov 2020 13:27:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "redundant error messages"
},
{
"msg_contents": "On Thu, 5 Nov 2020 at 09:27, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> A few client tools duplicate error messages already provided by libpq,\n> such as\n>\n> pg_rewind: fatal: could not connect to server: could not connect to\n> server: No such file or directory\n>\n> Good catch!\n\n\n> Other client tools provide a different error message so in aggregate it\n> looks like this:\n>\n> createdb: error: could not connect to database template1: could not\n> connect to server: No such file or directory\n>\n> Is the database name important for this message? You should inform which\ndatabase you want to connect for all client tools except pg_dumpall. Hence,\nyou\nalready know which database has the connection problem. IMO the pg_dumpall\nmessage should inform the database name. My suggestion is:\n\n if (fail_on_error)\n {\n pg_log_error(\"database \\\"%s\\\": %s\",\n dbname, PQerrorMessage(conn));\n exit_nicely(1);\n }\n\nand remove the redundant 'could not connect to database %s' from\nscripts/common.c.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Thu, 5 Nov 2020 at 09:27, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:A few client tools duplicate error messages already provided by libpq, \nsuch as\n\npg_rewind: fatal: could not connect to server: could not connect to \nserver: No such file or directoryGood catch! \nOther client tools provide a different error message so in aggregate it \nlooks like this:\n\ncreatedb: error: could not connect to database template1: could not \nconnect to server: No such file or directory\nIs the database name important for this message? You should inform whichdatabase you want to connect for all client tools except pg_dumpall. Hence, you already know which database has the connection problem. IMO the pg_dumpallmessage should inform the database name. 
My suggestion is: if (fail_on_error) { pg_log_error(\"database \\\"%s\\\": %s\", dbname, PQerrorMessage(conn)); exit_nicely(1); }and remove the redundant 'could not connect to database %s' fromscripts/common.c.-- Euler Taveira http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 5 Nov 2020 10:34:06 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: redundant error messages"
},
{
"msg_contents": "On Thu, 5 Nov 2020 at 08:34, Euler Taveira <euler.taveira@2ndquadrant.com>\nwrote:\n\nIs the database name important for this message? You should inform which\n> database you want to connect for all client tools except pg_dumpall.\n> Hence, you\n> already know which database has the connection problem. IMO the pg_dumpall\n> message should inform the database name. My suggestion is:\n>\n\nIn principle, the client knows the database name. In practice, if it's\ncoming from PGDATABASE or via a service configuration, one may be confused\nabout the database; having the error message be explicit will avoid many\nproblems. I can easily imagine that \"unable to connect to database\" would\nbe mystifying, whereas \"unable to connect to database foo\" would elicit the\nresponse, \"wait, I'm trying to connect to what now?\" leading much more\nquickly to a resolution.\n\nOn Thu, 5 Nov 2020 at 08:34, Euler Taveira <euler.taveira@2ndquadrant.com> wrote:Is the database name important for this message? You should inform whichdatabase you want to connect for all client tools except pg_dumpall. Hence, you already know which database has the connection problem. IMO the pg_dumpallmessage should inform the database name. My suggestion is: In principle, the client knows the database name. In practice, if it's coming from PGDATABASE or via a service configuration, one may be confused about the database; having the error message be explicit will avoid many problems. I can easily imagine that \"unable to connect to database\" would be mystifying, whereas \"unable to connect to database foo\" would elicit the response, \"wait, I'm trying to connect to what now?\" leading much more quickly to a resolution.",
"msg_date": "Thu, 5 Nov 2020 09:49:32 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: redundant error messages"
},
{
"msg_contents": "On 2020-Nov-05, Isaac Morland wrote:\n\n> In principle, the client knows the database name. In practice, if it's\n> coming from PGDATABASE or via a service configuration, one may be confused\n> about the database; having the error message be explicit will avoid many\n> problems. I can easily imagine that \"unable to connect to database\" would\n> be mystifying, whereas \"unable to connect to database foo\" would elicit the\n> response, \"wait, I'm trying to connect to what now?\" leading much more\n> quickly to a resolution.\n\nAlso consider cases like running something via cron, where the person\nreading the error output does not necessarily know what command is being\nrun: it might be hidden inside a script. It's often very helpful to\nhave object names in error messages, even if for the normal usage it\nseems that the object being operated on is very obvious by just looking\nat the command.\n\n\n",
"msg_date": "Thu, 5 Nov 2020 12:53:56 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: redundant error messages"
},
{
"msg_contents": "On 2020-11-05 13:27, Peter Eisentraut wrote:\n> A few client tools duplicate error messages already provided by libpq,\n> such as\n> \n> pg_rewind: fatal: could not connect to server: could not connect to\n> server: No such file or directory\n> \n> pg_basebackup: error: could not connect to server: could not connect to\n> server: No such file or directory\n> \n> psql: error: could not connect to server: could not connect to server:\n> No such file or directory\n> \n> The psql case is actually a regression introduced in PG12, but the other\n> two appear to be ancient.\n\nI have committed fixes for these.\n\n\n",
"msg_date": "Sat, 7 Nov 2020 23:15:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: redundant error messages"
}
] |
[
{
"msg_contents": "Hi,\n\nThe patch that makes pg_stat_statements track the number of rows that\nREFRESH MATERIALIZED VIEW command processes was proposed at [1].\nWhen reviewing the patch, I just wondered why the completion tag output\nfor REFRESH MATERIALIZED VIEW doesn't include the row count. Is this\nan intentional behavior? I've not found the past discussion about this yet.\n\nOTOH, CREATE MATERIALIZED VIEW reports something like \"SELECT NNN\"\n(NNN is the row count) as the completion tag output. If WITH NO DATA is\nspecified, \"CREATE MATERIALIZED VIEW\" is reported instead. Is it better\nto make REFRESH MATERIALIZED VIEW do the similar thing? Or we should\nnot do that, for example, since changing the completion tag output may\nbreak the client app?\n\n[1] https://postgr.es/m/71f6bc72f8bbaa06e701f8bd2562c347@oss.nttdata.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 6 Nov 2020 01:20:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "REFRESH MATERIALIZED VIEW and completion tag output"
},
{
"msg_contents": "\n\n> On Nov 5, 2020, at 8:20 AM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> The patch that makes pg_stat_statements track the number of rows that\n> REFRESH MATERIALIZED VIEW command processes was proposed at [1].\n> When reviewing the patch, I just wondered why the completion tag output\n> for REFRESH MATERIALIZED VIEW doesn't include the row count. Is this\n> an intentional behavior? I've not found the past discussion about this yet.\n\nOf 191 command tags, only 7 return the number of rows. See src/include/tcop/cmdtaglist.h. REFRESH MATERIALIZED VIEW is similar to CREATE MATERIALIZED VIEW and also CREATE TABLE AS, which also do not return the row count. I think this behavior is historical, and preserved for compatibility.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 5 Nov 2020 08:56:43 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REFRESH MATERIALIZED VIEW and completion tag output"
},
{
"msg_contents": "\n\nOn 2020/11/06 1:56, Mark Dilger wrote:\n> \n> \n>> On Nov 5, 2020, at 8:20 AM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> The patch that makes pg_stat_statements track the number of rows that\n>> REFRESH MATERIALIZED VIEW command processes was proposed at [1].\n>> When reviewing the patch, I just wondered why the completion tag output\n>> for REFRESH MATERIALIZED VIEW doesn't include the row count. Is this\n>> an intentional behavior? I've not found the past discussion about this yet.\n> \n> Of 191 command tags, only 7 return the number of rows. See src/include/tcop/cmdtaglist.h. REFRESH MATERIALIZED VIEW is similar to CREATE MATERIALIZED VIEW and also CREATE TABLE AS, which also do not return the row count.\n\nYes, so I was wondering if REFRESH MATERIALIZED VIEW also should use\nCMDTAG_SELECT like CREATE MATERIALIZED VIEW does.\n\n> I think this behavior is historical, and preserved for compatibility.\n\nMaybe true. The patch that I mentioned upthread tries to change\nREFRESH MATERIALIZED VIEW so that it register the rowcount into\nits query completion so that pg_stat_statements can track\nthe number of processed rows. But display_rowcount for\nCMDTAG_REFRESH_MATERIALIZED_VIEW still should be false\nin cmdtaglist.h to avoid the change of the completion tag output.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 6 Nov 2020 09:45:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: REFRESH MATERIALIZED VIEW and completion tag output"
},
{
"msg_contents": "\n\n> On Nov 5, 2020, at 4:45 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> \n> \n> On 2020/11/06 1:56, Mark Dilger wrote:\n>>> On Nov 5, 2020, at 8:20 AM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> \n>>> The patch that makes pg_stat_statements track the number of rows that\n>>> REFRESH MATERIALIZED VIEW command processes was proposed at [1].\n>>> When reviewing the patch, I just wondered why the completion tag output\n>>> for REFRESH MATERIALIZED VIEW doesn't include the row count. Is this\n>>> an intentional behavior? I've not found the past discussion about this yet.\n>> Of 191 command tags, only 7 return the number of rows. See src/include/tcop/cmdtaglist.h. REFRESH MATERIALIZED VIEW is similar to CREATE MATERIALIZED VIEW and also CREATE TABLE AS, which also do not return the row count.\n> \n> Yes, so I was wondering if REFRESH MATERIALIZED VIEW also should use\n> CMDTAG_SELECT like CREATE MATERIALIZED VIEW does.\n\nFor both CREATE MATERIALIZED VIEW and CREATE TABLE AS, ExecCreateTableAs() does:\n\n SetQueryCompletion(qc, CMDTAG_SELECT, queryDesc->estate->es_processed);\n\nbut ExecRefreshMatView() does nothing with it's QueryCompletion parameter. I don't know why this case should behave differently than the other two.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 5 Nov 2020 21:07:27 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REFRESH MATERIALIZED VIEW and completion tag output"
}
] |
[
{
"msg_contents": "Seems to be missing.\n\nThe 2nd patch does some more cleanup - Before, a failed syscache lookup would\nERROR, but I don't think that's supposed to happen. get_rel_relispartition()\nwould instead return false, and we won't call get_partition_parent().\n\ncommit a8ad949f22b8dd7b23049b0b0704e5be9233e319\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu Nov 5 12:06:49 2020 -0600\n\n list_free() in index_get_partition()\n \n which was added at: a6da0047158b8a227f883aeed19eb7fcfbef11fb\n\ndiff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c\nindex 239ac017fa..4dfac39adf 100644\n--- a/src/backend/catalog/partition.c\n+++ b/src/backend/catalog/partition.c\n@@ -153,44 +153,45 @@ Oid\n index_get_partition(Relation partition, Oid indexId)\n {\n \tList\t *idxlist = RelationGetIndexList(partition);\n \tListCell *l;\n \n \tforeach(l, idxlist)\n \t{\n \t\tOid\t\t\tpartIdx = lfirst_oid(l);\n \t\tHeapTuple\ttup;\n \t\tForm_pg_class classForm;\n \t\tbool\t\tispartition;\n \n \t\ttup = SearchSysCache1(RELOID, ObjectIdGetDatum(partIdx));\n \t\tif (!HeapTupleIsValid(tup))\n \t\t\telog(ERROR, \"cache lookup failed for relation %u\", partIdx);\n \t\tclassForm = (Form_pg_class) GETSTRUCT(tup);\n \t\tispartition = classForm->relispartition;\n \t\tReleaseSysCache(tup);\n \t\tif (!ispartition)\n \t\t\tcontinue;\n-\t\tif (get_partition_parent(lfirst_oid(l)) == indexId)\n+\t\tif (get_partition_parent(partIdx) == indexId)\n \t\t{\n \t\t\tlist_free(idxlist);\n \t\t\treturn partIdx;\n \t\t}\n \t}\n \n+\tlist_free(idxlist);\n \treturn InvalidOid;\n }\n\ncommit 0a01cb7561d6ec74aa5829040bd1478e7b113d89\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu Nov 5 12:14:24 2020 -0600\n\n index_get_partition(): use lsyscache ??\n \n The old code failed when !HeapTupleIsValid(), the new code will return false -\n is that ok ?? 
Two of the existing callers error anyway.\n\ndiff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c\nindex 4dfac39adf..3d78bc0872 100644\n--- a/src/backend/catalog/partition.c\n+++ b/src/backend/catalog/partition.c\n@@ -28,6 +28,7 @@\n #include \"partitioning/partbounds.h\"\n #include \"rewrite/rewriteManip.h\"\n #include \"utils/fmgroids.h\"\n+#include \"utils/lsyscache.h\"\n #include \"utils/partcache.h\"\n #include \"utils/rel.h\"\n #include \"utils/syscache.h\"\n@@ -158,17 +159,8 @@ index_get_partition(Relation partition, Oid indexId)\n \tforeach(l, idxlist)\n \t{\n \t\tOid\t\t\tpartIdx = lfirst_oid(l);\n-\t\tHeapTuple\ttup;\n-\t\tForm_pg_class classForm;\n-\t\tbool\t\tispartition;\n-\n-\t\ttup = SearchSysCache1(RELOID, ObjectIdGetDatum(partIdx));\n-\t\tif (!HeapTupleIsValid(tup))\n-\t\t\telog(ERROR, \"cache lookup failed for relation %u\", partIdx);\n-\t\tclassForm = (Form_pg_class) GETSTRUCT(tup);\n-\t\tispartition = classForm->relispartition;\n-\t\tReleaseSysCache(tup);\n-\t\tif (!ispartition)\n+\n+\t\tif (!get_rel_relispartition(partIdx))\n \t\t\tcontinue;\n \t\tif (get_partition_parent(partIdx) == indexId)\n \t\t{",
"msg_date": "Thu, 5 Nov 2020 14:36:06 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "list_free() in index_get_partition()"
},
{
"msg_contents": "On Thu, Nov 05, 2020 at 02:36:06PM -0600, Justin Pryzby wrote:\n> Seems to be missing.\n\nAny code paths calling index_get_partition() is in tablecmds.c,\ninvolving commands that are not that hot with such lookups, but that\ncould be an issue if this gets called by some out-of-core code if\nmemory context cleanup is sloppy. So I agree to fix this one and\nbackpatch down to 12. I'd like to apply your fix, except if Alvaro\nthinks differently as that's his commit originally. So let's wait a\nbit first.\n\n> The 2nd patch does some more cleanup - Before, a failed syscache lookup would\n> ERROR, but I don't think that's supposed to happen. get_rel_relispartition()\n> would instead return false, and we won't call get_partition_parent().\n\nThe cache lookup error is here as a safeguard to remind that any code\npath calling index_get_partition() needs a proper lock on the\nrelations involved before doing such lookups, so that's a defensive\nmove. By switching the code as you do in 0002, you could mask such\nlogical problems as different errors could get raised. So I don't\nthink that it is a good idea.\n--\nMichael",
"msg_date": "Fri, 6 Nov 2020 09:32:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: list_free() in index_get_partition()"
},
{
"msg_contents": "On 2020-Nov-06, Michael Paquier wrote:\n\n> On Thu, Nov 05, 2020 at 02:36:06PM -0600, Justin Pryzby wrote:\n> > Seems to be missing.\n> \n> Any code paths calling index_get_partition() is in tablecmds.c,\n> involving commands that are not that hot with such lookups, but that\n> could be an issue if this gets called by some out-of-core code if\n> memory context cleanup is sloppy. So I agree to fix this one and\n> backpatch down to 12. I'd like to apply your fix, except if Alvaro\n> thinks differently as that's his commit originally. So let's wait a\n> bit first.\n\nAgreed; I'll get this pushed now. Thanks for reporting.\n\n> > The 2nd patch does some more cleanup - Before, a failed syscache lookup would\n> > ERROR, but I don't think that's supposed to happen. get_rel_relispartition()\n> > would instead return false, and we won't call get_partition_parent().\n> \n> The cache lookup error is here as a safeguard to remind that any code\n> path calling index_get_partition() needs a proper lock on the\n> relations involved before doing such lookups, so that's a defensive\n> move. By switching the code as you do in 0002, you could mask such\n> logical problems as different errors could get raised. So I don't\n> think that it is a good idea.\n\nYeah, I'm not so sure either. I have memories of purposefully not using\nget_rel_relispartition there, but I don't remember exactly why and\nit's not in the archives.\n\n\n",
"msg_date": "Fri, 6 Nov 2020 20:40:05 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: list_free() in index_get_partition()"
}
] |
[
{
"msg_contents": "subscribing\nsubscribing",
"msg_date": "Fri, 06 Nov 2020 10:26:29 +0800",
"msg_from": "\"meission\" <meission@aliyun.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?c3Vic2NyaWJpbmc=?="
}
] |
[
{
"msg_contents": "Hi all,\n\nPlease find attached a patch set to sanitize the use of MD5 we have in\nthe core tree.\n\nAs of now, there are two duplicated implementations of MD5, one in\ncontrib/pgcrypto/ used as a fallback when not compiling with OpenSSL,\nand one in src/common/ used by the backend when compiling with *or*\nwithout OpenSSL. This is bad on several aspects:\n- There is no need to have the same implementation twice, obviously.\n- When compiling with OpenSSL, we use an incorrect implementation,\ncausing Postgres to cheat if FIPS is enabled because MD5 should not be\nauthorized. Making use of what OpenSSL provides with EVP allows us to\nrely on OpenSSL to control such restrictions. So we authorize MD5\nauthentications while these should be blocked, making Postgres not\ncompletely compliant with STIG and its kind.\n\nThe attached patch set does a bit of rework to make the Postgres code\nmore consistent with OpenSSL, similarly to the work I did for all the\nSHA2 implementations with EVP in [1]:\n- 0001 is something stolen from the SHA2 set, adding to resowner.c\ncontrol of EVP contexts, so as it is possible to clean up anything\nallocated by OpenSSL.\n- 0002 is the central piece, that moves the duplicated\nimplementation. src/common/ and pgcrypto/ use the same code, but I\nhave reused pgcrypto as it was already doing the init/update/final\nsplit similarly to PostgreSQL. New APIs are designed to control MD5\ncontexts, similarly to the work done for SHA2. Upon using this patch,\nnote that pgcrypto+OpenSSL uses our in-core implementation instead of\nOpenSSL's one, but that's fixed in 0003. 
We have a set of three\nconvenience routines used to generate MD5-hashed passwords, that I\nhave moved to a new file in src/common/md5_common.c, aimed at being\nshared between all the implementations.\n- 0003 adds the MD5 implementation based on OpenSSL's EVP, ending the\nwork.\n\nThis set of patches is independent on the SHA2 refactoring, even if it\nshares a part with the SHA2 refactoring in its design. Note that 0001\nand 0002 don't depend on each other, but 0003 depends on both.\n\nThanks,\n\n[1]: https://www.postgresql.org/message-id/20200924025314.GE7405@paquier.xyz\n--\nMichael",
"msg_date": "Fri, 6 Nov 2020 16:34:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Refactor MD5 implementations and switch to EVP for OpenSSL"
},
{
"msg_contents": "On Fri, Nov 06, 2020 at 04:34:34PM +0900, Michael Paquier wrote:\n> The attached patch set does a bit of rework to make the Postgres code\n> more consistent with OpenSSL, similarly to the work I did for all the\n> SHA2 implementations with EVP in [1]:\n> - 0001 is something stolen from the SHA2 set, adding to resowner.c\n> control of EVP contexts, so as it is possible to clean up anything\n> allocated by OpenSSL.\n> - 0002 is the central piece, that moves the duplicated\n> implementation. src/common/ and pgcrypto/ use the same code, but I\n> have reused pgcrypto as it was already doing the init/update/final\n> split similarly to PostgreSQL. New APIs are designed to control MD5\n> contexts, similarly to the work done for SHA2. Upon using this patch,\n> note that pgcrypto+OpenSSL uses our in-core implementation instead of\n> OpenSSL's one, but that's fixed in 0003. We have a set of three\n> convenience routines used to generate MD5-hashed passwords, that I\n> have moved to a new file in src/common/md5_common.c, aimed at being\n> shared between all the implementations.\n> - 0003 adds the MD5 implementation based on OpenSSL's EVP, ending the\n> work.\n\nThe CF bot has been complaining on Windows and this issue is fixed in\nthe attached. A refresh of src/tools/msvc for pgcrypto was just\nmissing.\n--\nMichael",
"msg_date": "Tue, 10 Nov 2020 13:28:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactor MD5 implementations and switch to EVP for OpenSSL"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 01:28:09PM +0900, Michael Paquier wrote:\n> The CF bot has been complaining on Windows and this issue is fixed in\n> the attached. A refresh of src/tools/msvc for pgcrypto was just\n> missing.\n\nNow that HEAD has the necessary infrastructure to be able to plug in\neasily new cryptohash routines, here is a rebased patch for MD5. The\nbasics are unchanged. Here is a summary:\n- The duplication with MD5 implementations (pgcrypto, src/common/) is\nremoved, and gets only used when not building with OpenSSL.\n- MD5 uses EVP when building with OpenSSL.\n- Similarly to SHA2, the fallback implementation of MD5 is kept\ninternal to src/common/, with an internal header called md5_int.h.\nThe routines for init, update and final calls are similar to the SHA2\nequivalents, making the changes of cryptohash.c straight-forward.\n\nThe amount of code shaved is still nice:\n13 files changed, 641 insertions(+), 775 deletions(-) \n--\nMichael",
"msg_date": "Fri, 4 Dec 2020 16:05:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactor MD5 implementations and switch to EVP for OpenSSL"
},
{
"msg_contents": "> On 4 Dec 2020, at 08:05, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Nov 10, 2020 at 01:28:09PM +0900, Michael Paquier wrote:\n>> The CF bot has been complaining on Windows and this issue is fixed in\n>> the attached. A refresh of src/tools/msvc for pgcrypto was just\n>> missing.\n> \n> Now that HEAD has the necessary infrastructure to be able to plug in\n> easily new cryptohash routines, here is a rebased patch for MD5.\n\nThis is a pretty straightforward patch which given the cryptohash framework\nadded previously does what it says on the tin. +1 on rolling MD5 into what we\nalready have for SHA2 now. Applies clean and all tests pass.\n\nOne (two-part) comment on the patch though:\n\nThe re-arrangement of the code does lead to some attribution confusion however\nIMO. pgcrypto/md5.c and src/common/md5.c are combined with the actual MD5\nimplementation from pgcrypto/md5.c and the PG glue code from src/common/md5.c.\nThat is in itself a good choice, but the file headers are intact and now claims\ntwo implementations of which only one remains.\n\nFurther, bytesToHex was imported in commit 957613be18e6b7 with attribution to\n\"Sverre H. Huseby <sverrehu@online.no>\" without a PGDG defined copyright,\nwhich was later added in 2ca65f716aee9ec441dda91d91b88dd7a00bffa1. This patch\nmoves bytesToHex from md5.c (where his attribution is) to md5_common.c with no\nmaintained attribution. Off the cuff it seems we should either attribute in a\ncomment, or leave the function and export it, but the usual \"I'm not a lawyer\"\ndisclaimer applies. Do you have any thoughts?\n\n> The amount of code shaved is still nice:\n> 13 files changed, 641 insertions(+), 775 deletions(-) \n\nAlways nice with a net minus patch with sustained functionality.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 7 Dec 2020 14:15:58 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Refactor MD5 implementations and switch to EVP for OpenSSL"
},
{
"msg_contents": "On Mon, Dec 07, 2020 at 02:15:58PM +0100, Daniel Gustafsson wrote:\n> This is a pretty straightforward patch which given the cryptohash framework\n> added previously does what it says on the tin. +1 on rolling MD5 into what we\n> already have for SHA2 now. Applies clean and all tests pass.\n\nThanks.\n\n> One (two-part) comment on the patch though:\n>\n> The re-arrangement of the code does lead to some attribution confusion however\n> IMO. pgcrypto/md5.c and src/common/md5.c are combined with the actual MD5\n> implementation from pgcrypto/md5.c and the PG glue code from src/common/md5.c.\n> That is in itself a good choice, but the file headers are intact and now claims\n> two implementations of which only one remains.\n\nI was pretty sure that at some point I got the attributions messed up.\nThanks for looking at this stuff and ringing the bell.\n\n> Further, bytesToHex was imported in commit 957613be18e6b7 with attribution to\n> \"Sverre H. Huseby <sverrehu@online.no>\" without a PGDG defined copyright,\n> which was later added in 2ca65f716aee9ec441dda91d91b88dd7a00bffa1. This patch\n> moves bytesToHex from md5.c (where his attribution is) to md5_common.c with no\n> maintained attribution. Off the cuff it seems we should either attribute in a\n> comment, or leave the function and export it, but the usual \"I'm not a lawyer\"\n> disclaimer applies. Do you have any thoughts?\n\nHmm. Looking at this area of this history, like d330f155, I think\nthat what we just need to do is to switch the attribution of Sverre\nto md5_common.c for all the sub-functions dedicated to the generation\nof the MD5 passwords by fixing the header comment of the file as this\nis the remaining code coming from the file where this code was. 
In\nthe new md5.c and md5_int.h, the fallback implementation that we get\nto use is the KAME one from pgcrypto so we should just mention KAME\nthere.\n\nI have spent some time double-checking all this stuff, adjusting some\ncomments, and making the style of the new files more consistent with\nthe surroundings while minimizing the amount of noise diffs (pgindent\nhas adjusted some things by itself for the new files). In short, this\nseems rather committable to me.\n--\nMichael",
"msg_date": "Wed, 9 Dec 2020 14:47:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactor MD5 implementations and switch to EVP for OpenSSL"
},
{
"msg_contents": "> On 9 Dec 2020, at 06:47, Michael Paquier <michael@paquier.xyz> wrote:\n\n> In short, this seems rather committable to me.\n\nAgreed, this version of the patch looks good to me. I've looked over the\nattributions for the code movement and it seems to match now, and all tests\npass etc. +1 on going ahead with this version.\n\nThe tiniest level of nitpicking would be that md5.h doesn't have the \n/* PG_MD5_H */ comment on the #endif line (and it's even tinier since it's\nnot even the fault of this patch - I just happened to notice when looking\nat that file).\n\ncheers ./daniel\n\n\n",
"msg_date": "Wed, 9 Dec 2020 13:52:32 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Refactor MD5 implementations and switch to EVP for OpenSSL"
},
{
"msg_contents": "On Wed, Dec 09, 2020 at 01:52:32PM +0100, Daniel Gustafsson wrote:\n> The tiniest level of nitpicking would be that md5.h doesn't have the \n> /* PG_MD5_H */ comment on the #endif line (and it's even tinier since it's\n> not even the fault of this patch - I just happened to notice when looking\n> at that file).\n\nGood catch. I have fixed this one, looked again at the code this\nmorning, did more tests on Linux/Windows with/without OpenSSL, and\nfinally committed the patch.\n--\nMichael",
"msg_date": "Thu, 10 Dec 2020 12:02:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactor MD5 implementations and switch to EVP for OpenSSL"
}
] |
[
{
"msg_contents": "Since the impact of the long transaction is huge in postgresql, for example\na long transaction by incident, tables may become very huge and it can't\nbecome small again even the transaction is completed unless a vacuum full\nis used.\n\nI have 2 ideas about this. One is in the Read Committed level, we can\nadvance xmin\naggressively. suppose it started at t1, and complete a query at t2. the\nxmin should\nbe t1 currently. Can we advance the xmin to t2 since it is read committed\nlevel,\nThe data older than t2 will never be used? Another one is can we force to\nclean\nup the old tuples which are older than xxx? If users want to access that,\nwe can just raise errors. Oracle uses this strategy and the error code is\nORA-01555.\n\n\n-- \nBest Regards\nAndy Fan\n\nSince the impact of the long transaction is huge in postgresql, for examplea long transaction by incident, tables may become very huge and it can'tbecome small again even the transaction is completed unless a vacuum fullis used. I have 2 ideas about this. One is in the Read Committed level, we can advance xminaggressively. suppose it started at t1, and complete a query at t2. the xmin should be t1 currently. Can we advance the xmin to t2 since it is read committed level, The data older than t2 will never be used? Another one is can we force to clean up the old tuples which are older than xxx? If users want to access that, we can just raise errors. Oracle uses this strategy and the error code is ORA-01555. -- Best RegardsAndy Fan",
"msg_date": "Fri, 6 Nov 2020 16:48:28 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Advance xmin aggressively on Read Commit isolation level"
},
{
"msg_contents": "On Fri, Nov 6, 2020 at 9:48 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I have 2 ideas about this. One is in the Read Committed level, we can advance xmin\n> aggressively. suppose it started at t1, and complete a query at t2. the xmin should\n> be t1 currently. Can we advance the xmin to t2 since it is read committed level,\n> The data older than t2 will never be used? Another one is can we force to clean\n> up the old tuples which are older than xxx? If users want to access that,\n> we can just raise errors. Oracle uses this strategy and the error code is\n> ORA-01555.\n\nHi Andy,\n\nFor the second idea, we have old_snapshot_threshold which does exactly\nthat since 9.6. There have been some questions about whether it works\ncorrectly, though: see https://commitfest.postgresql.org/30/2682/ if\nyou would like to help look into that :-)\n\n\n",
"msg_date": "Fri, 6 Nov 2020 21:53:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Advance xmin aggressively on Read Commit isolation level"
},
{
"msg_contents": "On Fri, Nov 6, 2020 at 4:54 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Fri, Nov 6, 2020 at 9:48 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > I have 2 ideas about this. One is in the Read Committed level, we can\n> advance xmin\n> > aggressively. suppose it started at t1, and complete a query at t2. the\n> xmin should\n> > be t1 currently. Can we advance the xmin to t2 since it is read\n> committed level,\n> > The data older than t2 will never be used? Another one is can we force\n> to clean\n> > up the old tuples which are older than xxx? If users want to access\n> that,\n> > we can just raise errors. Oracle uses this strategy and the error code\n> is\n> > ORA-01555.\n>\n> Hi Andy,\n>\n> For the second idea, we have old_snapshot_threshold which does exactly\n> that since 9.6. There have been some questions about whether it works\n> correctly, though: see https://commitfest.postgresql.org/30/2682/ if\n> you would like to help look into that :-)\n>\n\nHi Tomas:\n This is exactly what I want and I have big interest with that. Thanks for\nthe information!\n\n-- \nBest Regards\nAndy Fan\n\nOn Fri, Nov 6, 2020 at 4:54 PM Thomas Munro <thomas.munro@gmail.com> wrote:On Fri, Nov 6, 2020 at 9:48 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I have 2 ideas about this. One is in the Read Committed level, we can advance xmin\n> aggressively. suppose it started at t1, and complete a query at t2. the xmin should\n> be t1 currently. Can we advance the xmin to t2 since it is read committed level,\n> The data older than t2 will never be used? Another one is can we force to clean\n> up the old tuples which are older than xxx? If users want to access that,\n> we can just raise errors. Oracle uses this strategy and the error code is\n> ORA-01555.\n\nHi Andy,\n\nFor the second idea, we have old_snapshot_threshold which does exactly\nthat since 9.6. 
There have been some questions about whether it works\ncorrectly, though: see https://commitfest.postgresql.org/30/2682/ if\nyou would like to help look into that :-)\nHi Tomas: This is exactly what I want and I have big interest with that. Thanks forthe information!-- Best RegardsAndy Fan",
"msg_date": "Fri, 6 Nov 2020 22:20:16 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Advance xmin aggressively on Read Commit isolation level"
}
] |
[
{
"msg_contents": "Hello, hackers!\n\nWhile trying to test a patch that adds a synchronization barrier in \npgbench [1] on Windows, I found that since the commit \"Use ppoll(2), if \navailable, to wait for input in pgbench.\" [2] I cannot use a large \nnumber of client connections in pgbench on my Windows virtual machines \n(Windows Server 2008 R2 and Windows 2019), for example:\n\n> bin\\pgbench.exe -c 90 -S -T 3 postgres\nstarting vacuum...end.\ntoo many client connections for select()\n\nThe almost same thing happens with reindexdb and vacuumdb (build on \ncommit [3]):\n\n> bin\\reindexdb.exe -j 95 postgres\nreindexdb: fatal: too many jobs for this platform -- try 90\n\n> bin\\vacuumdb.exe -j 95 postgres\nvacuumdb: vacuuming database \"postgres\"\nvacuumdb: fatal: too many jobs for this platform -- try 90\n\nIIUC the checks below are not correct on Windows, since on this system \nsockets can have values equal to or greater than FD_SETSIZE (see Windows \ndocumentation [4] and pgbench debug output in attached \npgbench_debug.txt).\n\nsrc/bin/pgbench/pgbench.c, the function add_socket_to_set:\nif (fd < 0 || fd >= FD_SETSIZE)\n{\n\t/*\n\t * Doing a hard exit here is a bit grotty, but it doesn't seem worth\n\t * complicating the API to make it less grotty.\n\t */\n\tpg_log_fatal(\"too many client connections for select()\");\n\texit(1);\n}\n\nsrc/bin/scripts/scripts_parallel.c, the function ParallelSlotsSetup:\n/*\n * Fail and exit immediately if trying to use a socket in an\n * unsupported range. POSIX requires open(2) to use the lowest\n * unused file descriptor and the hint given relies on that.\n */\nif (PQsocket(conn) >= FD_SETSIZE)\n{\n\tpg_log_fatal(\"too many jobs for this platform -- try %d\", i);\n\texit(1);\n}\n\nI tried to fix this, see attached fix_max_client_conn_on_Windows.patch \n(based on commit [3]). I checked it for reindexdb and vacuumdb, and it \nworks for simple databases (1025 jobs are not allowed and 1024 jobs is \nok). 
Unfortunately, pgbench was getting connection errors when it tried \nto use 1000 jobs on my virtual machines, although there were no errors \nfor fewer jobs (500) and the same number of clients (1000)...\n\nAny suggestions are welcome!\n\n[1] \nhttps://www.postgresql.org/message-id/flat/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de\n[2] \nhttps://github.com/postgres/postgres/commit/60e612b602999e670f2d57a01e52799eaa903ca9\n[3] \nhttps://github.com/postgres/postgres/commit/48e1291342dd7771cf8c67aa1d7ec1f394b95dd8\n[4] From \nhttps://docs.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-select \n:\nInternally, socket handles in an fd_set structure are not represented as \nbit flags as in Berkeley Unix. Their data representation is opaque.\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 06 Nov 2020 23:34:53 +0300",
"msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "pgbench stopped supporting large number of client connections on\n Windows"
},
{
"msg_contents": "\nHello Marina,\n\n> While trying to test a patch that adds a synchronization barrier in pgbench \n> [1] on Windows,\n\nThanks for trying that, I do not have a windows setup for testing, and the \nsync code I wrote for Windows is basically blind coding:-(\n\n> I found that since the commit \"Use ppoll(2), if available, to \n> wait for input in pgbench.\" [2] I cannot use a large number of client \n> connections in pgbench on my Windows virtual machines (Windows Server 2008 R2 \n> and Windows 2019), for example:\n>\n>> bin\\pgbench.exe -c 90 -S -T 3 postgres\n> starting vacuum...end.\n\nISTM that 1 thread with 90 clients is a bad idea, see below.\n\n> The almost same thing happens with reindexdb and vacuumdb (build on \n> commit [3]):\n\nWindows fd implementation is somehow buggy because it does not return the \nsmallest number available, and then with the assumption that select uses a \ndense array indexed with them (true on linux, less so on Windows which \nprobably uses a sparse array), so that the number gets over the limit, \neven if less are actually used, hence the catch, as you noted.\n\nAnother point is windows has a hardcoded number of objects one thread can \nreally wait for, typically 64, so that waiting for more requires actually \nforking threads to do the waiting. But if you are ready to fork threads \njust to wait, then probaly you could have started pgbench with more \nthreads in the first place. 
Now it would probably not make the problem go \naway because fd numbers would be per process, not per thread, but it \nreally suggests that one should not load a thread is more than 64 clients.\n\n> IIUC the checks below are not correct on Windows, since on this system \n> sockets can have values equal to or greater than FD_SETSIZE (see Windows \n> documentation [4] and pgbench debug output in attached pgbench_debug.txt).\n\nOkay.\n\nBut then, how may one detect that there are too many fds in the set?\n\nI think that an earlier version of the code needed to make assumptions \nabout the internal implementation of windows (there is a counter somewhere \nin windows fd_set struct), which was rejected because if was breaking the \ninterface. Now your patch is basically resurrecting that. Why not if there \nis no other solution, but this is quite depressing, and because it breaks \nthe interface it would be broken if windows changed its internals for some \nreason:-(\n\nDoesn't windows has \"ppoll\"? Should we implement the stuff above windows \npolling capabilities and coldly skip its failed posix portability \nattempts? This raises again the issue that you should not have more that \n64 clients per thread anyway, because it is an intrinsic limit on windows.\n\nI think that at one point it was suggested to error or warn if \nnclients/nthreads is too great, but that was not kept in the end.\n\n> I tried to fix this, see attached fix_max_client_conn_on_Windows.patch (based \n> on commit [3]). I checked it for reindexdb and vacuumdb, and it works for \n> simple databases (1025 jobs are not allowed and 1024 jobs is ok). 
\n> Unfortunately, pgbench was getting connection errors when it tried to use \n> 1000 jobs on my virtual machines, although there were no errors for fewer \n> jobs (500) and the same number of clients (1000)...\n\nIt seems that the max number of threads you can start depends on available \nmemory, because each thread is given its own stack, so it would depend on \nyour vm settings?\n\n> Any suggestions are welcome!\n\nUse ppoll, and start more threads but not too many?\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 6 Nov 2020 23:01:55 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench stopped supporting large number of client connections\n on Windows"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Any suggestions are welcome!\n\n> Use ppoll, and start more threads but not too many?\n\nDoes ppoll exist on Windows?\n\nThere was a prior thread on this topic, which seems to have drifted off\ninto the sunset:\n\nhttps://www.postgresql.org/message-id/flat/BL0PR1901MB1985F219C46C61EDE7036C34ED8E0%40BL0PR1901MB1985.namprd19.prod.outlook.com\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Nov 2020 01:06:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench stopped supporting large number of client connections on\n Windows"
},
{
"msg_contents": "Hello Tom,\n\n>> Use ppoll, and start more threads but not too many?\n>\n> Does ppoll exist on Windows?\n\nSome g*gling suggest that the answer is no.\n\n> There was a prior thread on this topic, which seems to have drifted off \n> into the sunset:\n\nIndeed. I may have contributed to this dwindling by not adding a CF entry \nfor this thread, so that there was no reminder anywhere.\n\n> https://www.postgresql.org/message-id/flat/BL0PR1901MB1985F219C46C61EDE7036C34ED8E0%40BL0PR1901MB1985.namprd19.prod.outlook.com\n\nIt seems that there is no simple good solution around windows wait event \nimplementations:\n - timeout seems to be milliseconds on all things I saw\n - 64 is an intrinsic limit, probably because of the underlying nᅵ implementations\n\nMaybe we could provide a specific windows implementation limited to 64 fd \n(which is not a bad thing bench-wise) but with a rounded-down timeout, so \nthat it could end-up on an activate spinning wait in some cases, which is \nprobably not a bug issue, all things considered.\n\n-- \nFabien.",
"msg_date": "Sun, 8 Nov 2020 09:44:56 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench stopped supporting large number of client connections\n on Windows"
},
{
"msg_contents": "On 2020-11-07 01:01, Fabien COELHO wrote:\n> Hello Marina,\n\nHello, Fabien!\n\nThank you for your comments!\n\n>> While trying to test a patch that adds a synchronization barrier in \n>> pgbench [1] on Windows,\n> \n> Thanks for trying that, I do not have a windows setup for testing, and\n> the sync code I wrote for Windows is basically blind coding:-(\n\nFYI:\n\n1) It looks like pgbench will no longer support Windows XP due to the \nfunction DeleteSynchronizationBarrier. From \nhttps://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-deletesynchronizationbarrier \n:\n\nMinimum supported client: Windows 8 [desktop apps only]\nMinimum supported server: Windows Server 2012 [desktop apps only]\n\nOn Windows Server 2008 R2 (MSVC 2013) the 6-th version of the patch [1] \nhas compiled without (new) warnings, but when running pgbench I got the \nfollowing error:\n\nThe procedure entry point DeleteSynchronizationBarrier could not be \nlocated in the dynamic link library KERNEL32.dll.\n\n2) On Windows Server 2019 (MSVC 2019) the 6-th version of the patch [1] \nwith fix_max_client_conn_on_Windows.patch has compiled without (new) \nwarnings. I made a few runs (-M prepared -c 100 -j 10 -T 10 -P1 -S) with \nand without your patches. On Linux (-M prepared -c 1000 -j 500 -T 10 -P1 \n-S) your patches fix problems with progress reports as in [2], but on \nWindows I did not notice such changes, see attached \npgbench_runs_linux_vs_windows.zip.\n\n>> The almost same thing happens with reindexdb and vacuumdb (build on \n>> commit [3]):\n> \n> Windows fd implementation is somehow buggy because it does not return\n> the smallest number available, and then with the assumption that\n> select uses a dense array indexed with them (true on linux, less so on\n> Windows which probably uses a sparse array), so that the number gets\n> over the limit, even if less are actually used, hence the catch, as\n> you noted.\n\nI agree with you. 
It looks like the structure fd_set just contains used \nsockets by this application on Windows, and the macro FD_SETSIZE is used \nonly here.\n\n From \nhttps://docs.microsoft.com/en-us/windows/win32/api/winsock/ns-winsock-fd_set \n:\n\ntypedef struct fd_set {\n u_int fd_count;\n SOCKET fd_array[FD_SETSIZE];\n} fd_set, FD_SET, *PFD_SET, *LPFD_SET;\n\n From \nhttps://docs.microsoft.com/en-us/windows/win32/winsock/maximum-number-of-sockets-supported-2 \n:\n\nThe maximum number of sockets that a Windows Sockets application can use \nis not affected by the manifest constant FD_SETSIZE. This value defined \nin the Winsock2.h header file is used in constructing the FD_SET \nstructures used with select function.\n\n>> IIUC the checks below are not correct on Windows, since on this system \n>> sockets can have values equal to or greater than FD_SETSIZE (see \n>> Windows documentation [4] and pgbench debug output in attached \n>> pgbench_debug.txt).\n> \n> Okay.\n> \n> But then, how may one detect that there are too many fds in the set?\n> \n> I think that an earlier version of the code needed to make assumptions\n> about the internal implementation of windows (there is a counter\n> somewhere in windows fd_set struct), which was rejected because if was\n> breaking the interface. 
Now your patch is basically resurrecting that.\n\nI tried to keep the behaviour \"we check if the socket value can be used \nin select() at runtime\", but now I will also read that thread...\n\n> Why not if there is no other solution, but this is quite depressing,\n> and because it breaks the interface it would be broken if windows\n> changed its internals for some reason:-(\n\nIt looks like if the internals of the structure fd_set are changed, we \nwill also have problems with the function pgwin32_select from \nsrc/backend/port/win32/socket.c, because it uses fd_set.fd_count too?..\n\n(I'm writing responses to the rest of your comments but it takes \ntime...)\n\n[1] \nhttps://www.postgresql.org/message-id/alpine.DEB.2.22.394.2011021726390.989361%40pseudo\n[2] \nhttps://www.postgresql.org/message-id/20200227185129.hikscyenomnlrord%40alap3.anarazel.de\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 08 Nov 2020 17:59:19 +0300",
"msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pgbench stopped supporting large number of client connections on\n Windows"
},
{
"msg_contents": " Hi Marina,\n\nOn Sun, Nov 8, 2020 at 11:59 PM Marina Polyakova\n<m.polyakova@postgrespro.ru> wrote:\n>\n> On 2020-11-07 01:01, Fabien COELHO wrote:\n> > Hello Marina,\n>\n> Hello, Fabien!\n>\n> Thank you for your comments!\n>\n> >> While trying to test a patch that adds a synchronization barrier in\n> >> pgbench [1] on Windows,\n> >\n> > Thanks for trying that, I do not have a windows setup for testing, and\n> > the sync code I wrote for Windows is basically blind coding:-(\n>\n> FYI:\n>\n> 1) It looks like pgbench will no longer support Windows XP due to the\n> function DeleteSynchronizationBarrier. From\n> https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-deletesynchronizationbarrier\n> :\n>\n> Minimum supported client: Windows 8 [desktop apps only]\n> Minimum supported server: Windows Server 2012 [desktop apps only]\n>\n> On Windows Server 2008 R2 (MSVC 2013) the 6-th version of the patch [1]\n> has compiled without (new) warnings, but when running pgbench I got the\n> following error:\n>\n> The procedure entry point DeleteSynchronizationBarrier could not be\n> located in the dynamic link library KERNEL32.dll.\n>\n> 2) On Windows Server 2019 (MSVC 2019) the 6-th version of the patch [1]\n> with fix_max_client_conn_on_Windows.patch has compiled without (new)\n> warnings. I made a few runs (-M prepared -c 100 -j 10 -T 10 -P1 -S) with\n> and without your patches. 
On Linux (-M prepared -c 1000 -j 500 -T 10 -P1\n> -S) your patches fix problems with progress reports as in [2], but on\n> Windows I did not notice such changes, see attached\n> pgbench_runs_linux_vs_windows.zip.\n>\n> >> The almost same thing happens with reindexdb and vacuumdb (build on\n> >> commit [3]):\n> >\n> > Windows fd implementation is somehow buggy because it does not return\n> > the smallest number available, and then with the assumption that\n> > select uses a dense array indexed with them (true on linux, less so on\n> > Windows which probably uses a sparse array), so that the number gets\n> > over the limit, even if less are actually used, hence the catch, as\n> > you noted.\n>\n> I agree with you. It looks like the structure fd_set just contains used\n> sockets by this application on Windows, and the macro FD_SETSIZE is used\n> only here.\n>\n> From\n> https://docs.microsoft.com/en-us/windows/win32/api/winsock/ns-winsock-fd_set\n> :\n>\n> typedef struct fd_set {\n> u_int fd_count;\n> SOCKET fd_array[FD_SETSIZE];\n> } fd_set, FD_SET, *PFD_SET, *LPFD_SET;\n>\n> From\n> https://docs.microsoft.com/en-us/windows/win32/winsock/maximum-number-of-sockets-supported-2\n> :\n>\n> The maximum number of sockets that a Windows Sockets application can use\n> is not affected by the manifest constant FD_SETSIZE. 
This value defined\n> in the Winsock2.h header file is used in constructing the FD_SET\n> structures used with select function.\n>\n> >> IIUC the checks below are not correct on Windows, since on this system\n> >> sockets can have values equal to or greater than FD_SETSIZE (see\n> >> Windows documentation [4] and pgbench debug output in attached\n> >> pgbench_debug.txt).\n> >\n> > Okay.\n> >\n> > But then, how may one detect that there are too many fds in the set?\n> >\n> > I think that an earlier version of the code needed to make assumptions\n> > about the internal implementation of windows (there is a counter\n> > somewhere in windows fd_set struct), which was rejected because if was\n> > breaking the interface. Now your patch is basically resurrecting that.\n>\n> I tried to keep the behaviour \"we check if the socket value can be used\n> in select() at runtime\", but now I will also read that thread...\n>\n> > Why not if there is no other solution, but this is quite depressing,\n> > and because it breaks the interface it would be broken if windows\n> > changed its internals for some reason:-(\n>\n> It looks like if the internals of the structure fd_set are changed, we\n> will also have problems with the function pgwin32_select from\n> src/backend/port/win32/socket.c, because it uses fd_set.fd_count too?..\n>\n> (I'm writing responses to the rest of your comments but it takes\n> time...)\n>\n\nThis patch on Commitfest has been \"Waiting on Author\" for almost 2\nmonths. Could you share the current status? Are you updating the\npatch?\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 7 Jan 2021 22:48:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench stopped supporting large number of client connections on\n Windows"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jan 7, 2021 at 10:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi Marina,\n>\n> On Sun, Nov 8, 2020 at 11:59 PM Marina Polyakova\n> <m.polyakova@postgrespro.ru> wrote:\n> >\n> > On 2020-11-07 01:01, Fabien COELHO wrote:\n> > > Hello Marina,\n> >\n> > Hello, Fabien!\n> >\n> > Thank you for your comments!\n> >\n> > >> While trying to test a patch that adds a synchronization barrier in\n> > >> pgbench [1] on Windows,\n> > >\n> > > Thanks for trying that, I do not have a windows setup for testing, and\n> > > the sync code I wrote for Windows is basically blind coding:-(\n> >\n> > FYI:\n> >\n> > 1) It looks like pgbench will no longer support Windows XP due to the\n> > function DeleteSynchronizationBarrier. From\n> > https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-deletesynchronizationbarrier\n> > :\n> >\n> > Minimum supported client: Windows 8 [desktop apps only]\n> > Minimum supported server: Windows Server 2012 [desktop apps only]\n> >\n> > On Windows Server 2008 R2 (MSVC 2013) the 6-th version of the patch [1]\n> > has compiled without (new) warnings, but when running pgbench I got the\n> > following error:\n> >\n> > The procedure entry point DeleteSynchronizationBarrier could not be\n> > located in the dynamic link library KERNEL32.dll.\n> >\n> > 2) On Windows Server 2019 (MSVC 2019) the 6-th version of the patch [1]\n> > with fix_max_client_conn_on_Windows.patch has compiled without (new)\n> > warnings. I made a few runs (-M prepared -c 100 -j 10 -T 10 -P1 -S) with\n> > and without your patches. 
On Linux (-M prepared -c 1000 -j 500 -T 10 -P1\n> > -S) your patches fix problems with progress reports as in [2], but on\n> > Windows I did not notice such changes, see attached\n> > pgbench_runs_linux_vs_windows.zip.\n> >\n> > >> The almost same thing happens with reindexdb and vacuumdb (build on\n> > >> commit [3]):\n> > >\n> > > Windows fd implementation is somehow buggy because it does not return\n> > > the smallest number available, and then with the assumption that\n> > > select uses a dense array indexed with them (true on linux, less so on\n> > > Windows which probably uses a sparse array), so that the number gets\n> > > over the limit, even if less are actually used, hence the catch, as\n> > > you noted.\n> >\n> > I agree with you. It looks like the structure fd_set just contains used\n> > sockets by this application on Windows, and the macro FD_SETSIZE is used\n> > only here.\n> >\n> > From\n> > https://docs.microsoft.com/en-us/windows/win32/api/winsock/ns-winsock-fd_set\n> > :\n> >\n> > typedef struct fd_set {\n> > u_int fd_count;\n> > SOCKET fd_array[FD_SETSIZE];\n> > } fd_set, FD_SET, *PFD_SET, *LPFD_SET;\n> >\n> > From\n> > https://docs.microsoft.com/en-us/windows/win32/winsock/maximum-number-of-sockets-supported-2\n> > :\n> >\n> > The maximum number of sockets that a Windows Sockets application can use\n> > is not affected by the manifest constant FD_SETSIZE. 
This value defined\n> > in the Winsock2.h header file is used in constructing the FD_SET\n> > structures used with select function.\n> >\n> > >> IIUC the checks below are not correct on Windows, since on this system\n> > >> sockets can have values equal to or greater than FD_SETSIZE (see\n> > >> Windows documentation [4] and pgbench debug output in attached\n> > >> pgbench_debug.txt).\n> > >\n> > > Okay.\n> > >\n> > > But then, how may one detect that there are too many fds in the set?\n> > >\n> > > I think that an earlier version of the code needed to make assumptions\n> > > about the internal implementation of windows (there is a counter\n> > > somewhere in windows fd_set struct), which was rejected because if was\n> > > breaking the interface. Now your patch is basically resurrecting that.\n> >\n> > I tried to keep the behaviour \"we check if the socket value can be used\n> > in select() at runtime\", but now I will also read that thread...\n> >\n> > > Why not if there is no other solution, but this is quite depressing,\n> > > and because it breaks the interface it would be broken if windows\n> > > changed its internals for some reason:-(\n> >\n> > It looks like if the internals of the structure fd_set are changed, we\n> > will also have problems with the function pgwin32_select from\n> > src/backend/port/win32/socket.c, because it uses fd_set.fd_count too?..\n> >\n> > (I'm writing responses to the rest of your comments but it takes\n> > time...)\n> >\n>\n> This patch on Commitfest has been \"Waiting on Author\" for almost 2\n> months. Could you share the current status? Are you updating the\n> patch?\n\nStatus update for a commitfest entry.\n\nSince this is a bug fix, I've moved this patch to the next commitfest.\nI think if this patch is still inactive until the feature freeze we\ncan remove this entry by setting it to \"Returned with Feedback\".\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 1 Feb 2021 22:18:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench stopped supporting large number of client connections on\n Windows"
},
{
"msg_contents": "On 2/1/21 8:18 AM, Masahiko Sawada wrote:\n>>\n>> This patch on Commitfest has been \"Waiting on Author\" for almost 2\n>> months. Could you share the current status? Are you updating the\n>> patch?\n> \n> Status update for a commitfest entry.\n> \n> Since this is a bug fix, I've moved this patch to the next commitfest.\n> I think if this patch is still inactive until the feature freeze we\n> can remove this entry by setting it to \"Returned with Feedback\".\n\nIf no further work is planned on this patch then we should close it. I \nplan to do that on April 1 if there are no objections.\n\nIt's listed as a bug but seems like a limitation that could just be \ndocumented.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 29 Mar 2021 10:32:29 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: pgbench stopped supporting large number of client connections on\n Windows"
}
]
[
{
"msg_contents": "Hi Marina,\nNice catch.\n\n>rc/bin/pgbench/pgbench.c, the function add_socket_to_set:\n>if (fd < 0 || fd >= FD_SETSIZE)\n>{\n>/*\n>* Doing a hard exit here is a bit grotty, but it doesn't seem worth\n>* complicating the API to make it less grotty.\n>*/\n>pg_log_fatal(\"too many client connections for select()\");\n>exit(1);\n>}\n\nIt seems to me that the limit is hardcode in,\nsrc/backend/port/win32/socket.c\n\nFD_SETSIZE * 2\n\nthat would be 2048?\n\nregards,\n\nRanier Vilela\n\nHi Marina,Nice catch.\n>rc/bin/pgbench/pgbench.c, the function add_socket_to_set:>if (fd < 0 || fd >= FD_SETSIZE)>{\t>/*\t >* Doing a hard exit here is a bit grotty, but it doesn't seem worth\t >* complicating the API to make it less grotty.\t >*/\t>pg_log_fatal(\"too many client connections for select()\");>exit(1);>}\nIt seems to me that the limit is hardcode in, src/backend/port/win32/socket.cFD_SETSIZE * 2that would be 2048?regards,Ranier Vilela",
"msg_date": "Fri, 6 Nov 2020 17:54:13 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "re: pgbench stopped supporting large number of client connections on\n Windows"
},
{
"msg_contents": "On 2020-11-06 23:54, Ranier Vilela wrote:\n> Hi Marina,\n\nHello!\n\n> Nice catch.\n\nThank you!\n\n>> rc/bin/pgbench/pgbench.c, the function add_socket_to_set:\n>> if (fd < 0 || fd >= FD_SETSIZE)\n>> {\n>> /*\n>> * Doing a hard exit here is a bit grotty, but it doesn't seem worth\n>> * complicating the API to make it less grotty.\n>> */\n>> pg_log_fatal(\"too many client connections for select()\");\n>> exit(1);\n>> }\n> \n> It seems to me that the limit is hardcode in,\n> src/backend/port/win32/socket.c\n> \n> FD_SETSIZE * 2\n> \n> that would be 2048?\n\n1) If you mean the function pgwin32_select in the file \nsrc/backend/port/win32/socket.c, IIUC it is only used in the backend, \nsee src/include/port/win32_port.h:\n\n#ifndef FRONTEND\n<...>\n#define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)\n<...>\n#endif\t\t\t\t\t\t\t/* FRONTEND */\n\n2) It looks like FD_SETSIZE does not set a limit on the socket value on \nWindows, see \nhttps://docs.microsoft.com/en-us/windows/win32/winsock/maximum-number-of-sockets-supported-2 \n:\n\nThe maximum number of sockets that a Windows Sockets application can use \nis not affected by the manifest constant FD_SETSIZE. This value defined \nin the Winsock2.h header file is used in constructing the FD_SET \nstructures used with select function.\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 07 Nov 2020 20:55:19 +0300",
"msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pgbench stopped supporting large number of client connections on\n Windows"
},
{
"msg_contents": "Em sáb., 7 de nov. de 2020 às 14:55, Marina Polyakova <\nm.polyakova@postgrespro.ru> escreveu:\n\n> On 2020-11-06 23:54, Ranier Vilela wrote:\n> > Hi Marina,\n>\n> Hello!\n>\n> 1) If you mean the function pgwin32_select in the file\n> src/backend/port/win32/socket.c, IIUC it is only used in the backend,\n> see src/include/port/win32_port.h:\n>\n> #ifndef FRONTEND\n> <...>\n> #define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)\n> <...>\n> #endif /* FRONTEND */\n>\nYes. My mistake, you right here.\n\n\n> 2) It looks like FD_SETSIZE does not set a limit on the socket value on\n> Windows, see\n>\n> https://docs.microsoft.com/en-us/windows/win32/winsock/maximum-number-of-sockets-supported-2\n> :\n>\n> The maximum number of sockets that a Windows Sockets application can use\n> is not affected by the manifest constant FD_SETSIZE. This value defined\n> in the Winsock2.h header file is used in constructing the FD_SET\n> structures used with select function.\n>\nCorrect.\nIt seems that the limit will be defined by compilation, before the\ninclusion of Winsock2.h.\nHave you tried to define -DFD_SETSIZE=2048\n\nbest regards,\nRanier Vilela\n\nEm sáb., 7 de nov. de 2020 às 14:55, Marina Polyakova <m.polyakova@postgrespro.ru> escreveu:On 2020-11-06 23:54, Ranier Vilela wrote:\n> Hi Marina,\n\nHello!\n\n1) If you mean the function pgwin32_select in the file \nsrc/backend/port/win32/socket.c, IIUC it is only used in the backend, \nsee src/include/port/win32_port.h:\n\n#ifndef FRONTEND\n<...>\n#define select(n, r, w, e, timeout) pgwin32_select(n, r, w, e, timeout)\n<...>\n#endif /* FRONTEND */Yes. My mistake, you right here. \n\n2) It looks like FD_SETSIZE does not set a limit on the socket value on \nWindows, see \nhttps://docs.microsoft.com/en-us/windows/win32/winsock/maximum-number-of-sockets-supported-2 \n:\n\nThe maximum number of sockets that a Windows Sockets application can use \nis not affected by the manifest constant FD_SETSIZE. 
This value defined \nin the Winsock2.h header file is used in constructing the FD_SET \nstructures used with select function.Correct.It seems that the limit will be defined by compilation, before the inclusion of Winsock2.h.Have you tried to define -DFD_SETSIZE=2048best regards,Ranier Vilela",
"msg_date": "Sat, 7 Nov 2020 15:17:13 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench stopped supporting large number of client connections on\n Windows"
}
]
[
{
"msg_contents": "See\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c66a3225e07b5098a796f24588a6b81bfdedd2fd\n\nPlease send any corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Nov 2020 17:07:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "First-draft release notes for back branches are up"
},
{
"msg_contents": "On Fri, 06 Nov 2020 17:07:12 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> See\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c66a3225e07b5098a796f24588a6b81bfdedd2fd\n> \n> Please send any corrections by Sunday.\n> \n\nWe should probably include instructions what to do about the BRIN\ndata corruption fixed by 7577dd8480 - a query to list might be\naffected by the bug and should be rebuilt.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 7 Nov 2020 00:52:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> We should probably include instructions what to do about the BRIN\n> data corruption fixed by 7577dd8480 - a query to list might be\n> affected by the bug and should be rebuilt.\n\nDo you have some suggested text?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Nov 2020 21:52:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "Hi,\n\nOn 11/7/20 3:52 AM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> We should probably include instructions what to do about the BRIN\n>> data corruption fixed by 7577dd8480 - a query to list might be\n>> affected by the bug and should be rebuilt.\n> \n> Do you have some suggested text?\n> \n\nI think this might work:\n\nFix handling of toasted values in BRIN indexes\n\nThis mistake can result in BRIN indexes referencing toasted values,\nwhich may unexpectedly disapper after a cleanup of the table, leading\nto errors due to missing chunks:\n\n ERROR: missing chunk number 0 for toast value 16433 in pg_toast_16426\n\nFortunately, this issue only affects BRIN indexes on columns with\ntoastable data types. While no index corruption due to this bug is\nknown to have occurred in the field, it is recommended that production\ninstallations REINDEX all brin indexes at a convenient time after\nupgrading to X.X.\n\nThe list of porentially corrupted BRIN indexes may be obtained using\nthis query:\n\n select pg_get_indexdef(oid)\n from pg_class\n where (relkind = 'i')\n and relam = 3580\n and exists (select 1 from pg_attribute\n where attrelid = pg_class.oid\n and attlen = -1\n and attstorage in ('x', 'e'))\n\nIt might be better to propose CREATE INDEX CONCURRENTLY, but I don't\nthink there is a function to generate that SQL.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 7 Nov 2020 23:05:49 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 11/7/20 3:52 AM, Tom Lane wrote:\n>> Do you have some suggested text?\n\n> I think this might work:\n\nI dunno ... given that we have zero field reports, I doubt this is\nsomething we need to tell every BRIN user to do. The text I put in\nearlier today just recommends reindexing if you see the error.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Nov 2020 17:21:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First-draft release notes for back branches are up"
},
{
"msg_contents": "On 11/7/20 11:21 PM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 11/7/20 3:52 AM, Tom Lane wrote:\n>>> Do you have some suggested text?\n> \n>> I think this might work:\n> \n> I dunno ... given that we have zero field reports, I doubt this is\n> something we need to tell every BRIN user to do. The text I put in\n> earlier today just recommends reindexing if you see the error.\n> \n\nSorry, I haven't noticed you already wrote something :-( I agree it's\nenough to recommend reindexing only when there's an error.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 8 Nov 2020 00:10:32 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for back branches are up"
}
]
[
{
"msg_contents": "Hi, hacker\n\nI met an error as below when I use ecpg\n\na.pgc:5: ERROR: invalid bit string literal\na.pgc:5: ERROR: internal error: unreachable state; please report this to <pgsql-bugs@lists.postgresql.org>\n\nthe test source is attached.\n\nAfter investigating the code, I think the process of pgc.l is:\nStep 1: <SQL>{xbstart}, addlitchar('b') is called, literalbuf contains a char 'b';\nStep 2: <xb>{xbinside}, the rest of char is added in literalbuf\nStep 3: <xqs>{other}, the condition literalbuf[strspn(literalbuf, \"01\") + 1] != '\\0' will always be true;\nerror is occurred here\n\nI try to fix this bug by deleting 'addlitchar('b');' from source I also add a test case to test all const str in ecpg.\n\nThe patch is also attached.\n\nBest regards, Shenhao Wang",
"msg_date": "Fri, 6 Nov 2020 22:56:11 +0000",
"msg_from": "\"Wang, Shenhao\" <wangsh.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "errors when there is a bit literal in ecpg"
},
{
"msg_contents": "\"Wang, Shenhao\" <wangsh.fnst@cn.fujitsu.com> writes:\n> I met an error as below when I use ecpg\n\n> a.pgc:5: ERROR: invalid bit string literal\n> a.pgc:5: ERROR: internal error: unreachable state; please report this to <pgsql-bugs@lists.postgresql.org>\n\nIndeed. This has apparently been broken for a very long time (though\nthe \"unreachable state\" part is fairly new).\n\n> I try to fix this bug by deleting 'addlitchar('b');' from source I also add a test case to test all const str in ecpg.\n\nI thought that a whole new test case was overkill when we could just add a\ncouple of lines to an existing test. Other than that, looks good, pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Nov 2020 15:05:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: errors when there is a bit literal in ecpg"
}
]
[
{
"msg_contents": "I propose to apply the following changes to messages in pg13.\n\nIn 0001, I propose changing messages that were introduced as different \nfor parallel vacuum workers. Frankly I don't understand why we are\nbragging about the vacuum being done in a parallel worker; does the user\ncare? It seems to me that users are just satisfied to know that the\nindexes were scanned; the fact that this was done in a parallel worker\nis not of much interest, so why call attention to that? Therefore, we\ncan reduce the message to what's emitted in the normal case.\n\nIn 0002, I propose to remove the word \"concurrently\" in an error\nmessage when an invalid index cannot be reindexed. In fact, the problem\nis generic: we just cannot reindex the index at all, regardless of\nconcurrently or not. So we can reduce this message to be identical to\nthe one we throw in the non-concurrent case.\n\n(Dropped 0003 while composing this email.)\n\nPatch 0004 just adds a comment to clarify a message that I found\nconfusing when doing the translation.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W",
"msg_date": "Sat, 7 Nov 2020 00:49:43 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "PG13: message style changes"
},
{
"msg_contents": "On Sat, Nov 07, 2020 at 12:49:43AM -0300, Alvaro Herrera wrote:\n> In 0001, I propose changing messages that were introduced as different \n> for parallel vacuum workers. Frankly I don't understand why we are\n> bragging about the vacuum being done in a parallel worker; does the user\n> care? It seems to me that users are just satisfied to know that the\n> indexes were scanned; the fact that this was done in a parallel worker\n> is not of much interest, so why call attention to that? Therefore, we\n> can reduce the message to what's emitted in the normal case.\n\nIndeed. Worth noting also that one can get the same level of\ninformation with %P in log_line_prefix.\n\n> In 0002, I propose to remove the word \"concurrently\" in an error\n> message when an invalid index cannot be reindexed. In fact, the problem\n> is generic: we just cannot reindex the index at all, regardless of\n> concurrently or not. So we can reduce this message to be identical to\n> the one we throw in the non-concurrent case.\n\nNo issues from me here.\n\n> Patch 0004 just adds a comment to clarify a message that I found\n> confusing when doing the translation.\n\n+1.\n--\nMichael",
"msg_date": "Sat, 7 Nov 2020 14:58:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PG13: message style changes"
},
{
"msg_contents": "Thanks for looking! Pushed.\n\n\n\n",
"msg_date": "Sat, 7 Nov 2020 19:37:11 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: PG13: message style changes"
}
]
[
{
"msg_contents": "Hi\n\nI had to solve a slow due very slow JIT preparing. I found very unpleased\nsituation where some subplan was strangely slow\n\n -> Hash Anti Join (cost=11.35..19.41 rows=143 width=3011) (actual\ntime=5039.022..5039.105 rows=203 loops=1)\n Hash Cond: (term_p.id_i = term_d_1.id_i)\n -> Seq Scan on public.term_p (cost=0.00..6.03 rows=203 width=24)\n(actual time=5038.980..5039.014 rows=203 loops=1)\n Filter: (term_p.to_h IS NULL)\n -> Hash (cost=10.60..10.60 rows=60 width=16) (actual\ntime=0.008..0.009 rows=0 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n -> Seq Scan on public.term_d term_d_1 (cost=0.00..10.60\nrows=60 width=16) (actual time=0.008..0.008 rows=0 loops=1)\n\nIt looks very strange - why does a scan of 203 rows need 5 sec?\n\nThere is a overhead of JIT but on different place of EXPLAIN\n\nJIT:\n Functions: 416\n Options: Inlining true, Optimization true, Expressions true, Deforming\ntrue\n Timing: Generation 61.002 ms, Inlining 67.897 ms, Optimization 2853.125\nms, Emission 2116.233 ms, Total 5098.258 ms\n\nWith some deduction we can detect so slow seq scan is probably JIT ??\n\nBut some tools like https://explain.depesz.com displays very strange\nstatistics then - so JIT overhead should be displayed in related plans too.\n\nRegards\n\nPavel",
"msg_date": "Sat, 7 Nov 2020 13:54:31 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "hard readable explain due JIT"
}
]
[
{
"msg_contents": "The problems discussed in bug #16703 [1] show that pg_dump needs a\nversion of LOCK TABLE that behaves differently for views than\nwhat we have now. Since v11, LOCK TABLE on a view recurses to all\ntables and views named in the view, and it does so using the view\nowner's permissions, meaning that a view that would have permissions\nfailures if executed will also have permissions failures when locked.\nThat's probably fine for ordinary usage, but it's disastrous for\npg_dump --- even a superuser can't lock such a view.\n\nMoreover, pg_dump doesn't really need the recursive behavior. It just\nneeds the view's definition to hold still; and in any case, a typical\npg_dump run would have independently acquired locks on all the other\nrelations anyway. The recursion is buying us nothing, except perhaps\nan increased risk of deadlocks against concurrent DDL operations.\n(I'm not quite sure if that's significant, given that pg_dump pays\nno attention to the order in which it locks things. But it sure as\nheck isn't *decreasing* the risk; and it's a behavior that we could\nnot hope to improve with more smarts about pg_dump's lock ordering.)\n\nSo we need a way to turn that off. What I proposed in that thread\nwas\n\n> (1) Make pg_dump use LOCK TABLE ONLY, not LOCK TABLE.\n> (2) Make LOCK TABLE ONLY on a view not recurse to the view's dependencies.\n\nwhich would each be quite trivial to do. An objection to this approach\nis that ONLY typically controls recursion to a table's inheritance or\npartitioning children, which is a different animal from recursion to\na view's dependencies. That argument would lead to wanting some other\nsyntax to control this. I do not find that argument compelling enough\nto justify making pg_dump deal with two different commands depending\non the relation's relkind, but it is a plausible concern.\n\nClosely related to this is whether pg_dump ought to be using ONLY for\nlocking regular tables too. 
I tend to think that it should be, again\non the grounds that any child tables we may be interested in will get\nlocked separately, so that we're not doing anything by recursing except\nexpending extra cycles and perhaps increasing the chance of a deadlock.\n\nA completely different approach we could consider is to weaken the\npermissions requirements for LOCK on a view, say \"allow it if either\nthe calling user or the view owner has the needed permission\". This\nseems generally pretty messy and so I don't much like it, but we\nshould consider as many solutions as we can think of.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16703-e348f58aab3cf6cc%40postgresql.org\n\n\n",
"msg_date": "Sat, 07 Nov 2020 11:57:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Rethinking LOCK TABLE's behavior on views"
},
{
"msg_contents": "On Sat, Nov 07, 2020 at 11:57:20AM -0500, Tom Lane wrote:\n> The problems discussed in bug #16703 [1] show that pg_dump needs a\n> version of LOCK TABLE that behaves differently for views than\n> what we have now. Since v11, LOCK TABLE on a view recurses to all\n> tables and views named in the view, and it does so using the view\n> owner's permissions, meaning that a view that would have permissions\n> failures if executed will also have permissions failures when locked.\n> That's probably fine for ordinary usage, but it's disastrous for\n> pg_dump --- even a superuser can't lock such a view.\n> \n> Moreover, pg_dump doesn't really need the recursive behavior. It just\n> needs the view's definition to hold still; and in any case, a typical\n> pg_dump run would have independently acquired locks on all the other\n> relations anyway. The recursion is buying us nothing, except perhaps\n> an increased risk of deadlocks against concurrent DDL operations.\n\nThe getTables() locking aims to take the locks that will be taken later. That\navoids failing after expensive work. For views, the later lock-taker is\npg_get_viewdef(), which locks more than just the view but less than[2] LOCK\nTABLE. Recursion buys us more than nothing for \"pg_dump --table=viewname\", so\nabandoning recursion unconditionally is a step in the wrong direction. I\ndon't expect --table to be as excellent as complete dumps, but a change that\nmakes it worse does lose points. I want to keep the recursion.\n\n> (I'm not quite sure if that's significant, given that pg_dump pays\n> no attention to the order in which it locks things. 
But it sure as\n> heck isn't *decreasing* the risk; and it's a behavior that we could\n> not hope to improve with more smarts about pg_dump's lock ordering.)\n\nReordering to avoid deadlocks would be best-effort, so it's fine not to have\nfull control over the order.\n\n> Closely related to this is whether pg_dump ought to be using ONLY for\n> locking regular tables too. I tend to think that it should be, again\n> on the grounds that any child tables we may be interested in will get\n> locked separately, so that we're not doing anything by recursing except\n> expending extra cycles and perhaps increasing the chance of a deadlock.\n\nAgreed. \"pg_dump --table=inheritance_parent\" never queries inheritance\nchildren, so it's nice not to lock them.\n\n> A completely different approach we could consider is to weaken the\n> permissions requirements for LOCK on a view, say \"allow it if either\n> the calling user or the view owner has the needed permission\". This\n> seems generally pretty messy and so I don't much like it, but we\n> should consider as many solutions as we can think of.\n\nThis is the best of what you've listed by a strong margin, and I don't know of\nbetter options you've not listed. +1 for it. Does it work for you? I think\nthe mess arises from LOCK TABLE serving \"get locks sufficient for $ACTIONS\" as\na family of use cases. For views only, different $ACTIONS want different\nbehavior. $ACTIONS==SELECT wants today's behavior; pg_get_viewdef() wants\nshallower recursion and caller permissions; DROP VIEW wants no recursion.\n\n\n> [1] https://www.postgresql.org/message-id/flat/16703-e348f58aab3cf6cc%40postgresql.org\n\n[2] For example, pg_get_viewdef('pg_user') locks pg_shadow, but \"LOCK TABLE\npg_user\" additionally locks pg_authid and pg_db_role_setting.\n\n\n",
"msg_date": "Sat, 7 Nov 2020 14:46:06 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking LOCK TABLE's behavior on views"
},
{
"msg_contents": "On 2020-Nov-07, Noah Misch wrote:\n\n> On Sat, Nov 07, 2020 at 11:57:20AM -0500, Tom Lane wrote:\n\n> > A completely different approach we could consider is to weaken the\n> > permissions requirements for LOCK on a view, say \"allow it if either\n> > the calling user or the view owner has the needed permission\". This\n> > seems generally pretty messy and so I don't much like it, but we\n> > should consider as many solutions as we can think of.\n> \n> This is the best of what you've listed by a strong margin, and I don't know of\n> better options you've not listed. +1 for it. Does it work for you?\n\nIt does sound attractive from a user complexity perspective, even if it\ndoes sound messy from an implementation perspective.\n\n> I think\n> the mess arises from LOCK TABLE serving \"get locks sufficient for $ACTIONS\" as\n> a family of use cases. For views only, different $ACTIONS want different\n> behavior. $ACTIONS==SELECT wants today's behavior; pg_get_viewdef() wants\n> shallower recursion and caller permissions; DROP VIEW wants no recursion.\n\nMaybe we can tackle this problem directly, by adding a clause to LOCK\nTABLE to indicate a purpose for the lock that the server can use to\ndetermine the level of recursion. For example\n LOCK TABLE xyz IN <mode> FOR <purpose>\nwhere <purpose> can be READ, DROP, DEFINE.\n\n(For back-patch purposes we could store the purpose in LockStmt->mode,\nwhich has more than enough unused bits).\n\n\n",
"msg_date": "Mon, 9 Nov 2020 11:42:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking LOCK TABLE's behavior on views"
},
{
"msg_contents": "On Mon, Nov 09, 2020 at 11:42:33AM -0300, Alvaro Herrera wrote:\n> On 2020-Nov-07, Noah Misch wrote:\n> > On Sat, Nov 07, 2020 at 11:57:20AM -0500, Tom Lane wrote:\n> > > A completely different approach we could consider is to weaken the\n> > > permissions requirements for LOCK on a view, say \"allow it if either\n> > > the calling user or the view owner has the needed permission\". This\n> > > seems generally pretty messy and so I don't much like it, but we\n> > > should consider as many solutions as we can think of.\n> > \n> > This is the best of what you've listed by a strong margin, and I don't know of\n> > better options you've not listed. +1 for it. Does it work for you?\n> \n> It does sound attractive from a user complexity perspective, even if it\n> does sound messy form an implementation perspective.\n> \n> > I think\n> > the mess arises from LOCK TABLE serving \"get locks sufficient for $ACTIONS\" as\n> > a family of use cases. For views only, different $ACTIONS want different\n> > behavior. $ACTIONS==SELECT wants today's behavior; pg_get_viewdef() wants\n> > shallower recursion and caller permissions; DROP VIEW wants no recursion.\n> \n> Maybe we can tackle this problem directly, by adding a clause to LOCK\n> TABLE to indicate a purpose for the lock that the server can use to\n> determine the level of recursion. For example\n> LOCK TABLE xyz IN <mode> FOR <purpose>\n> where <purpose> can be READ, DROP, DEFINE.\n\nPossible. Regrettably, we're not set up for it; running pg_get_viewdef() to\ncompletion is today's way to determine what it will lock. Each of these modes\nprobably would have condensed copies of the operation they mimic, which I'd\nfind sadder than locking somewhat more than pg_dump needs (via today's \"LOCK\nTABLE viewname\" behavior). Is it plausible to do without that duplication?\n\n\n",
"msg_date": "Tue, 10 Nov 2020 18:21:32 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking LOCK TABLE's behavior on views"
}
] |
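The permissions failure this thread is about can be reproduced with a short script. The following is a hypothetical sketch (the role and table names are invented here, not taken from bug #16703) assuming a v11-or-later server, where LOCK TABLE on a view recurses using the view owner's permissions:

```sql
-- Hedged sketch: since v11, locking a view recurses to the relations it
-- references using the *view owner's* permissions, so even a superuser
-- cannot lock a view whose owner lacks rights on an underlying table.
CREATE ROLE view_owner;
CREATE TABLE base_tbl (x int);
CREATE VIEW v AS SELECT x FROM base_tbl;
ALTER VIEW v OWNER TO view_owner;
REVOKE ALL ON base_tbl FROM view_owner;

BEGIN;
LOCK TABLE v IN ACCESS SHARE MODE;  -- expected to fail: permission denied for base_tbl
ROLLBACK;
```

ACCESS SHARE is the mode pg_dump takes in getTables(), which is why a dump of a database containing such a view fails outright under the recursive behavior.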
[
{
"msg_contents": "Hi,\n\nWhile rebasing some of the BRIN patches, I noticed some of the code in\nbrin_memtuple_initialize is duplicated. This happened in 8bf74967dab\nwhich moved some of the code from brin_new_memtuple, not removing the\nshared pieces. In practice this is benign, of course.\n\nBarring objections I'll get the attached fix committed and backpatched.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 8 Nov 2020 02:23:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Yet another (minor) fix in BRIN"
},
{
"msg_contents": "On 2020-Nov-08, Tomas Vondra wrote:\n\n> While rebasing some of the BRIN patches, I noticed some of the code in\n> brin_memtuple_initialize is duplicated. This happened in 8bf74967dab\n> which moved some of the code from brin_new_memtuple, not removing the\n> shared pieces. In practice this is benign, of course.\n> \n> Barring objections I'll get the attached fix committed and backpatched.\n\nLGTM, thanks for noticing.\n\n\n\n",
"msg_date": "Sat, 7 Nov 2020 22:51:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Yet another (minor) fix in BRIN"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Nov-08, Tomas Vondra wrote:\n>> While rebasing some of the BRIN patches, I noticed some of the code in\n>> brin_memtuple_initialize is duplicated. This happened in 8bf74967dab\n>> which moved some of the code from brin_new_memtuple, not removing the\n>> shared pieces. In practice this is benign, of course.\n>> \n>> Barring objections I'll get the attached fix committed and backpatched.\n\n> LGTM, thanks for noticing.\n\nThe weekend before stable-branch releases is probably not the best\ntime to be pushing \"minor\" fixes into those branches. I got my\nfingers burned today, and so did Peter. Don't follow our example ;-)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Nov 2020 22:45:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Yet another (minor) fix in BRIN"
},
{
"msg_contents": "On Sat, Nov 07, 2020 at 10:45:26PM -0500, Tom Lane wrote:\n> The weekend before stable-branch releases is probably not the best\n> time to be pushing \"minor\" fixes into those branches. I got my\n> fingers burned today, and so did Peter. Don't follow our example ;-)\n\nYou could just apply your stuff after the version is tagged (not\nstamped as there could be urgent bug fixes between the stamp time and\nthe tag time, like packaging issues).\n--\nMichael",
"msg_date": "Sun, 8 Nov 2020 20:34:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Yet another (minor) fix in BRIN"
},
{
"msg_contents": "On 11/8/20 12:34 PM, Michael Paquier wrote:\n> On Sat, Nov 07, 2020 at 10:45:26PM -0500, Tom Lane wrote:\n>> The weekend before stable-branch releases is probably not the best\n>> time to be pushing \"minor\" fixes into those branches. I got my\n>> fingers burned today, and so did Peter. Don't follow our example ;-)\n> \n> You could just apply your stuff after the version is tagged (not\n> stamped as there could be urgent bug fixes between the stamp time and\n> the tag time, like packaing issues).\n\nYeah, that's what I was planning to do - I was not suggesting I'll push\nthis right away. Or at least I did not mean to. Sorry if that was not\nquite clear.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 8 Nov 2020 16:47:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Yet another (minor) fix in BRIN"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nHere is a scenario that produces segfault during update (on version 12 \nand 13):\n\ncreate table bdttrig (created_at timestamp without time zone not null \ndefault now(),col1 bool not null default false, col2 text not null \ndefault 'def', col3 char(6) not null default 'bdtbdt');\nCREATE TABLE\n\nCREATE or replace FUNCTION trigger_function()\nRETURNS TRIGGER\nLANGUAGE PLPGSQL\nAS $$\nBEGIN\nIF (NEW != OLD) THEN\nNEW.created_at = OLD.created_at;\nRETURN NEW;\nEND IF;\nRETURN OLD;\nEND;\n$$\n;\n\nCREATE FUNCTION\n\ncreate trigger bdt_trigger before update on bdttrig for each row EXECUTE \nfunction trigger_function();\n\nCREATE TRIGGER\n\ninsert into bdttrig(col1) values (true);\n\nINSERT 0 1\n\nalter table bdttrig add column col4 text not null default 'default';\n\nALTER TABLE\n\nalter table bdttrig drop column col2;\n\nALTER TABLE\n\nupdate bdttrig set col1 = true;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\nI did some digging and found out and tested that the issue is fixed by \nthis commit: \nhttps://github.com/postgres/postgres/commit/20d3fe9009ddbbbb3da3a2da298f922054b43f8c\n\nSo, we would need to back port this commit on 12 and 13.\n\nThanks\n\nBertrand",
"msg_date": "Sun, 8 Nov 2020 17:30:15 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "[BUG]: segfault during update"
},
{
"msg_contents": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n> Here is a scenario that produces segfault during update (on version 12 \n> and 13):\n\nHm. So the point about failing to reproduce dropped columns is more\ncritical than I thought. I wonder how come we get away with that before\nv12?\n\n> So, we would need to back port this commit on 12 and 13.\n\nYeah, this is sufficient reason why we must use the more invasive\npatch on those branches. What I'm wondering now is if there's a\nway to break even-older branches based on failure to handle dropped\ncolumns here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Nov 2020 12:18:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]: segfault during update"
},
{
"msg_contents": "I wrote:\n> Yeah, this is sufficient reason why we must use the more invasive\n> patch on those branches. What I'm wondering now is if there's a\n> way to break even-older branches based on failure to handle dropped\n> columns here.\n\nAfter tracing through the example in v11, I see why those branches\nare not broken: when ExecBRUpdateTriggers decides to return the\ntrigger-returned tuple, it sticks it into a target slot like this:\n\n /*\n * Return the modified tuple using the es_trig_tuple_slot. We assume\n * the tuple was allocated in per-tuple memory context, and therefore\n * will go away by itself. The tuple table slot should not try to\n * clear it.\n */\n TupleTableSlot *newslot = estate->es_trig_tuple_slot;\n TupleDesc tupdesc = RelationGetDescr(relinfo->ri_RelationDesc);\n\n if (newslot->tts_tupleDescriptor != tupdesc)\n ExecSetSlotDescriptor(newslot, tupdesc);\n ExecStoreTuple(newtuple, newslot, InvalidBuffer, false);\n\nSo the slot that ExecConstraints et al will be working with contains\nthe relation's actual tuple descriptor, not the approximate descr\nobtained by looking at the plan tlist.\n\nThis logic is entirely gone in v12, which confirms my instinct that\nthere was something about Andres' slot-manipulation changes that\nbroke this scenario. In v12 we end up using the junkfilter's output\nslot, which does not have a sufficiently accurate tupdesc to deal with\nan on-disk tuple rather than one constructed by the executor.\n\nSo I'll go see about back-patching 20d3fe900.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Nov 2020 12:46:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]: segfault during update"
},
{
"msg_contents": "Hi,\n\nOn 11/8/20 6:46 PM, Tom Lane wrote:\n> I wrote:\n>> Yeah, this is sufficient reason why we must use the more invasive\n>> patch on those branches. What I'm wondering now is if there's a\n>> way to break even-older branches based on failure to handle dropped\n>> columns here.\n> After tracing through the example in v11, I see why those branches\n> are not broken: when ExecBRUpdateTriggers decides to return the\n> trigger-returned tuple, it sticks it into a target slot like this:\n>\n> /*\n> * Return the modified tuple using the es_trig_tuple_slot. We assume\n> * the tuple was allocated in per-tuple memory context, and therefore\n> * will go away by itself. The tuple table slot should not try to\n> * clear it.\n> */\n> TupleTableSlot *newslot = estate->es_trig_tuple_slot;\n> TupleDesc tupdesc = RelationGetDescr(relinfo->ri_RelationDesc);\n>\n> if (newslot->tts_tupleDescriptor != tupdesc)\n> ExecSetSlotDescriptor(newslot, tupdesc);\n> ExecStoreTuple(newtuple, newslot, InvalidBuffer, false);\n>\n> So the slot that ExecConstraints et al will be working with contains\n> the relation's actual tuple descriptor, not the approximate descr\n> obtained by looking at the plan tlist.\n>\n> This logic is entirely gone in v12, which confirms my instinct that\n> there was something about Andres' slot-manipulation changes that\n> broke this scenario. In v12 we end up using the junkfilter's output\n> slot, which does not have a sufficiently accurate tupdesc to deal with\n> an on-disk tuple rather than one constructed by the executor.\n>\n> So I'll go see about back-patching 20d3fe900.\n\nThanks for the back-patching!\n\nBertrand\n\n\n\n",
"msg_date": "Mon, 9 Nov 2020 06:41:20 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG]: segfault during update"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-08 12:46:44 -0500, Tom Lane wrote:\n> This logic is entirely gone in v12, which confirms my instinct that\n> there was something about Andres' slot-manipulation changes that\n> broke this scenario.\n\nEntirely possible :(. In my defense, it wasn't exactly obvious or\ndocumented that the fast default code relied on this\nExecSetSlotDescriptor()...\n\n\n> In v12 we end up using the junkfilter's output\n> slot, which does not have a sufficiently accurate tupdesc to deal with\n> an on-disk tuple rather than one constructed by the executor.\n\nI really wonder if we ought to redesign the column default logic to\nreally be a property of the Attr, instead of the constraint stuff it is\nnow.\n\n> So I'll go see about back-patching 20d3fe900.\n\nThanks.\n\nIs it worth adding Bertrand's testcase to the regression suite in some\nform?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Nov 2020 21:10:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]: segfault during update"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-11-08 12:46:44 -0500, Tom Lane wrote:\n>> In v12 we end up using the junkfilter's output\n>> slot, which does not have a sufficiently accurate tupdesc to deal with\n>> an on-disk tuple rather than one constructed by the executor.\n\n> I really wonder if we ought to redesign the column default logic to\n> really be a property of the Attr, instead of the constraint stuff it is\n> now.\n\nYeah, I don't much like treating it as a constraint either. Not quite\nsure that it's worth the work to move it somewhere else, though.\n\n> Is it worth adding Bertrand's testcase to the regression suite in some\n> form?\n\nI did add an equivalent test case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Nov 2020 00:24:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]: segfault during update"
}
] |
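The crash above involves the "fast default" machinery (attmissingval) interacting with a dropped column. A way to spot tables that rely on fast defaults, sketched here as a catalog query (v11+ only; columns added with a non-NULL default are flagged until the table is rewritten):

```sql
-- Hedged sketch: list columns still carrying a stored "missing value",
-- i.e. candidates for exercising the fast-default code paths in this bug.
SELECT attrelid::regclass AS table_name, attname
FROM pg_attribute
WHERE atthasmissing;
```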
[
{
"msg_contents": "Hi,\n\nAttached is a draft of the release announcement for the upcoming\n2020-11-12 update release.\n\nCorrections and feedback welcome, so long as it is submitted by\n2020-11-11 AoE[1].\n\nThanks!\n\nJonathan\n\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth",
"msg_date": "Sun, 8 Nov 2020 16:56:00 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "2020-11-12 Release Announcement Draft"
}
] |
[
{
"msg_contents": "We had an outage caused by transaction wraparound. And yes, one of the\nfirst things I did on this site was check that we didn't have any\ndatabases that were in danger of wraparound.\n\nHowever since then we added a monitoring job that used a temporary\ntable with ON COMMIT DELETE ROWS. Since it was a simple monitoring job\nit stayed connected to the database and used this small temporary\ntable for a very long period of time.\n\nThe temporary table never got vacuumed by autovacuum and never by the\nmonitoring job (since it was being truncated on every transaction why\nwould it need to be vacuumed...).\n\nWe've been around this bush before. Tom added orphaned table\nprotection to autovacuum precisely because temporary tables can cause\nthe datfrozenxid to get held back indefinitely. Then Michael Paquier\nand Tsunakawa Takayuki both found it worth making this more\naggressive.\n\nBut none of that helped as the temporary schema was still in use so\nthey were not considered \"orphaned\" temp tables at all.\n\nI think we need to add some warnings to autovacuum when it detects\n*non* orphaned temporary tables that are older than the freeze\nthreshold.\n\nHowever in the case of ON COMMIT DELETE ROWS we can do better. Why not\njust reset the relfrozenxid and other stats as if the table was\nfreshly created when it's truncated?\n\nI put together this quick patch to check the idea and it seems to\nintegrate fine in the code. I'm not sure about a few points but I\ndon't think they're showstoppers.\n\n1) Should we update relpages and reltuples. 
I think so but an argument\ncould be made that people might be depending on running analyze once\nwhen the data is loaded and then not having to run analyze on every\ndata load.\n\n2) adding the dependency on heapam.h to heap.c makes sense because of\nheap_inplace_update but it may be a bit annoying because I suspect\nthat's a useful sanity check that the tableam stuff hasn't been\nbypassed\n\n3) I added a check to the regression tests but I'm not sure it's a\ngood idea to actually commit this. It could fail if there's a parallel\ntransaction going on and even moving the test to the serial schedule\nmight not guarantee that never happens due to autovacuum running\nanalyze?\n\nI didn't actually add the warning to autovacuum yet.\n\n-- \ngreg",
"msg_date": "Sun, 8 Nov 2020 18:19:57 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Temporary tables versus wraparound... again"
},
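The failure mode Greg describes can be sketched in a few lines; the table and probe names below are invented for illustration, and the detection query is a generic pattern rather than the monitoring job from the outage:

```sql
-- Hedged sketch of the pattern that held back datfrozenxid: a long-lived
-- session keeps an ON COMMIT DELETE ROWS temp table that autovacuum can
-- never process and the session itself never vacuums.
CREATE TEMP TABLE mon_scratch (k text, v text) ON COMMIT DELETE ROWS;

BEGIN;
INSERT INTO mon_scratch VALUES ('probe', 'ok');
COMMIT;  -- rows vanish, but relfrozenxid of mon_scratch never advances

-- Detection sketch: oldest relfrozenxid per relation, which (unlike
-- age(datfrozenxid) alone) points at the specific offending table,
-- including other sessions' temp tables.
SELECT c.oid::regclass AS relation, n.nspname, age(c.relfrozenxid) AS xid_age
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;
```

Since the temp table's rows are truncated on every commit anyway, resetting its relfrozenxid at truncation time (as the patch does) loses nothing.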
{
"msg_contents": "On Sun, Nov 08, 2020 at 06:19:57PM -0500, Greg Stark wrote:\n> However in the case of ON COMMIT DELETE ROWS we can do better. Why not\n> just reset the relfrozenxid and other stats as if the table was\n> freshly created when it's truncated?\n\nThat concept is sound.\n\n> 1) Should we update relpages and reltuples. I think so but an argument\n> could be made that people might be depending on running analyze once\n> when the data is loaded and then not having to run analyze on every\n> data load.\n\nI'd wager no, we should not. An app that ever analyzes an ON COMMIT DELETE\nROWS temp table probably analyzes it every time. If not, it's fair to guess\nthat similar statistics recur in each xact.\n\n> 2) adding the dependency on heapam.h to heap.c makes sense because of\n> heap_inplace_update bt it may be a bit annoying because I suspect\n> that's a useful sanity check that the tableam stuff hasn't been\n> bypassed\n\nThat is not terrible. How plausible would it be to call vac_update_relstats()\nfor this, instead of reimplementing part of it?\n\n> 3) I added a check to the regression tests but I'm not sure it's a\n> good idea to actually commit this. It could fail if there's a parallel\n> transaction going on and even moving the test to the serial schedule\n> might not guarantee that never happens due to autovacuum running\n> analyze?\n\nRight.\n\n> @@ -3340,6 +3383,7 @@ heap_truncate_one_rel(Relation rel)\n> \n> \t/* Truncate the underlying relation */\n> \ttable_relation_nontransactional_truncate(rel);\n> +\tResetVacStats(rel);\n\nI didn't test, but I expect this will cause a stats reset for the second\nTRUNCATE here:\n\nCREATE TABLE t ();\n...\nBEGIN;\nTRUNCATE t;\nTRUNCATE t; -- inplace relfrozenxid reset\nROLLBACK; -- inplace reset survives\n\nDoes that indeed happen?\n\n\n",
"msg_date": "Sun, 8 Nov 2020 21:17:11 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Mon, 9 Nov 2020 at 00:17, Noah Misch <noah@leadboat.com> wrote:\n>\n> > 2) adding the dependency on heapam.h to heap.c makes sense because of\n> > heap_inplace_update bt it may be a bit annoying because I suspect\n> > that's a useful sanity check that the tableam stuff hasn't been\n> > bypassed\n>\n> That is not terrible. How plausible would it be to call vac_update_relstats()\n> for this, instead of reimplementing part of it?\n\nIt didn't seem worth it to change its API to add boolean flags to skip\nsetting some of the variables (I was originally only doing\nrelfrozenxid and minmmxid). Now that I'm doing most of the variables\nmaybe it makes a bit more sense.\n\n> > @@ -3340,6 +3383,7 @@ heap_truncate_one_rel(Relation rel)\n> >\n> > /* Truncate the underlying relation */\n> > table_relation_nontransactional_truncate(rel);\n> > + ResetVacStats(rel);\n>\n> I didn't test, but I expect this will cause a stats reset for the second\n> TRUNCATE here:\n>\n> CREATE TABLE t ();\n> ...\n> BEGIN;\n> TRUNCATE t;\n> TRUNCATE t; -- inplace relfrozenxid reset\n> ROLLBACK; -- inplace reset survives\n>\n> Does that indeed happen?\n\nApparently no, see below. I have to say I was pretty puzzled by the\nactual behaviour which is that the rollback actually does roll back\nthe inplace update. But I *think* what is happening is that the first\ntruncate does an MVCC update so the inplace update happens only to the\nnewly created tuple which is never committed.\n\nThinking about things a bit this does worry me a bit. I wonder if\ninplace update is really safe outside of vacuum where we know we're\nnot in a transaction that can be rolled back. But IIRC doing a\nnon-inplace update on pg_class for these columns breaks other things.\nI don't know if that's still true.\n\nAlso, in checking this question I realized I had missed 3d351d91. 
I\nshould be initializing reltuples to -1 not 0.\n\n\npostgres=# vacuum t;\nVACUUM\npostgres=# select\nrelname,relpages,reltuples,relallvisible,relfrozenxid from pg_class\nwhere oid='t'::regclass;\n relname | relpages | reltuples | relallvisible | relfrozenxid\n---------+----------+-----------+---------------+--------------\n t | 9 | 2000 | 9 | 15557\n(1 row)\n\npostgres=# begin;\nBEGIN\npostgres=*# truncate t;\nTRUNCATE TABLE\npostgres=*# truncate t;\nTRUNCATE TABLE\npostgres=*# select\nrelname,relpages,reltuples,relallvisible,relfrozenxid from pg_class\nwhere oid='t'::regclass;\n relname | relpages | reltuples | relallvisible | relfrozenxid\n---------+----------+-----------+---------------+--------------\n t | 0 | 0 | 0 | 15562\n(1 row)\n\npostgres=*# abort;\nROLLBACK\npostgres=# select\nrelname,relpages,reltuples,relallvisible,relfrozenxid from pg_class\nwhere oid='t'::regclass;\n relname | relpages | reltuples | relallvisible | relfrozenxid\n---------+----------+-----------+---------------+--------------\n t | 9 | 2000 | 9 | 15557\n(1 row)\n\n\n\n\n-- \ngreg\n\n\n",
"msg_date": "Mon, 9 Nov 2020 01:23:12 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 3:23 PM Greg Stark <stark@mit.edu> wrote:\n>\n> On Mon, 9 Nov 2020 at 00:17, Noah Misch <noah@leadboat.com> wrote:\n> >\n> > > 2) adding the dependency on heapam.h to heap.c makes sense because of\n> > > heap_inplace_update bt it may be a bit annoying because I suspect\n> > > that's a useful sanity check that the tableam stuff hasn't been\n> > > bypassed\n> >\n> > That is not terrible. How plausible would it be to call vac_update_relstats()\n> > for this, instead of reimplementing part of it?\n>\n> It didn't seem worth it to change its API to add boolean flags to skip\n> setting some of the variables (I was originally only doing\n> relfrozenxid and minmmxid). Now that I'm doing most of the variables\n> maybe it makes a bit more sense.\n>\n> > > @@ -3340,6 +3383,7 @@ heap_truncate_one_rel(Relation rel)\n> > >\n> > > /* Truncate the underlying relation */\n> > > table_relation_nontransactional_truncate(rel);\n> > > + ResetVacStats(rel);\n> >\n> > I didn't test, but I expect this will cause a stats reset for the second\n> > TRUNCATE here:\n> >\n> > CREATE TABLE t ();\n> > ...\n> > BEGIN;\n> > TRUNCATE t;\n> > TRUNCATE t; -- inplace relfrozenxid reset\n> > ROLLBACK; -- inplace reset survives\n> >\n> > Does that indeed happen?\n>\n> Apparently no, see below. I have to say I was pretty puzzled by the\n> actual behaviour which is that the rollback actually does roll back\n> the inplace update. But I *think* what is happening is that the first\n> truncate does an MVCC update so the inplace update happens only to the\n> newly created tuple which is never commited.\n\nI think in-place update that the patch introduces is not used because\nTRUNCATE doesn't use heap_truncate_one_rel() to truncate a table in\nthat scenario. It does MVCC update the pg_class tuple for a new\nrelfilenode with new relfrozenxid and other stats, see\nRelationSetNewRelfilenode(). 
If we create and truncate a table within\nthe transaction it does in-place update that the patch introduces but\nI think it's no problem in this case either.\n\n>\n> Thinking about things a bit this does worry me a bit. I wonder if\n> inplace update is really safe outside of vacuum where we know we're\n> not in a transaction that can be rolled back. But IIRC doing a\n> non-inplace update on pg_class for these columns breaks other things.\n> I don't know if that's still true.\n\nheap_truncate_one_rel() is not a transaction-safe operation. Doing\nin-place updates during that operation seems okay to me unless I'm\nmissing something.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 10 Nov 2020 16:10:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 04:10:57PM +0900, Masahiko Sawada wrote:\n> On Mon, Nov 9, 2020 at 3:23 PM Greg Stark <stark@mit.edu> wrote:\n> > On Mon, 9 Nov 2020 at 00:17, Noah Misch <noah@leadboat.com> wrote:\n\n> > > > @@ -3340,6 +3383,7 @@ heap_truncate_one_rel(Relation rel)\n> > > >\n> > > > /* Truncate the underlying relation */\n> > > > table_relation_nontransactional_truncate(rel);\n> > > > + ResetVacStats(rel);\n> > >\n> > > I didn't test, but I expect this will cause a stats reset for the second\n> > > TRUNCATE here:\n> > >\n> > > CREATE TABLE t ();\n> > > ...\n> > > BEGIN;\n> > > TRUNCATE t;\n> > > TRUNCATE t; -- inplace relfrozenxid reset\n> > > ROLLBACK; -- inplace reset survives\n> > >\n> > > Does that indeed happen?\n> >\n> > Apparently no, see below. I have to say I was pretty puzzled by the\n> > actual behaviour which is that the rollback actually does roll back\n> > the inplace update. But I *think* what is happening is that the first\n> > truncate does an MVCC update so the inplace update happens only to the\n> > newly created tuple which is never commited.\n> \n> I think in-plase update that the patch introduces is not used because\n> TRUNCATE doesn't use heap_truncate_one_rel() to truncate a table in\n> that scenario. It does MVCC update the pg_class tuple for a new\n> relfilenode with new relfrozenxid and other stats, see\n> RelationSetNewRelfilenode(). If we create and truncate a table within\n> the transaction it does in-place update that the patch introduces but\n> I think it's no problem in this case either.\n\nAgreed. Rolling back a heap_truncate_one_rel() always implies rolling back to\nan earlier version of the entire pg_class tuple. (That may not be true of\nmapped relations, but truncating them is unreasonable.) Thanks for checking.\n\n> > Thinking about things a bit this does worry me a bit. 
I wonder if\n> > inplace update is really safe outside of vacuum where we know we're\n> > not in a transaction that can be rolled back. But IIRC doing a\n> > non-inplace update on pg_class for these columns breaks other things.\n> > I don't know if that's still true.\n> \n> heap_truncate_one_rel() is not a transaction-safe operation. Doing\n> in-place updates during that operation seems okay to me unless I'm\n> missing something.\n\nYep.\n\n\n",
"msg_date": "Fri, 25 Dec 2020 11:24:37 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Here's an updated patch. I added some warning messages to autovacuum.\n\nOne thing I learned trying to debug this situation in production is\nthat it's nigh impossible to find the pid of the session using a\ntemporary schema. The number in the schema refers to the backendId in\nthe sinval stuff for which there's no external way to look up the\ncorresponding pid. It would have been very helpful if autovacuum had\njust told me which backend pid to kill.\n\nI still have the regression test in the patch and as before I think\nit's probably not worth committing. I'm leaning to reverting that\nsection of the patch before comitting.\n\nIncidentally there's still a hole here where a new session can attach\nto an existing temporary schema where a table was never truncated due\nto a session dieing abnormally. That new session could be a long-lived\nsession but never use the temporary schema causing the old table to\njust sit there. Autovacuum has no way to tell it's not actually in\nuse. I tend to think the optimization to defer cleaning the temporary\nschema until it's used might not really be an optimization. It still\nneeds to be cleaned someday so it's just moving the work around. Just\nremoving that optimization might be the easiest way to close this\nhole. The only alternative I see is adding a flag to PROC or somewhere\nwhere autovacuum can see if the backend has actually initialized the\ntemporary schema yet or not.",
"msg_date": "Tue, 12 Oct 2021 18:04:35 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On 10/12/21, 3:07 PM, \"Greg Stark\" <stark@mit.edu> wrote:\r\n> Here's an updated patch. I added some warning messages to autovacuum.\r\n\r\nI think this is useful optimization, and I intend to review the patch\r\nmore closely soon. It looks reasonable to me after a quick glance.\r\n\r\n> One thing I learned trying to debug this situation in production is\r\n> that it's nigh impossible to find the pid of the session using a\r\n> temporary schema. The number in the schema refers to the backendId in\r\n> the sinval stuff for which there's no external way to look up the\r\n> corresponding pid. It would have been very helpful if autovacuum had\r\n> just told me which backend pid to kill.\r\n\r\nI certainly think it would be good to have autovacuum log the PID, but\r\nIIUC a query like this would help you map the backend ID to the PID:\r\n\r\n SELECT bid, pg_stat_get_backend_pid(bid) AS pid FROM pg_stat_get_backend_idset() bid;\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 3 Dec 2021 01:10:42 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
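For readers hitting the same debugging problem described above — mapping the N in a `pg_temp_N` schema name to a live backend PID — Nathan's query can be combined with a catalog scan of temp-table freeze ages. A sketch, using only catalog functions and columns that exist in the releases discussed in this thread; run it in psql against the affected database:

```sql
-- The N in pg_temp_N is the sinval backend ID; map each backend ID to its PID.
SELECT bid AS backend_id,
       pg_stat_get_backend_pid(bid) AS pid
FROM pg_stat_get_backend_idset() AS bid;

-- List temporary relations and how far their relfrozenxid has aged,
-- i.e. which temp tables are drifting toward wraparound.
SELECT n.nspname, c.relname, age(c.relfrozenxid) AS xid_age
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relpersistence = 't'   -- temporary relations only
  AND c.relkind = 'r'
ORDER BY xid_age DESC;
```

The schema name from the second query gives the backend ID to look up in the first, and hence the session to terminate or ask to vacuum its own temp tables.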
{
"msg_contents": "Hi,\n\nOn 2021-10-12 18:04:35 -0400, Greg Stark wrote:\n> Here's an updated patch.\n\nUnfortunately it doesn't apply anymore these days: http://cfbot.cputube.org/patch_37_3358.log\n\nMarked as waiting-on-author.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 21 Mar 2022 18:05:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "No problem, I can update the patch and check on the fuzz.\n\nBut the actual conflict is just in the test and I'm not sure it's\nreally worth having a test at all. It's testing a pretty low level\ndetail. So I'm leaning toward fixing the conflict by just ripping the\ntest out.\n\nNathan also pointed out there was a simpler way to get the pid. I\ndon't think the way I was doing it was wrong but I'll double check\nthat.\n\n\n",
"msg_date": "Mon, 21 Mar 2022 21:27:08 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Here's a rebased patch. I split the test into a separate patch that I\nwould lean to dropping. But at least it applies now.\n\nI did look into pg_stat_get_backend_pid() and I guess it would work\nbut going through the stats mechanism does seem like going the long\nway around since we're already looking at the backendId info directly\nhere, we just weren't grabbing the pid.\n\nI did make a small change, I renamed the checkTempNamespaceStatus()\nfunction to checkTempNamespaceStatusAndPid(). It seems unlikely there\nare any external consumers of this function (the only internal\nconsumer is autovacuum.c). But just in case I renamed it to protect\nagainst any external modules failing from the added parameter.",
"msg_date": "Tue, 22 Mar 2022 23:58:31 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "I had to rebase this again after Tom's cleanup of heap.c removing some includes.\n\nI had to re-add snapmgr to access RecentXmin. I occurs to me to ask\nwhether RecentXmin is actually guaranteed to be set. I haven't\nchecked. I thought it was set when the first snapshot was taken and\npresumably even if it's a non-transactional truncate we're still in a\ntransaction?\n\nThe patch also added heapam.h to heap.c which might seem like a layer\nviolation. I think it's ok since it's just to be able to update the\ncatalog (heap_inplace_update is in heapam.h).",
"msg_date": "Mon, 28 Mar 2022 16:11:55 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-28 16:11:55 -0400, Greg Stark wrote:\n> From 4515075b644d1e38920eb5bdaaa898e1698510a8 Mon Sep 17 00:00:00 2001\n> From: Greg Stark <stark@mit.edu>\n> Date: Tue, 22 Mar 2022 15:51:32 -0400\n> Subject: [PATCH v4 1/2] Update relfrozenxmin when truncating temp tables\n> \n> Make ON COMMIT DELETE ROWS reset relfrozenxmin and other table stats\n> like normal truncate. Otherwise even typical short-lived transactions\n> using temporary tables can easily cause them to reach relfrozenxid.\n\nMight be worth mentioning that ON COMMIT DELETE is implemented as truncating\ntables. If we actually implemented it as deleting rows, it'd not at all be\ncorrect to reset relfrozenxmin.\n\n\n> Also add warnings when old temporary tables are found to still be in\n> use during autovacuum. Long lived sessions using temporary tables are\n> required to vacuum them themselves.\n\nI'd do that in a separate patch.\n\n\n> +/*\n> + * Reset the relfrozenxid and other stats to the same values used when\n> + * creating tables. 
This is used after non-transactional truncation.\n> + *\n> + * This reduces the need for long-running programs to vacuum their own\n> + * temporary tables (since they're not covered by autovacuum) at least in the\n> + * case where they're ON COMMIT DELETE ROWS.\n> + *\n> + * see also src/backend/commands/vacuum.c vac_update_relstats()\n> + * also see AddNewRelationTuple() above\n> + */\n> +\n> +static void\n> +ResetVacStats(Relation rel)\n> +{\n> +\tHeapTuple\tctup;\n> +\tForm_pg_class pgcform;\n> +\tRelation classRel;\n> +\n> +\t/* Fetch a copy of the tuple to scribble on */\n> +\tclassRel = table_open(RelationRelationId, RowExclusiveLock);\n> +\tctup = SearchSysCacheCopy1(RELOID, ObjectIdGetDatum(RelationGetRelid(rel)));\n>\n> +\tif (!HeapTupleIsValid(ctup))\n> +\t\telog(ERROR, \"pg_class entry for relid %u vanished during truncation\",\n> +\t\t\t RelationGetRelid(rel));\n> +\tpgcform = (Form_pg_class) GETSTRUCT(ctup);\n> +\n> +\t/*\n> +\t * Update relfrozenxid\n> +\t */\n> +\n> +\tpgcform->relpages = 0;\n> +\tpgcform->reltuples = -1;\n> +\tpgcform->relallvisible = 0;\n> +\tpgcform->relfrozenxid = RecentXmin;\n\nHm. Is RecentXmin guaranteed to be valid at this point?\n\n\n> +\tpgcform->relminmxid = GetOldestMultiXactId();\n\nUgh. That's pretty expensive for something now done at a much higher rate than\nbefore.\n\n\n> @@ -2113,20 +2126,31 @@ do_autovacuum(void)\n> \t\t\t\t * Remember it so we can try to delete it later.\n> \t\t\t\t */\n> \t\t\t\torphan_oids = lappend_oid(orphan_oids, relid);\n> +\t\t\t} else if (temp_status == TEMP_NAMESPACE_NOT_TEMP) {\n> +\t\t\t\telog(LOG, \"autovacuum: found temporary table \\\"%s.%s.%s\\\" in non-temporary namespace\",\n> +\t\t\t\t\t get_database_name(MyDatabaseId),\n> +\t\t\t\t\t get_namespace_name(classForm->relnamespace),\n> +\t\t\t\t\t NameStr(classForm->relname));\n> +\t\t\t} else if (temp_status == TEMP_NAMESPACE_IN_USE && wraparound) {\n\nwe put else if on a separate line from }. 
And { also is always on a separate\nline.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 28 Mar 2022 13:30:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
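The behavior this patch targets can be observed directly from psql. A minimal reproduction sketch (table name is hypothetical; without the patch, the relfrozenxid recorded at creation time never advances no matter how many commit-time truncates the table goes through in a long-lived session):

```sql
CREATE TEMP TABLE tw (i int) ON COMMIT DELETE ROWS;

-- Note the xid recorded when the table was created.
SELECT relfrozenxid FROM pg_class
WHERE relname = 'tw' AND relpersistence = 't';

-- Use the table in a short transaction; COMMIT performs the
-- implicit (non-transactional) truncate for ON COMMIT DELETE ROWS.
BEGIN;
INSERT INTO tw VALUES (1);
COMMIT;

-- Check again: without the fix discussed here, relfrozenxid is unchanged,
-- so a session that lives for hundreds of millions of transactions
-- drags the database's datfrozenxid toward wraparound.
SELECT relfrozenxid FROM pg_class
WHERE relname = 'tw' AND relpersistence = 't';
```

Since autovacuum cannot vacuum another session's temp tables, the only pre-patch remedies are manual VACUUM from the owning session or killing the session.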
{
"msg_contents": "On Mon, 28 Mar 2022 at 16:30, Andres Freund <andres@anarazel.de> wrote:\n>\n> > Make ON COMMIT DELETE ROWS reset relfrozenxmin and other table stats\n> > like normal truncate. Otherwise even typical short-lived transactions\n> > using temporary tables can easily cause them to reach relfrozenxid.\n>\n> Might be worth mentioning that ON COMMIT DELETE is implemented as truncating\n> tables. If we actually implemented it as deleting rows, it'd not at all be\n> correct to reset relfrozenxmin.\n\nIn the commit message or are you saying this needs documentation or a comment?\n\n> > Also add warnings when old temporary tables are found to still be in\n> > use during autovacuum. Long lived sessions using temporary tables are\n> > required to vacuum them themselves.\n>\n> I'd do that in a separate patch.\n\nHm, seems a bit small but sure no problem, I'll split it out.\n\n> > + pgcform->relfrozenxid = RecentXmin;\n>\n> Hm. Is RecentXmin guaranteed to be valid at this point?\n\nI mentioned the same worry. But ok, I just looked into it and it's\ndefinitely not a problem. We only do truncates after either a user\nissued TRUNCATE when the table was created in the same transaction or\nat commit iff a flag is set indicating temporary tables have been\nused. Either way a snapshot has been taken. I've added some comments\nand an assertion and I think if assertions are disabled and this\nimpossible condition is hit we can just skip the stats reset.\n\n> > + pgcform->relminmxid = GetOldestMultiXactId();\n>\n> Ugh. That's pretty expensive for something now done at a much higher rate than\n> before.\n\nThis I'm really not sure about. I really don't know much about\nmultixacts. I've been reading a bit but I'm not sure what to do yet.\nI'm wondering if there's something cheaper we can use. 
We don't need\nthe oldest mxid that might be visible in a table somewhere, just the\noldest that has a real live uncommitted transaction in it that could\nyet create new tuples in the truncated table.\n\nIn the case of temporary tables I think we could just set it to the\nnext mxid since there are no live transactions capable of inserting\ninto the temporary table. But in the case of a table created in this\ntransaction then that wouldn't be good enough. I think? I'm not clear\nwhether existing mxids get reused for new updates if they happen to\nhave the same set of locks in them as some existing mxid.\n\n> we put else if on a separate line from }. And { also is always on a separate\n> line.\n\nSorry, old habits...\n\n\nIncidentally.... in doing the above I noticed an actual bug :( The\ntoast reset had the wrong relid in it. I'll add the toast table to the\ntest too.\n\n\n\n--\ngreg\n\n\n",
"msg_date": "Tue, 29 Mar 2022 19:51:26 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 4:52 PM Greg Stark <stark@mit.edu> wrote:\n\n> On Mon, 28 Mar 2022 at 16:30, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > Make ON COMMIT DELETE ROWS reset relfrozenxmin and other table\n> stats\n> > > like normal truncate. Otherwise even typical short-lived\n> transactions\n> > > using temporary tables can easily cause them to reach relfrozenxid.\n> >\n> > Might be worth mentioning that ON COMMIT DELETE is implemented as\n> truncating\n> > tables. If we actually implemented it as deleting rows, it'd not at all\n> be\n> > correct to reset relfrozenxmin.\n>\n> In the commit message or are you saying this needs documentation or a\n> comment?\n>\n\nJust flying by here but...\n\nThe user-facing documentation already covers this:\n\nhttps://www.postgresql.org/docs/current/sql-createtable.html\n\n\"All rows in the temporary table will be deleted at the end of each\ntransaction block. Essentially, an automatic TRUNCATE is done at each\ncommit. When used on a partitioned table, this is not cascaded to its\npartitions.\"\n\nI'm not sure why we felt the need to add \"essentially\" here - but maybe\nit's because we didn't \"reset relfronzedenxmin and other table stats like\nnormal truncate.\"? Or maybe just natural word flow.\n\nEither way, maybe word it like this to avoid the need for essentially\naltogether:\n\nThe temporary table will be automatically truncated at the end of each\ntransaction block. However, unlike the TRUNCATE command, descendent tables\nwill not be cascaded to. (I'm changing partitions to descendant tables to\nmake a point here - the TRUNCATE command only references descendent tables,\nnot mentioning partitioning by name at all. 
Is this desirable?)\n\nI don't have any substantive insight into the commit message or code\ncomments; but it doesn't seem immediately wrong to assume the reader\nunderstands that ON COMMIT DELETE ROWS uses something more akin to TRUNCATE\nrather than DELETE since that is what the feature is documented to do. The\ncommit message in particular seems like it doesn't need to teach that\npoint; but can do so if it makes understanding the changes easier.\n\nDavid J.",
"msg_date": "Tue, 29 Mar 2022 17:30:21 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-29 19:51:26 -0400, Greg Stark wrote:\n> On Mon, 28 Mar 2022 at 16:30, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > Make ON COMMIT DELETE ROWS reset relfrozenxmin and other table stats\n> > > like normal truncate. Otherwise even typical short-lived transactions\n> > > using temporary tables can easily cause them to reach relfrozenxid.\n> >\n> > Might be worth mentioning that ON COMMIT DELETE is implemented as truncating\n> > tables. If we actually implemented it as deleting rows, it'd not at all be\n> > correct to reset relfrozenxmin.\n> \n> In the commit message or are you saying this needs documentation or a comment?\n\nIn the commit message.\n\n\n> > > + pgcform->relminmxid = GetOldestMultiXactId();\n> >\n> > Ugh. That's pretty expensive for something now done at a much higher rate than\n> > before.\n> \n> This I'm really not sure about. I really don't know much about\n> multixacts. I've been reading a bit but I'm not sure what to do yet.\n> I'm wondering if there's something cheaper we can use. We don't need\n> the oldest mxid that might be visible in a table somewhere, just the\n> oldest that has a real live uncommitted transaction in it that could\n> yet create new tuples in the truncated table.\n\n> In the case of temporary tables I think we could just set it to the\n> next mxid since there are no live transactions capable of inserting\n> into the temporary table. But in the case of a table created in this\n> transaction then that wouldn't be good enough. I think? I'm not clear\n> whether existing mxids get reused for new updates if they happen to\n> have the same set of locks in them as some existing mxid.\n\nYes, that can happen. But of course the current xid is always part of the\nmultixact, so it can't be a multixact from before the transaction started.\n\nThere's already a record of the oldest mxid a backend considers live, computed\non the first use of multixacts in a transaction. 
See\nMultiXactIdSetOldestVisible(). Which I think might serve as a suitable\nrelminmxid of a temporary table in an already running transaction?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 29 Mar 2022 17:32:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "I've updated the patches.\n\nAdding the assertion actually turned up a corner case where RecentXmin\nwas *not* set. If you lock a temporary table and that's the only thing\nyou do in a transaction then the flag is set indicating you've used\nthe temp schema but you never take a snapshot :(\n\nI also split out the warnings and added a test that relfrozenxid was\nadvanced on the toast table as well.\n\nI haven't wrapped my head around multixacts yet. It's complicated by\nthis same codepath being used for truncates of regular tables that\nwere created in the same transaction.",
"msg_date": "Thu, 31 Mar 2022 16:05:13 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Thu, 31 Mar 2022 at 16:05, Greg Stark <stark@mit.edu> wrote:\n>\n> I haven't wrapped my head around multixacts yet. It's complicated by\n> this same codepath being used for truncates of regular tables that\n> were created in the same transaction.\n\nSo my best idea so far is to actually special-case the temp table case\nin this code path. I think that's easy enough since I have the heap\ntuple I'm about to replace.\n\nIn the temp table case I would just use the value Andres proposes.\n\nIn the \"truncating table in same transaction it was created\" case then\nI would go ahead and use the expensive GetOldestMultiXactId() which\nshould be ok for that case. At least I think the \"much higher rate\"\ncomment was motivated by the idea that every transaction commit (when\ntemp tables are being used) is more frequent than any specific user\nddl.\n\nIt's not brilliant since it seems to be embedding knowledge of the\ncases where this optimization applies in a lower level function. If we\nthink of some other case where it could apply it wouldn't be obvious\nthat it will have a cost to it. But it doesn't seem too terrible to\nme.\n\nAn alternative would be to simply not adjust relminmxid for non-temp\ntables at all. I guess that's not too bad either since these are\nnon-temp tables that autovacuum will be able to do anti-wraparound\nvacuums on. And I get the impression mxids don't wraparound nearly as\noften as xids?\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 1 Apr 2022 11:53:59 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
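For anyone weighing how much the relminmxid half of this reset matters on a given installation, the multixact ages in question can be inspected alongside the xid ages; a sketch using the mxid_age() function available in contemporary PostgreSQL releases:

```sql
-- Tables ordered by how far their relminmxid lags behind the next multixact,
-- next to their plain transaction-ID age for comparison.
SELECT relname,
       mxid_age(relminmxid) AS mxid_age,
       age(relfrozenxid)    AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY mxid_age DESC
LIMIT 10;
```

On most workloads multixacts are consumed far more slowly than xids, which is consistent with the suggestion above that skipping or cheapening the relminmxid update for temp tables is a tolerable trade-off.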
{
"msg_contents": "So here's an updated patch.\n\nI had to add a public method to multixact.c to expose the locally\ncalculated OldestMultiXactId. It's possible we could use something\neven tighter (like the current next mxid since we're about to commit)\nbut I didn't see a point in going further and it would have become\nmore complex.\n\nI also added a branch in heapam_handler.c in ..._set_new_filenode() of\ntemporary tables. It feels like a free win and it's more consistent.\n\nI'm not 100% on the tableam abstraction -- it's possible all of this\nchange should have happened in heapam_handler somewhere? I don't think\nso but it does feel weird to be touching it and also doing the same\nthing elsewhere.\n\nI think this has addressed all the questions now.",
"msg_date": "Mon, 4 Apr 2022 23:04:04 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Simple Rebase",
"msg_date": "Tue, 28 Jun 2022 11:02:58 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Sun, 8 Nov 2020 at 18:19, Greg Stark <stark@mit.edu> wrote:\n>\n> We had an outage caused by transaction wraparound. And yes, one of the\n> first things I did on this site was check that we didn't have any\n> databases that were in danger of wraparound.\n\nFwiw this patch has been in \"Ready for Committer\" state since April\nand has been moved forward three times including missing the release.\nIt's a pretty short patch and fixes a problem that caused an outage\nfor $previous_employer and I've had private discussions from other\npeople who have been struggling with the same issue. Personally I\nconsider it pretty close to a bug fix and worth backpatching. I think\nit's pretty annoying to have put out a release without this fix.\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 2 Nov 2022 09:54:54 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> Simple Rebase\n\nI took a little bit of a look through these.\n\n* I find 0001 a bit scary, specifically that it's decided it's\nokay to apply extract_autovac_opts, pgstat_fetch_stat_tabentry_ext,\nand especially relation_needs_vacanalyze to another session's\ntemp table. How safe is that really?\n\n* Don't see much point in renaming checkTempNamespaceStatus.\nThat doesn't make it not an ABI break. If we want to back-patch\nthis we'll have to do something different than what you did here.\n\n* In 0002, I don't especially approve of what you've done with\nthe relminmxid calculation --- that seems to move it out of\n\"pure bug fix\" territory and into \"hmm, I wonder if this\ncreates new bugs\" territory. Also, skipping that update\nfor non-temp tables immediately falsifies ResetVacStats'\nclaimed charter of \"resetting to the same values used when\ncreating tables\". Surely GetOldestMultiXactId isn't *that*\nexpensive, especially compared to the costs of file truncation.\nI think you should just do GetOldestMultiXactId straight up,\nand maybe submit a separate performance-improvement patch\nto make it do the other thing (in both places) for temp tables.\n\n* I wonder if this is the best place for ResetVacStats --- it\ndoesn't seem to be very close to the code it needs to stay in\nsync with. If there's no better place, perhaps adding cross-\nreference comments in both directions would be advisable.\n\n* 0003 says it's running temp.sql by itself to avoid interference\nfrom other sessions, but sadly that cannot avoid interference\nfrom background autovacuum/autoanalyze. I seriously doubt this\npatch would survive contact with the buildfarm. Do we actually\nneed a new test case? It's not like the code won't get exercised\nwithout it --- we have plenty of temp table truncations, surely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 05 Nov 2022 11:34:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Sat, 5 Nov 2022 at 11:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Greg Stark <stark@mit.edu> writes:\n> > Simple Rebase\n>\n> I took a little bit of a look through these.\n>\n> * I find 0001 a bit scary, specifically that it's decided it's\n> okay to apply extract_autovac_opts, pgstat_fetch_stat_tabentry_ext,\n> and especially relation_needs_vacanalyze to another session's\n> temp table. How safe is that really?\n\nI can look a bit more closely but none of them are doing any thing\nwith the table itself, just the catalog entries which afaik have\nalways been fair game for other sessions. So I'm not really clear what\nkind of unsafeness you're asking about.\n\n> * Don't see much point in renaming checkTempNamespaceStatus.\n> That doesn't make it not an ABI break. If we want to back-patch\n> this we'll have to do something different than what you did here.\n\nWell it's an ABI break but at least it's an ABI break that gives a\nbuild-time error or shared library loading error rather than one that\njust crashes or writes to random memory at runtime.\n\n> * In 0002, I don't especially approve of what you've done with\n> the relminmxid calculation --- that seems to move it out of\n> \"pure bug fix\" territory and into \"hmm, I wonder if this\n> creates new bugs\" territory.\n\nHm. Ok, I can separate that into a separate patch. I admit I have a\nlot of trouble remembering how multixactids work.\n\n\n> Also, skipping that update\n> for non-temp tables immediately falsifies ResetVacStats'\n> claimed charter of \"resetting to the same values used when\n> creating tables\". Surely GetOldestMultiXactId isn't *that*\n> expensive, especially compared to the costs of file truncation.\n> I think you should just do GetOldestMultiXactId straight up,\n> and maybe submit a separate performance-improvement patch\n> to make it do the other thing (in both places) for temp tables.\n\nHm. the feedback I got earlier was that it was quite expensive. 
That\nsaid, I think the concern was about the temp tables where the truncate\nwas happening on every transaction commit when they're used. For\nregular truncates I'm not sure how important optimizing it is.\n\n> * I wonder if this is the best place for ResetVacStats --- it\n> doesn't seem to be very close to the code it needs to stay in\n> sync with. If there's no better place, perhaps adding cross-\n> reference comments in both directions would be advisable.\n\nI'll look at that. I think relfrozenxid and relfrozenmxid are touched\nin a lot of places so it may be tilting at windmills to try to\ncentralize the code working with them at this point...\n\n> * 0003 says it's running temp.sql by itself to avoid interference\n> from other sessions, but sadly that cannot avoid interference\n> from background autovacuum/autoanalyze. I seriously doubt this\n> patch would survive contact with the buildfarm. Do we actually\n> need a new test case? It's not like the code won't get exercised\n> without it --- we have plenty of temp table truncations, surely.\n\nNo I don't think we do. I kept it in a separate commit so it could be\ndropped when committing.\n\nBut just having truncate working isn't really good enough either. An\nearly version of the patch had a bug that meant it didn't run at all\nso truncate worked fine but relfrozenxid never got reset.\n\nIn thinking about whether we could have a basic test that temp tables\nare getting reset at all it occurs to me that there's still a gap\nhere:\n\nYou can have a session attached to a temp namespace that never\nactually uses the temp tables. That would prevent autovacuum from\ndropping them and still never reset their vacuum stats. :( Offhand I\nthink PreCommit_on_commit_actions() could occasionally truncate all ON\nCOMMIT TRUNCATE tables even if they haven't been touched in this\ntransaction.\n\n\n--\ngreg\n\n\n",
"msg_date": "Thu, 1 Dec 2022 11:13:01 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-01 11:13:01 -0500, Greg Stark wrote:\n> On Sat, 5 Nov 2022 at 11:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Greg Stark <stark@mit.edu> writes:\n> > > Simple Rebase\n> >\n> > I took a little bit of a look through these.\n> >\n> > * I find 0001 a bit scary, specifically that it's decided it's\n> > okay to apply extract_autovac_opts, pgstat_fetch_stat_tabentry_ext,\n> > and especially relation_needs_vacanalyze to another session's\n> > temp table. How safe is that really?\n> \n> I can look a bit more closely but none of them are doing any thing\n> with the table itself, just the catalog entries which afaik have\n> always been fair game for other sessions. So I'm not really clear what\n> kind of unsafeness you're asking about.\n\nIs that actually true? Don't we skip some locking operations for temporary\ntables, which then also means catalog modifications cannot safely be done in\nother sessions?\n\n\n> > Also, skipping that update\n> > for non-temp tables immediately falsifies ResetVacStats'\n> > claimed charter of \"resetting to the same values used when\n> > creating tables\". Surely GetOldestMultiXactId isn't *that*\n> > expensive, especially compared to the costs of file truncation.\n> > I think you should just do GetOldestMultiXactId straight up,\n> > and maybe submit a separate performance-improvement patch\n> > to make it do the other thing (in both places) for temp tables.\n> \n> Hm. the feedback I got earlier was that it was quite expensive. That\n> said, I think the concern was about the temp tables where the truncate\n> was happening on every transaction commit when they're used. For\n> regular truncates I'm not sure how important optimizing it is.\n\nAnd it's not called just once, but once for each relation.\n\n\n> > * I wonder if this is the best place for ResetVacStats --- it\n> > doesn't seem to be very close to the code it needs to stay in\n> > sync with. 
If there's no better place, perhaps adding cross-\n> > reference comments in both directions would be advisable.\n> \n> I'll look at that. I think relfrozenxid and relfrozenmxid are touched\n> in a lot of places so it may be tilting at windmills to try to\n> centralize the code working with them at this point...\n\nNot convinced. Yes, there's plenty of references to relfrozenxid, but most of\nthem don't modify it.\n\n\nI find it problematic that ResetVacStats() bypasses tableam. Normal vacuums\netc go through tableam but you put a ResetVacStats() besides each call to\ntable_relation_nontransactional_truncate(). Seems like this should just be in\nheapam_relation_nontransactional_truncate()?\n\nIs it a good idea to use heap_inplace_update() in ResetVacStats()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Dec 2022 11:17:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
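For readers following the wraparound arithmetic this thread keeps circling: 32-bit transaction IDs are compared in modular arithmetic, so "older" is only meaningful within a 2^31 window of the current XID. A standalone sketch (illustrative Python, not PostgreSQL source; it mirrors the logic of TransactionIdPrecedes() in transam.c, with the permanent XIDs below FirstNormalTransactionId compared numerically):

```python
# Conceptual sketch of wraparound-aware XID comparison (not PostgreSQL
# code).  XIDs 0..2 (Invalid, Bootstrap, Frozen) are permanent and
# compare numerically; normal XIDs compare in signed modular arithmetic,
# so each XID sees half the 2^32 circle behind it and half ahead.

FIRST_NORMAL_XID = 3
XID_MODULO = 2**32

def xid_precedes(id1: int, id2: int) -> bool:
    """Return True if id1 is logically older than id2."""
    if id1 < FIRST_NORMAL_XID or id2 < FIRST_NORMAL_XID:
        return id1 < id2                 # permanent XIDs: plain compare
    diff = (id1 - id2) % XID_MODULO
    return diff >= 2**31                 # i.e. (int32)(id1 - id2) < 0
```

This is why a stale relfrozenxid is a correctness problem and not just bookkeeping: once the current XID advances more than 2^31 past it, the comparison flips and "old" tuples suddenly look like they are in the future.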
{
"msg_contents": "On Thu, 1 Dec 2022 at 14:18, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-12-01 11:13:01 -0500, Greg Stark wrote:\n> > On Sat, 5 Nov 2022 at 11:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > > * I find 0001 a bit scary, specifically that it's decided it's\n> > > okay to apply extract_autovac_opts, pgstat_fetch_stat_tabentry_ext,\n> > > and especially relation_needs_vacanalyze to another session's\n> > > temp table. How safe is that really?\n> >\n> > I can look a bit more closely but none of them are doing any thing\n> > with the table itself, just the catalog entries which afaik have\n> > always been fair game for other sessions. So I'm not really clear what\n> > kind of unsafeness you're asking about.\n>\n> Is that actually true? Don't we skip some locking operations for temporary\n> tables, which then also means catalog modifications cannot safely be done in\n> other sessions?\n\nThis code isn't doing any catalog modifications from other sessions.\nThe code Tom's referring to is just autovacuum looking at relfrozenxid\nand relfrozenmxid and printing warnings if they're approaching the\nwraparound limits that would otherwise trigger an anti-wraparound\nvacuum.\n\n> I find it problematic that ResetVacStats() bypasses tableam. Normal vacuums\n> etc go through tableam but you put a ResetVacStats() besides each call to\n> table_relation_nontransactional_truncate(). Seems like this should just be in\n> heapam_relation_nontransactional_truncate()?\n\nOk. I think this patch actually predated the tableam (by a lot. I've had\nothers actually approach me about whether there's a good solution\nbecause it's been biting them too) and I didn't see that in the merge\nforward.\n\n> Is it a good idea to use heap_inplace_update() in ResetVacStats()?\n\nThis is a good question. I had the impression it was actually the\nright thing to do and there's actually been bugs in the past caused by\n*not* using heap_inplace_update() so I think it's actually important\nto get this right.\n\nI don't see any documentation explaining what the rules are for when\ninplace edits are safe or unsafe or indeed when they're necessary for\ncorrectness. So I searched back through the archives and checked when\nit came up.\n\nIt seems there are a few issues:\n\na) Nontransactional operations like vacuum have to use it because they\ndon't have a transaction. Presumably this is why vacuum normally uses\ninplace_update for these stats.\n\nb) In the past SnapshotNow scans would behave incorrectly if we do\nnormal mvcc updates on rows without exclusive locks protecting against\nconcurrent scans. I'm not sure if this is still a factor these days\nwith the types of scans that still exist.\n\nc) There are some constraints having to do with logical replication\nthat I didn't understand. I hope they don't relate to frozenxid but I\ndon't know.\n\nd) There were also some issues having to do with SInval messages but I\nthink they were additional constraints that inplace updates needed to\nbe concerned about.\n\nThese truncates are done at end of transaction but precommit so the\ntransaction is still alive and there obviously should be no concurrent\nscans on temporary tables so I think it should be safe to do a regular\nmvcc update. Is it a good idea to bloat the catalog though? If you\nhave many temporary tables and don't actually touch more than a few of\nthem in your transaction that could be a lot of new tuple inserts on\nevery commit.\n\nActually it's only sort of true -- if no persistent xid is created\nthen we would be creating one just for this. But that shouldn't happen\nbecause we only truncate if the transaction ever \"touched\" a temporary\ntable. It occurs to me it could still be kind of a problem if you have\na temporary table that you use once and then your session stays alive\nfor a long time without using temporary tables. Then it won't be\ntruncated and the frozenxid won't be advanced :(\n\nIt's kind of annoying that we have to put RecentXmin and\nGet{Our,}OldestMultiXactId() in the table when truncating and then\nkeep advancing them even if there's no data in the table. Ideally\nwouldn't it be better to be able to have Invalid{Sub,}Xid there and\nonly initialize it when a first insert is made?\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 2 Dec 2022 23:33:29 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "So.... I talked about this patch with Ronan Dunklau and he had a good\nquestion.... Why are we maintaining relfrozenxid and relminmxid in\npg_class for temporary tables at all? Autovacuum can't use them and\nother sessions won't care about them. The only session that might care\nabout them is the one attached to the temp schema.\n\nSo we could replace relfrozenxid and relminmxid for temp tables with a\nlocal hash table that can be updated on every truncate easily and\nefficiently.\n\nIf a temp table actually wraps around the only session that runs into\nproblems is the one attached to that temp schema. It can throw local\nsession errors and doesn't need to interfere with the rest of the\ncluster in any way. It could even start running vacuums though I'm not\nconvinced that's a great solution.\n\nAt least I think so. I'm pretty sure about relfrozenxid but as always\nI don't really know how relminmxid works. I think we only need to\nworry about multixacts for subtransactions, all of which are in the\nsame transaction -- does that even work that way?\n\nBut this is really attractive since it means no catalog updates just\nfor temp tables on every transaction and no wraparound cluster\nproblems even if you have on-commit-preserve-rows tables. It really\nshouldn't be possible for a regular user to cause the whole cluster to\nrun into problems just by creating a temp table and keeping a\nconnection around a while.\n\n\n",
"msg_date": "Tue, 6 Dec 2022 13:47:39 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-06 13:47:39 -0500, Greg Stark wrote:\n> So.... I talked about this patch with Ronan Dunklau and he had a good\n> question.... Why are we maintaining relfrozenxid and relminmxid in\n> pg_class for temporary tables at all? Autovacuum can't use them and\n> other sessions won't care about them. The only session that might care\n> about them is the one attached to the temp schema.\n\nUh, without relfrozenxid for temp tables we can end up truncating clog\n\"ranges\" away that are required to access the temp tables. So this would\nbasically mean that temp tables can't be used reliably anymore.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 10:59:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
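Andres's clog point above can be made concrete: the commit log records a commit status per XID and may only be truncated below the oldest relfrozenxid across *all* relations, so even one temp table with an old relfrozenxid pins clog for the whole cluster. A toy model (purely illustrative; the table names and XID values are made up, and the real horizon computation in vacuum.c involves databases' datfrozenxid as well):

```python
# Toy model of why relfrozenxid tracking for temp tables matters for
# clog truncation.  Commit status may only be discarded for XIDs older
# than the minimum relfrozenxid over every relation that might still
# contain unfrozen tuples.

def clog_truncation_horizon(relfrozenxids):
    """Oldest XID whose commit status must still be retained."""
    return min(relfrozenxids)

regular = {"t1": 1000, "t2": 1200}
temp = {"pg_temp_3.scratch": 500}     # the oldest unfrozen XID lives here

with_temp = clog_truncation_horizon(list(regular.values()) + list(temp.values()))
without_temp = clog_truncation_horizon(list(regular.values()))
# Excluding temp tables would move the horizon from 500 to 1000,
# discarding commit status the temp table's tuples still need.
```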
{
"msg_contents": "On Tue, 6 Dec 2022 at 13:59, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-12-06 13:47:39 -0500, Greg Stark wrote:\n> > So.... I talked about this patch with Ronan Dunklau and he had a good\n> > question.... Why are we maintaining relfrozenxid and relminmxid in\n> > pg_class for temporary tables at all? Autovacuum can't use them and\n> > other sessions won't care about them. The only session that might care\n> > about them is the one attached to the temp schema.\n>\n> Uh, without relfrozenxid for temp tables we can end up truncating clog\n> \"ranges\" away that are required to access the temp tables. So this would\n> basically mean that temp tables can't be used reliably anymore.\n\nTrue, we would have to have some other mechanism for exporting the\nfrozenxid that the session needs. Presumably that would be something\nin PGProc like the xmin and other numbers. It could be updated by\nscanning our local hash table whenever a transaction starts. This also\nprobably is what would be needed for multixacts I guess?\n\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 6 Dec 2022 14:50:34 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
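The session-local tracking Greg floats above could be modeled roughly like this (hypothetical Python sketch only; the function names `on_temp_table_truncate` and `session_advertised_horizon` are invented for illustration, and a real implementation would be C code publishing the horizon through PGPROC, analogous to how a backend advertises its xmin):

```python
# Hypothetical model of the proposal: keep temp-table freeze info in a
# session-local map instead of pg_class, and advertise a single horizon
# for the session whenever a transaction starts, so clog truncation can
# still account for it.

local_frozenxid = {}                     # temp relation oid -> frozen xid

def on_temp_table_truncate(relid: int, recent_xmin: int) -> None:
    """Record the new effective relfrozenxid after an on-commit truncate."""
    local_frozenxid[relid] = recent_xmin

def session_advertised_horizon():
    """Oldest frozen xid this session still needs, or None if no temp tables."""
    return min(local_frozenxid.values(), default=None)
```

The attraction, as the thread notes, is that truncates would no longer touch the catalog at all; the open question is whether that new mechanism is worth it given the inplace pg_class update already avoids bloat.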
{
"msg_contents": "Hi,\n\nOn 2022-12-06 14:50:34 -0500, Greg Stark wrote:\n> On Tue, 6 Dec 2022 at 13:59, Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-12-06 13:47:39 -0500, Greg Stark wrote:\n> > > So.... I talked about this patch with Ronan Dunklau and he had a good\n> > > question.... Why are we maintaining relfrozenxid and relminmxid in\n> > > pg_class for temporary tables at all? Autovacuum can't use them and\n> > > other sessions won't care about them. The only session that might care\n> > > about them is the one attached to the temp schema.\n> >\n> > Uh, without relfrozenxid for temp tables we can end up truncating clog\n> > \"ranges\" away that are required to access the temp tables. So this would\n> > basically mean that temp tables can't be used reliably anymore.\n> \n> True, we would have to have some other mechanism for exporting the\n> frozenxid that the session needs. Presumably that would be something\n> in PGProc like the xmin and other numbers. It could be updated by\n> scanning our local hash table whenever a transaction starts.\n\nThat'd be a fair bit of new mechanism. Not at all impossible, but I'm doubtful\nthe complexity is worth it. In your patch the relevant catalog change is an\ninplace change and thus doesn't cause bloat. And if we have to retain the\nclog, I don't see that much benefit in the proposed approach.\n\n\n> This also probably is what would be needed for multixacts I guess?\n\nYes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 12:18:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Thu, 1 Dec 2022 at 14:18, Andres Freund <andres@anarazel.de> wrote:\n>\n> I find it problematic that ResetVacStats() bypasses tableam. Normal vacuums\n> etc go through tableam but you put a ResetVacStats() besides each call to\n> table_relation_nontransactional_truncate(). Seems like this should just be in\n> heapam_relation_nontransactional_truncate()?\n\nSo this seems a bit weird. The only other part of the tableam that\ntouches freezexid and minmxid is table_relation_set_new_filelocator()\nwhich returns them via references so that the callers\n(heap.c:heap_create() and relcache.c:RelationSetNewRelfilenumber())\ncan set them themselves.\n\nI can't really duplicate that layering without changing the API of\nheapam_relation_nontransactional_truncate(). Which if it changes would\nbe quite annoying I think for external pluggable tableams.\n\nBut you're right that where I've put it will update relfrozenxid and\nminmxid even for relations that have a tableam handler that returns\nInvalidXid and doesn't need it. That does seem inappropriate.\n\nI could put it directly in the heapam_handler but is that a layering\nissue to be doing a catalog update from within the tableam_handler?\nThere are no other catalog updates in there.\n\nOn the other hand the existing callers of\n*_nontransactional_truncate() don't have any existing catalog updates\nthey want to make so perhaps returning the xid values by reference was\njust for convenience to avoid doing an extra update and isn't needed\nhere.\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 7 Dec 2022 17:02:46 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Wed, 7 Dec 2022 at 22:02, Greg Stark <stark@mit.edu> wrote:\n> > Seems like this should just be in\n> > heapam_relation_nontransactional_truncate()?\n\nSo here I've done it that way. It is a bit of an unfortunate layering\nsince it means the heapam_handler is doing the catalog change but it\ndoes seem inconvenient to pass relfrozenxid etc back up and have the\ncaller make the changes when there are no other changes to make.\n\nAlso, I'm not sure what changed but maybe there was some other commits\nin vacuum.c in the meantime. I remember being frustrated previously\ntrying to reuse that code but now it works fine. So I was able to\nreduce the copy-pasted code significantly.\n\n(The tests are probably not worth committing, they're just here for my\nown testing to be sure it's doing anything)\n\n-- \ngreg",
"msg_date": "Tue, 13 Dec 2022 14:22:37 +0000",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Sat, 5 Nov 2022 at 15:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Greg Stark <stark@mit.edu> writes:\n> > Simple Rebase\n>\n> I took a little bit of a look through these.\n>\n> * I find 0001 a bit scary, specifically that it's decided it's\n> okay to apply extract_autovac_opts, pgstat_fetch_stat_tabentry_ext,\n> and especially relation_needs_vacanalyze to another session's\n> temp table. How safe is that really?\n\nSo I don't see any evidence we skip any locking on pg_class when doing\nupdates on rows for temporary tables. It's a bit hard to tell because\nthere are several ways of checking if a table is temporary. Most\nplaces just check relpersistence but there is also an accessing macro\nRelationIsPermanent() as well as a relcache field rd_islocaltemp which\ncould be used. I'm still looking, but so far nearly all the checks I've\nfound just throw errors for temporary tables and none relate to any\noperations on pg_class entries.\n\nIn any case we're already using the pg_class struct to look at\nrelpersistence itself.... So... the danger to check for is something\nwe would already be at risk of. Namely that the pg_class row is\nupdated without any locking and then vacuumed away while we hold this\nstruct pointer and we're looking at fields that have since been\noverwritten with other data from an unrelated row. But that would\nrequire all kinds of rules to be broken and would be posing a risk for\nanyone just running select * from pg_class. So I find it hard to\nbelieve we would be doing this.\n\nextract_autovac_opts looks at a variable sized field so concurrent\nupdates would be an issue, but obviously there are only mvcc updates\nto this field so I don't see how it could be a problem.\n\nFor pgstat_fetch_stat_tabentry I don't even see what the possible risks\nwould be. The words persistence and temporary don't appear in pgstat.c\n(except \"temporary statistics\" in some log messages).\n\nAnd then there's relation_needs_vacanalyze() and it looks at\nrelfrozenxid and relminmxid (and relname in some debug messages).\nThose fields could get updated by a concurrent vacuum or -- after this\npatch -- a truncate in an inplace_update. That seems to be the only\nreal risk here.\n\nBut this is not related to temporary tables at all. Any pg_class entry\ncan get in_place_update'd by plain old vacuum to update the\nrelfrozenxid and relminmxid. The in_place_update would take an\nexclusive lock on the buffer but I think that doesn't actually protect\nus since autovacuum would only have a pin? Or does the SysCache\nprotect us by copying out the whole row while it's locked? This is\nworth answering but it's not an issue related to this patch or\ntemporary tables. Is autovacuum looking at relfrozenxid and relminmxid\nin a way that's safely protected against a concurrent manual vacuum\nissuing an in_place_update?\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 14 Dec 2022 18:17:28 +0000",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
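For context on the "wraparound limits that would otherwise trigger an anti-wraparound vacuum" discussed above, here is a deliberately simplified sketch of the force-vacuum criterion in autovacuum's relation_needs_vacanalyze(). Plain integers stand in for XIDs (the real code uses wraparound-aware comparisons and applies an analogous test to relminmxid with multixact ages):

```python
# Simplified sketch (not PostgreSQL code): a table is force-vacuumed
# once its relfrozenxid has fallen more than autovacuum_freeze_max_age
# transactions behind the current XID, regardless of the usual
# dead-tuple thresholds.

AUTOVACUUM_FREEZE_MAX_AGE = 200_000_000   # default GUC value

def needs_antiwraparound_vacuum(relfrozenxid: int, recent_xid: int,
                                freeze_max_age: int = AUTOVACUUM_FREEZE_MAX_AGE) -> bool:
    force_limit = recent_xid - freeze_max_age
    return relfrozenxid < force_limit
```

This is the check that an un-truncated temp table with a stale relfrozenxid eventually trips, even though autovacuum cannot actually vacuum another session's temp table, which is the core of the problem the patch addresses.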
{
"msg_contents": "On Wed, Dec 14, 2022 at 1:18 PM Greg Stark <stark@mit.edu> wrote:\n> So I don't see any evidence we skip any locking on pg_class when doing\n> updates on rows for temporary tables.\n\nI don't know what this means. You don't have to lock pg_class to\nupdate rows in any table, whether temporary or otherwise.\n\nYou do have to lock a table in order to update its pg_class row,\nthough, whether the table is temporary or not. Otherwise, another\nsession could drop it while you're doing something with it, after\nwhich bad things would happen.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Dec 2022 16:22:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "> You do have to lock a table in order to update its pg_class row,\n> though, whether the table is temporary or not. Otherwise, another\n> session could drop it while you're doing something with it, after\n> which bad things would happen.\n\nI was responding to this from Andres:\n\n> Is that actually true? Don't we skip some locking operations for temporary\n> tables, which then also means catalog modifications cannot safely be done in\n> other sessions?\n\nI don't actually see this in the code but in any case we're not doing\nany catalog modifications here. We're just inspecting values of\nrelfrozenxid and relminmxid in the struct returned from\nSearchSysCache. Which I think is no different for temp tables than any\nother table.\n\n\n",
"msg_date": "Wed, 14 Dec 2022 21:43:52 +0000",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 4:44 PM Greg Stark <stark@mit.edu> wrote:\n> > You do have to lock a table in order to update its pg_class row,\n> > though, whether the table is temporary or not. Otherwise, another\n> > session could drop it while you're doing something with it, after\n> > which bad things would happen.\n>\n> I was responding to this from Andres:\n>\n> > Is that actually true? Don't we skip some locking operations for temporary\n> > tables, which then also means catalog modifications cannot safely be done in\n> > other sessions?\n>\n> I don't actually see this in the code ...\n\nYes, I think Andres may be wrong in this case.\n\n(Dang, I don't get to say that very often.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Dec 2022 08:29:31 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On 2022-Dec-13, Greg Stark wrote:\n\n> So here I've done it that way. It is a bit of an unfortunate layering\n> since it means the heapam_handler is doing the catalog change but it\n> does seem inconvenient to pass relfrozenxid etc back up and have the\n> caller make the changes when there are no other changes to make.\n\nAre you still at this? CFbot says the meson tests failed last time for\nsome reason:\nhttp://commitfest.cputube.org/greg-stark.html\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 2 Feb 2023 15:47:05 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "I think that was spurious. It looked good when we looked at it yesterday.\nThe rest that failed seemed unrelated and was also taking on my SSL patch\ntoo.\n\nI talked to Andres about the possibility of torn reads in the pg_class\nstats but those are all 4-byte columns so probably safe. And in any case\nthat's a pre-existing possibility just more likely (if it's possible at\nall) by frequent truncates.\n\nOn Thu, Feb 2, 2023, 15:47 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2022-Dec-13, Greg Stark wrote:\n>\n> > So here I've done it that way. It is a bit of an unfortunate layering\n> > since it means the heapam_handler is doing the catalog change but it\n> > does seem inconvenient to pass relfrozenxid etc back up and have the\n> > caller make the changes when there are no other changes to make.\n>\n> Are you still at this? CFbot says the meson tests failed last time for\n> some reason:\n> http://commitfest.cputube.org/greg-stark.html\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n>",
"msg_date": "Sat, 4 Feb 2023 17:12:36 +0100",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-04 17:12:36 +0100, Greg Stark wrote:\n> I think that was spurious. It looked good when we looked at it yesterday.\n> The rest that failed seemed unrelated and was also taking on my SSL patch\n> too.\n\nI don't think the SSL failures are related to the failure of this\npatch. That was in one of the new tests executed as part of the main\nregression tests:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/6418299974582272/testrun/build/testrun/regress/regress/regression.diffs\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/temp.out /tmp/cirrus-ci-build/build/testrun/regress/regress/results/temp.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/temp.out\t2023-02-04 05:43:14.225905000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/regress/regress/results/temp.out\t2023-02-04 05:46:57.468250000 +0000\n@@ -108,7 +108,7 @@\n :old_relfrozenxid <> :new_relfrozenxid AS frozenxid_advanced;\n pages_analyzed | pages_reset | tuples_analyzed | tuples_reset | frozenxid_advanced \n ----------------+-------------+-----------------+--------------+--------------------\n- t | t | t | t | t\n+ t | t | t | t | f\n (1 row)\n \n -- The toast table can't be analyzed so relpages and reltuples can't\n\n\nWhereas the SSL test once failed in subscription/031_column_list (a test\nwith some known stability issues) and twice in postgres_fdw.\n\nUnfortunately the postgres_fdw failures are failing to upload:\n\n[17:41:25.601] Failed to upload artifacts: Put 
\"https://storage.googleapis.com/cirrus-ci-5309429912436736-3271c9/artifacts/postgresql-cfbot/postgresql/6061134453669888/testrun/build/testrun/runningcheck.log?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=cirrus-ci%40cirrus-ci-community.iam.gserviceaccount.com%2F20230128%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20230128T174012Z&X-Goog-Expires=600&X-Goog-SignedHeaders=host%3Bx-goog-content-length-range%3Bx-goog-meta-created_by_task&X-Goog-Signature=6f5606e3966d68060a14077deb93ed5bf680c4636e6409e5eba6ca8f1ff9b11302c1b5605089e2cd759fd90d1542a4e2c794fd4c1210f04b056d7e09db54d3e983c34539fb4c24787b659189c27e1b6d0ebc1d1807b38a066c10e62fa57a374c3a7fbc610edddf1dfe900b3c788c8d7d7ded3366449b4520992c5ed7a3136c7103b7a668b591542bba58a32f5a84cb21bbeeafea09dc525d1631a5f413a0f98df43cc90ebf6c4206e6df61606bc634c3a8116c53d7c6dd4bc5b26547cd7d1a1633839ace694b73426267a9f434317350905b905b9c88132be14a7762c2f204b8072a3bd7e4e1d30217d9e60102d525b08e28bcfaabae80fba734a1015d8eb0a7\": http2: request body larger than specified content length\n\nHm, I suspect the problem is that we didn't shut down the server due to\nthe error, so the log file was changing while cirrus was trying to\nupload.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 5 Feb 2023 15:30:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-05 15:30:57 -0800, Andres Freund wrote:\n> Hm, I suspect the problem is that we didn't shut down the server due to\n> the error, so the log file was changing while cirrus was trying to\n> upload.\n\nPushed a fix for that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Feb 2023 15:42:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Thu, Feb 2, 2023, 15:47 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Are you still at this? CFbot says the meson tests failed last time for\n> some reason:\n> http://commitfest.cputube.org/greg-stark.html\n\nOn Sat, Feb 04, 2023 at 05:12:36PM +0100, Greg Stark wrote:\n> I think that was spurious. It looked good when we looked at it yesterday.\n> The rest that failed seemed unrelated and was also taking on my SSL patch\n> too.\n\nThe patch still occasionally fails its tests under freebsd.\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/42/3358\n\n\n",
"msg_date": "Wed, 29 Mar 2023 16:48:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Wed, 29 Mar 2023 at 17:48, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> The patch still occasionally fails its tests under freebsd.\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/42/3358\n\nSo on the one hand, I don't think the plan is to actually commit the\ntests. They're very specific to one bit of internal implementation and\nthey're the kind of test that makes maintaining the test suite a pain\nand causes false positives for unrelated patches. They're only in the patch I\nposted at all to demonstrate that the code was actually running at all\nand having the desired effect.\n\nThat said I would be a lot more sanguine about this failure if I had\nany theory for *why* it would fail. And on FreeBSD specifically which\nis even stranger.\n\nAfaict the relfrozenxid should always be our own transaction when the\ntable is created and then again our own new transaction when the table\nis truncated. And neither the INSERT nor the ANALYZE should be\ntouching relfrozenxid, nor should it be possible autovacuum is\ninterfering given it's a temp table (and we're attached to the\nschema). And none of this should be platform dependent.\n\nI wonder if some other test is behaving differently on FreeBSD and\nleaving behind a prepared transaction or a zombie session in some idle\nstate or something like that? Is there anything (aside from\nautovacuum) connecting or running in the background in the test\nenvironment that could be creating a transaction id and holding back\nsnapshot xmin?\n\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 5 Apr 2023 01:41:42 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Wed, 5 Apr 2023 at 01:41, Greg Stark <stark@mit.edu> wrote:\n>\n> On Wed, 29 Mar 2023 at 17:48, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > The patch still occasionally fails its tests under freebsd.\n> > https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/42/3358\n>\n> I wonder if some other test is behaving differently on FreeBSD and\n> leaving behind a prepared transaction or a zombie session in some idle\n> state or something like that? Is there anything (aside from\n> autovacuum) connecting or running in the background in the test\n> environment that could be creating a transaction id and holding back\n> snapshot xmin?\n\nOk, I've reproduced this here by running the tests under meson. It\ndoesn't look like it's platform dependent.\n\nIt seems under meson the different test suites are run in parallel or\nat least isolation/deadlock-parallel are still running stuff when the\nregression checks are running. If that's not expected then maybe\nsomething's not behaving as expected? I've attached pg_stat_activity\nfrom during the test run.\n\nRegardless it shows these tests are obviously not robust enough to\ninclude as they would break for anyone running make installcheck on a\nnon-idle cluster.\n\nThat's fine, as I said, the tests were just there to give a reviewer\nmore confidence and I think it's fine to just not include them in the\ncommit.\n\n-- \ngreg",
"msg_date": "Wed, 5 Apr 2023 10:19:10 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-05 10:19:10 -0400, Greg Stark wrote:\n> On Wed, 5 Apr 2023 at 01:41, Greg Stark <stark@mit.edu> wrote:\n> >\n> > On Wed, 29 Mar 2023 at 17:48, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > The patch still occasionally fails its tests under freebsd.\n> > > https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/42/3358\n> >\n> > I wonder if some other test is behaving differently on FreeBSD and\n> > leaving behind a prepared transaction or a zombie session in some idle\n> > state or something like that? Is there anything (aside from\n> > autovacuum) connecting or running in the background in the test\n> > environment that could be creating a transaction id and holding back\n> > snapshot xmin?\n>\n> Ok, I've reproduced this here by running the tests under meson. It\n> doesn't look like it's platform dependent.\n>\n> It seems under meson the different test suites are run in parallel or\n> at least isolation/deadlock-parallel are still running stuff when the\n> regression checks are running. If that's not expected then maybe\n> something's not behaving as expected? I've attached pg_stat_activity\n> from during the test run.\n\nThe freebsd test that failed is running tests in parallel, against an existing\ncluster. In that case it's expected that there's some concurrency.\n\nWhy does this cause your tests to fail? They're in separate databases, so the\nvisibility effects of the concurrent tests should be somewhat limited.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Apr 2023 08:11:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Wed, 5 Apr 2023 at 11:15, Andres Freund <andres@anarazel.de> wrote:\n>\n> The freebsd test that failed is running tests in parallel, against an existing\n> cluster. In that case it's expected that there's some concurrency.\n>\n> Why does this cause your tests to fail? They're in separate databases, so the\n> visibility effects of the concurrent tests should be somewhat limited.\n\nBecause I'm checking that relfrozenxid was updated but any concurrent\ntransactions even in other databases hold back the xmin.\n\nHonestly I'm glad I wrote the test because it was hard to know whether\nmy code was doing anything at all without it (and it wasn't in the\nfirst cut...) But I don't think there's much value in having it be in\nthe regression suite. We don't generally write tests to ensure that a\nspecific internal implementation behaves in the specific way it was\nwritten to.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 5 Apr 2023 13:26:53 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-05 13:26:53 -0400, Greg Stark wrote:\n> On Wed, 5 Apr 2023 at 11:15, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > The freebsd test that failed is running tests in parallel, against an existing\n> > cluster. In that case it's expected that there's some concurrency.\n> >\n> > Why does this cause your tests to fail? They're in separate databases, so the\n> > visibility effects of the concurrent tests should be somewhat limited.\n> \n> Because I'm checking that relfrozenxid was updated but any concurrent\n> transactions even in other databases hold back the xmin.\n\nNot if you determine a relation specific xmin, and the relation is not a\nshared relation.\n\nISTM that the problem here really is that you're relying on RecentXmin, rather\nthan computing something more accurate. Why not use\nGetOldestNonRemovableTransactionId(rel) - It's a bit more expensive, but I\ndon't think it'll matter compared to the cost of truncating the relation.\n\n\nSomehow it doesn't feel right to use vac_update_relstats() in\nheapam_handler.c.\n\nI also don't like that your patch references\nheapam_relation_nontransactional_truncate in AddNewRelationTuple() - we\nshouldn't add more comments piercing tableam than necessary.\n\n\n> Honestly I'm glad I wrote the test because it was hard to know whether\n> my code was doing anything at all without it (and it wasn't in the\n> first cut...) But I don't think there's much value in having it be in\n> the regression suite. We don't generally write tests to ensure that a\n> specific internal implementation behaves in the specific way it was\n> written to.\n\nTo me it seems important to test that your change actually does what it\nintends to. Possibly the test needs to be relaxed some, but I do think we want\ntests for the change.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Apr 2023 10:41:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Wed, 5 Apr 2023 at 13:42, Andres Freund <andres@anarazel.de> wrote:\n>\n> Not if you determine a relation specific xmin, and the relation is not a\n> shared relation.\n>\n> ISTM that the problem here really is that you're relying on RecentXmin, rather\n> than computing something more accurate. Why not use\n> GetOldestNonRemovableTransactionId(rel) - It's a bit more expensive, but I\n> don't think it'll matter compared to the cost of truncating the relation.\n\nThanks for the review!\n\nHm, I was just copying heapam_handler.c:593 so it would be consistent\nwith what we do when we create a new table. I wasn't aware we had\nanything that did this extra work I'll look at it.\n\nBut I'm not sure it's the best idea to decide on how\ntruncate/vacuum/create table work based on what happens to be easier\nto test. I mean I'm all for testable code but tieing vacuum behaviour\nto what our test framework happens to not interfere with might be a\nbit fragile. Like, if we happen to want to change the testing\nframework I think this demonstrates that it will be super easy for it\nto break the tests again. And if we discover we have to change the\nrelfrozenxid behaviour it might be hard to keep this test working.\n\n\n> Somehow it doesn't feel right to use vac_update_relstats() in\n> heapam_handler.c.\n>\n> I also don't like that your patch references\n> heapam_relation_nontransactional_truncate in AddNewRelationTuple() - we\n> shouldn't add more comments piercing tableam than necessary.\n\nI'll take another look at this tomorrow. Probably I can extract the\ncommon part of that function or I've misunderstood which bits of code\nare above or below the tableam.\n\nI think fundamentally the hardest bit was that the initial\nrelfrozenxid bubbles up from heapam_handler.c via a return value from\nset_new_filelocator. 
So unless I want to add a new tableam method just\nfor relfrozenxid it's a bit awkward to get the right data to\nAddNewRelationTuple and vac_update_relstats without duplicating code\nand crosslinking in comments.\n\n> To me it seems important to test that your change actually does what it\n> intends to. Possibly the test needs to be relaxed some, but I do think we want\n> tests for the change.\n\nI missed the comment about relaxing the tests until just now. I'll\nthink about if there's an easy way out in that direction too.\n\nIf it's cutting it too fine to the end of the commitfest we could\nalways just commit the warnings from the 001 patch which would already\nbe a *huge* help for admins running into this issue.\n\nChag Sameach!\n\n\n--\ngreg\n\n\n",
"msg_date": "Thu, 6 Apr 2023 18:14:48 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Wed, 5 Apr 2023 at 13:42, Andres Freund <andres@anarazel.de> wrote:\n>\n> ISTM that the problem here really is that you're relying on RecentXmin, rather\n> than computing something more accurate. Why not use\n> GetOldestNonRemovableTransactionId(rel) - It's a bit more expensive, but I\n> don't think it'll matter compared to the cost of truncating the relation.\n\nI'm trying to wrap my head around GetOldestNonRemovableTransactionId()\nand whether it's the right thing here. This comment is not helping me:\n\n/*\n * Return the oldest XID for which deleted tuples must be preserved in the\n * passed table.\n *\n * If rel is not NULL the horizon may be considerably more recent than\n * otherwise (i.e. fewer tuples will be removable). In the NULL case a horizon\n * that is correct (but not optimal) for all relations will be returned.\n *\n * This is used by VACUUM to decide which deleted tuples must be preserved in\n * the passed in table.\n */\n\n\nAm I crazy or is the parenthetical comment there exactly backwards? If\nthe horizon is *more recent* then fewer tuples are *non*-removable.\nI.e. *more* tuples are removable, no?\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 12 Apr 2023 16:22:41 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 4:23 PM Greg Stark <stark@mit.edu> wrote:\n> I'm trying to wrap my head around GetOldestNonRemovableTransactionId()\n> and whether it's the right thing here. This comment is not helping me:\n>\n> /*\n> * Return the oldest XID for which deleted tuples must be preserved in the\n> * passed table.\n> *\n> * If rel is not NULL the horizon may be considerably more recent than\n> * otherwise (i.e. fewer tuples will be removable). In the NULL case a horizon\n> * that is correct (but not optimal) for all relations will be returned.\n> *\n> * This is used by VACUUM to decide which deleted tuples must be preserved in\n> * the passed in table.\n> */\n>\n>\n> Am I crazy or is the parenthetical comment there exactly backwards? If\n> the horizon is *more recent* then fewer tuples are *non*-removable.\n> I.e. *more* tuples are removable, no?\n\nIsn't it the non-parenthetical part that's wrong? I would expect that\nif we don't know which relation it is, the horizon might be\nconsiderably LESS recent, which would result in fewer tuples being\nremovable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 Apr 2023 12:44:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 9:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Apr 12, 2023 at 4:23 PM Greg Stark <stark@mit.edu> wrote:\n> > Am I crazy or is the parenthetical comment there exactly backwards? If\n> > the horizon is *more recent* then fewer tuples are *non*-removable.\n> > I.e. *more* tuples are removable, no?\n>\n> Isn't it the non-parenthetical part that's wrong? I would expect that\n> if we don't know which relation it is, the horizon might be\n> considerably LESS recent, which would result in fewer tuples being\n> removable.\n\nYou can make arguments for either way of restating it being clearer\nthan the other.\n\nPersonally I think that the comment should explain what happens when\nyou pass NULL as your relation, rather than explaining what doesn't\nhappen (or does happen?) when you pass a non-NULL relation pointer.\nThat way the just-pass-NULL case can be addressed as the\npossibly-aberrant case -- the possibly-sloppy approach. You're really\nsupposed to pass a non-NULL relation pointer if at all possible.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 13 Apr 2023 10:01:07 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Thu, 13 Apr 2023 at 13:01, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Apr 13, 2023 at 9:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Apr 12, 2023 at 4:23 PM Greg Stark <stark@mit.edu> wrote:\n> > > Am I crazy or is the parenthetical comment there exactly backwards? If\n> > > the horizon is *more recent* then fewer tuples are *non*-removable.\n> > > I.e. *more* tuples are removable, no?\n> >\n> > Isn't it the non-parenthetical part that's wrong? I would expect that\n> > if we don't know which relation it is, the horizon might be\n> > considerably LESS recent, which would result in fewer tuples being\n> > removable.\n>\n> You can make arguments for either way of restating it being clearer\n> than the other.\n\nYeah, I think Robert is being confused by the implicit double\nnegative. If we don't know which relation it is it's because relation\nis NULL and the comment is talking about if it's \"not NULL\". I think\nyou're right that it would be less confusing if it just says \"if you\npass NULL we have to give a conservative result which means an older\nxid and fewer removable tuples\".\n\nBut I'm saying the parenthetical part is not just confusing, it's\noutright wrong. I guess that just means the first half was so\nconfusing it confused not only the reader but the author too.\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 14 Apr 2023 10:05:08 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 7:05 AM Greg Stark <stark@mit.edu> wrote:\n> But I'm saying the parenthetical part is not just confusing, it's\n> outright wrong. I guess that just means the first half was so\n> confusing it confused not only the reader but the author too.\n\nI knew that that was what you meant. I agree that it's outright wrong.\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 14 Apr 2023 08:18:24 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-14 10:05:08 -0400, Greg Stark wrote:\n> On Thu, 13 Apr 2023 at 13:01, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Thu, Apr 13, 2023 at 9:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 12, 2023 at 4:23 PM Greg Stark <stark@mit.edu> wrote:\n> > > > Am I crazy or is the parenthetical comment there exactly backwards? If\n> > > > the horizon is *more recent* then fewer tuples are *non*-removable.\n> > > > I.e. *more* tuples are removable, no?\n> > >\n> > > Isn't it the non-parenthetical part that's wrong? I would expect that\n> > > if we don't know which relation it is, the horizon might be\n> > > considerably LESS recent, which would result in fewer tuples being\n> > > removable.\n\nIf rel is *not NULL*, the horizon is more recent - that seems correct?\n\n\n> > You can make arguments for either way of restating it being clearer\n> > than the other.\n> \n> Yeah, I think Robert is being confused by the implicit double\n> negative. If we don't know which relation it is it's because relation\n> is NULL and the comment is talking about if it's \"not NULL\". I think\n> you're right that it would be less confusing if it just says \"if you\n> pass NULL we have to give a conservative result which means an older\n> xid and fewer removable tuples\".\n> \n> But I'm saying the parenthetical part is not just confusing, it's\n> outright wrong. I guess that just means the first half was so\n> confusing it confused not only the reader but the author too.\n\nI don't think it's outright wrong, but it is very confusing what it relates\nto. For some reason I tried to \"attach\" the parenthetical to the \"otherwise\",\nwhich doesn't make a whole lot of sense. How about:\n\n * If rel is not NULL the horizon may be considerably more recent (i.e.\n * allowing more tuples to be removed) than otherwise. In the NULL case a\n * horizon that is correct (but not optimal) for all relations will be\n * returned. 
Thus, if possible, a relation should be provided.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 14 Apr 2023 10:47:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 10:47 AM Andres Freund <andres@anarazel.de> wrote:\n> I don't think it's outright wrong, but it is very confusing what it relates\n> to. For some reason I tried to \"attach\" the parenthetical to the \"otherwise\",\n> which doesn't make a whole lot of sense. How about:\n\nI suppose that it doesn't matter whether it's outright wrong, or just\nunclear. Either way it should be improved.\n\n> * If rel is not NULL the horizon may be considerably more recent (i.e.\n> * allowing more tuples to be removed) than otherwise. In the NULL case a\n> * horizon that is correct (but not optimal) for all relations will be\n> * returned. Thus, if possible, a relation should be provided.\n\nThat seems much better to me. The most important part is the last sentence.\n\nThe key idea is that you as a caller should provide a rel if at all\npossible (and if not you should feel a pang of guilt). That emphasis\nmakes the potential consequences much more obvious.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 14 Apr 2023 12:37:49 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "On Wed, 5 Apr 2023 at 13:42, Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Somehow it doesn't feel right to use vac_update_relstats() in\n> heapam_handler.c.\n>\n> I also don't like that your patch references\n> heapam_relation_nontransactional_truncate in AddNewRelationTuple() - we\n> shouldn't add more comments piercing tableam than necessary.\n\nI'm really puzzled because this does look like it was in the last\npatch on the mailing list archive. But it's definitely not the code I\nhave here. I guess I did some cleanup that I never posted, so sorry.\n\nI've attached patches using GetOldestNonRemovableTransactinId() and it\nseems to have fixed the race condition here. At least I can't\nreproduce it any more.\n\n\n\n> Not if you determine a relation specific xmin, and the relation is not a\n> shared relation.\n>\n> ISTM that the problem here really is that you're relying on RecentXmin, rather\n> than computing something more accurate. Why not use\n> GetOldestNonRemovableTransactionId(rel) - It's a bit more expensive, but I\n> don't think it'll matter compared to the cost of truncating the relation.\n\nI am a bit nervous about the overhead here because if your transaction\ntouched *any* temporary tables then this gets called for *every*\ntemporary table with ON COMMIT DELETE. That could be a lot and it's\nnot obvious to users that having temporary tables will impose an\noverhead even if they're not actually using them.\n\nSo I went ahead and used GetOldestNonRemovableTransactionId and tried\nto do some profiling. But this is on a cassert enabled build with -O0\nso it's not serious profiling. I can repeat it on a real build if it\nmatters. But it's been a long time since I've read gprof output. 
This\nis for -F PreCommit_on_commit_actions so the percentages are as a\npercent of just the precommit cleanup:\n\nindex % time self children called name\n 0.00 0.00 10102/10102 CommitTransaction (1051)\n[1] 100.0 0.01 31.47 10102 PreCommit_on_commit_actions [1]\n 0.01 31.43 10100/10100 heap_truncate [2]\n 0.00 0.03 1005050/1005260 lappend_oid [325]\n-----------------------------------------------\n 0.01 31.43 10100/10100 PreCommit_on_commit_actions [1]\n[2] 99.9 0.01 31.43 10100 heap_truncate [2]\n 0.09 27.30 1005050/1005050 heap_truncate_one_rel [3]\n 0.20 3.57 1005050/6087120 table_open <cycle 1> [465]\n 0.01 0.22 1005050/6045137 table_close [48]\n 0.00 0.03 1005050/1017744 lappend [322]\n 0.01 0.00 10100/10100 heap_truncate_check_FKs [425]\n-----------------------------------------------\n 0.09 27.30 1005050/1005050 heap_truncate [2]\n[3] 87.0 0.09 27.30 1005050 heap_truncate_one_rel [3]\n 0.02 12.23 1005050/1005050 RelationTruncateIndexes [5]\n 0.06 10.08 1005050/1005050 ResetVacStats [7]\n 0.03 4.89 1005050/1005050\ntable_relation_nontransactional_truncate [12]\n\nI think this is saying that more than half the time is being spent\njust checking for indexes. There were no indexes on these temporary\ntables. 
Does not having any indexes cause the relcache treat it as a\ncache miss every time?\n\n 0.06 10.08 1005050/1005050 heap_truncate_one_rel [3]\n[7] 32.2 0.06 10.08 1005050 ResetVacStats [7]\n 0.02 3.83 1005050/1005250 SearchSysCacheCopy [16]\n 0.20 3.57 1005050/6087120 table_open <cycle 1> [465]\n 0.01 2.02 1005050/1005050 heap_inplace_update [35]\n 0.01 0.22 1005050/6045137 table_close [48]\n 0.00 0.20 1005050/1005150\nGetOldestNonRemovableTransactionId [143]\n 0.00 0.01 1005050/1005150 GetOurOldestMultiXactId [421]\n 0.00 0.00 1005050/1008750 ObjectIdGetDatum [816]\n\nI guess this means GetOldestNonRemovableTransactionId is not the main\ncost in ResetVacStats though I don't understand why the syscache would\nbe so slow.\n\nI think there's a facility for calculating the Horizons and then\nreusing them for a while but I don't see how to use that here. It\nwould be appropriate I think.\n\n\n>\n> > Honestly I'm glad I wrote the test because it was hard to know whether\n> > my code was doing anything at all without it (and it wasn't in the\n> > first cut...) But I don't think there's much value in having it be in\n> > the regression suite. We don't generally write tests to ensure that a\n> > specific internal implementation behaves in the specific way it was\n> > written to.\n>\n> To me it seems important to test that your change actually does what it\n> intends to. Possibly the test needs to be relaxed some, but I do think we want\n> tests for the change.\n>\n> Greetings,\n>\n> Andres Freund\n\n\n\n--\ngreg",
"msg_date": "Mon, 17 Apr 2023 17:40:07 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "Hm, in an optimized build using kernel perf I see this. But I don't\nknow how to find what the call sites are for LWLockAcquire/Release. If\nit's the locks on pgproc that would be kind of bad.\n\nI wonder if I should be gathering horizons once in the\nPrecommitActions and then just using those for every temp table\nsomehow. Perhaps only actually doing an update if the relfrozenxid is\nactually at least vacuum_freeze_table_age old.\n\n 3.98% postgres LWLockAcquire\n 3.51% postgres LWLockRelease\n 3.18% postgres hash_search_with_hash_value\n 2.20% postgres DropRelationLocalBuffers\n 1.80% [kernel] check_preemption_disabled\n 1.52% postgres hash_bytes\n 1.27% postgres LockAcquireExtended\n 0.97% postgres _bt_compare\n 0.95% [kernel] kmem_cache_alloc\n\nI still think we should be applying the vacuum warning messages to\nstable and probably backpatching. I've actually heard from other users\nwho have faced the same surprise wraparound shutdown.\n\n\n",
"msg_date": "Tue, 18 Apr 2023 11:09:44 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Temporary tables versus wraparound... again"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, this patch was marked in CF as \"Needs Review\", but there has been\nno activity on this thread for 9+ months.\n\nSince there seems not much interest, I have changed the status to\n\"Returned with Feedback\" [1]. Feel free to propose a stronger use case\nfor the patch and add an entry for the same.\n\n======\n[1] https://commitfest.postgresql.org/46/3358/\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 12:29:55 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Temporary tables versus wraparound... again"
}
] |
[
{
"msg_contents": "Michael reported a BF failure [1] related to one of the logical\nstreaming test case and I've analyzed the issue. As responded on\npgsql-committers [2], the issue here is that the streaming\ntransactions can be interleaved and because we are maintaining whether\nxact_wrote_changes at the LogicalDecodingContext level, one of later\ntransaction can overwrite the flag for previously streaming\ntransaction. I think it is logical to have this flag at each\ntransaction level (aka in ReorderBufferTxn), however till now it was\nfine because the changes of each transaction are decoded at one-shot\nwhich will be no longer true. We can keep a output_plugin_private data\npointer in ReorderBufferTxn which will be used by test_decoding module\nto keep this and any other such flags in future. We need to set this\nflag at begin_cb and stream_start_cb APIs and then reset/remove it at\nstream_commit_cb, stream_abort_cb and stream_stop_cb APIs.\n\nAdditionally, we can extend the existing test case\nconcurrent_stream.spec to cover this scenario by adding a step to have\nan empty transaction before the commit of transaction which we are\ngoing to stream changes for (before s1_commit).\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/20201109014118.GD1695%40paquier.xyz\n[2] - https://www.postgresql.org/message-id/CAA4eK1JMCm9HURVmOapo%2Bv2u2EEABOuzgp7XJ32C072ygcKktQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 9 Nov 2020 11:01:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Michael reported a BF failure [1] related to one of the logical\n> streaming test case and I've analyzed the issue. As responded on\n> pgsql-committers [2], the issue here is that the streaming\n> transactions can be interleaved and because we are maintaining whether\n> xact_wrote_changes at the LogicalDecodingContext level, one of later\n> transaction can overwrite the flag for previously streaming\n> transaction. I think it is logical to have this flag at each\n> transaction level (aka in ReorderBufferTxn), however till now it was\n> fine because the changes of each transaction are decoded at one-shot\n> which will be no longer true. We can keep a output_plugin_private data\n> pointer in ReorderBufferTxn which will be used by test_decoding module\n> to keep this and any other such flags in future. We need to set this\n> flag at begin_cb and stream_start_cb APIs and then reset/remove it at\n> stream_commit_cb, stream_abort_cb and stream_stop_cb APIs.\n>\n> Additionally, we can extend the existing test case\n> concurrent_stream.spec to cover this scenario by adding a step to have\n> an empty transaction before the commit of transaction which we are\n> going to stream changes for (before s1_commit).\n>\n> Thoughts?\n\nThe analysis seems correct to me, I will work on it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Nov 2020 11:04:51 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 11:04 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 9, 2020 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Michael reported a BF failure [1] related to one of the logical\n> > streaming test case and I've analyzed the issue. As responded on\n> > pgsql-committers [2], the issue here is that the streaming\n> > transactions can be interleaved and because we are maintaining whether\n> > xact_wrote_changes at the LogicalDecodingContext level, one of later\n> > transaction can overwrite the flag for previously streaming\n> > transaction. I think it is logical to have this flag at each\n> > transaction level (aka in ReorderBufferTxn), however till now it was\n> > fine because the changes of each transaction are decoded at one-shot\n> > which will be no longer true. We can keep a output_plugin_private data\n> > pointer in ReorderBufferTxn which will be used by test_decoding module\n> > to keep this and any other such flags in future. We need to set this\n> > flag at begin_cb and stream_start_cb APIs and then reset/remove it at\n> > stream_commit_cb, stream_abort_cb and stream_stop_cb APIs.\n\nSo IIUC, we need to keep 'output_plugin_private' in\nLogicalDecodingContext as well as in ReorderBufferTxn, So the\noutput_plugin_private in the ReorderBufferTxn will currently just keep\none flag xact_wrote_changes and the remaining things will still be\nmaintained in output_plugin_private of the LogicalDecodingContext. Is\nmy understanding correct?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Nov 2020 11:20:49 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 9, 2020 at 11:04 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Nov 9, 2020 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Michael reported a BF failure [1] related to one of the logical\n> > > streaming test case and I've analyzed the issue. As responded on\n> > > pgsql-committers [2], the issue here is that the streaming\n> > > transactions can be interleaved and because we are maintaining whether\n> > > xact_wrote_changes at the LogicalDecodingContext level, one of later\n> > > transaction can overwrite the flag for previously streaming\n> > > transaction. I think it is logical to have this flag at each\n> > > transaction level (aka in ReorderBufferTxn), however till now it was\n> > > fine because the changes of each transaction are decoded at one-shot\n> > > which will be no longer true. We can keep a output_plugin_private data\n> > > pointer in ReorderBufferTxn which will be used by test_decoding module\n> > > to keep this and any other such flags in future. We need to set this\n> > > flag at begin_cb and stream_start_cb APIs and then reset/remove it at\n> > > stream_commit_cb, stream_abort_cb and stream_stop_cb APIs.\n>\n> So IIUC, we need to keep 'output_plugin_private' in\n> LogicalDecodingContext as well as in ReorderBufferTxn, So the\n> output_plugin_private in the ReorderBufferTxn will currently just keep\n> one flag xact_wrote_changes and the remaining things will still be\n> maintained in output_plugin_private of the LogicalDecodingContext. Is\n> my understanding correct?\n>\n\nYes. But keep it as void * so that we can add more things later if required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 9 Nov 2020 11:32:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 11:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 9, 2020 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Nov 9, 2020 at 11:04 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 9, 2020 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > Michael reported a BF failure [1] related to one of the logical\n> > > > streaming test case and I've analyzed the issue. As responded on\n> > > > pgsql-committers [2], the issue here is that the streaming\n> > > > transactions can be interleaved and because we are maintaining whether\n> > > > xact_wrote_changes at the LogicalDecodingContext level, one of later\n> > > > transaction can overwrite the flag for previously streaming\n> > > > transaction. I think it is logical to have this flag at each\n> > > > transaction level (aka in ReorderBufferTxn), however till now it was\n> > > > fine because the changes of each transaction are decoded at one-shot\n> > > > which will be no longer true. We can keep a output_plugin_private data\n> > > > pointer in ReorderBufferTxn which will be used by test_decoding module\n> > > > to keep this and any other such flags in future. We need to set this\n> > > > flag at begin_cb and stream_start_cb APIs and then reset/remove it at\n> > > > stream_commit_cb, stream_abort_cb and stream_stop_cb APIs.\n> >\n> > So IIUC, we need to keep 'output_plugin_private' in\n> > LogicalDecodingContext as well as in ReorderBufferTxn, So the\n> > output_plugin_private in the ReorderBufferTxn will currently just keep\n> > one flag xact_wrote_changes and the remaining things will still be\n> > maintained in output_plugin_private of the LogicalDecodingContext. Is\n> > my understanding correct?\n> >\n>\n> Yes. But keep it as void * so that we can add more things later if required.\n\nYeah, that makes sense to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Nov 2020 13:34:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 1:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 9, 2020 at 11:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Nov 9, 2020 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 9, 2020 at 11:04 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 9, 2020 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > Michael reported a BF failure [1] related to one of the logical\n> > > > > streaming test case and I've analyzed the issue. As responded on\n> > > > > pgsql-committers [2], the issue here is that the streaming\n> > > > > transactions can be interleaved and because we are maintaining whether\n> > > > > xact_wrote_changes at the LogicalDecodingContext level, one of later\n> > > > > transaction can overwrite the flag for previously streaming\n> > > > > transaction. I think it is logical to have this flag at each\n> > > > > transaction level (aka in ReorderBufferTxn), however till now it was\n> > > > > fine because the changes of each transaction are decoded at one-shot\n> > > > > which will be no longer true. We can keep a output_plugin_private data\n> > > > > pointer in ReorderBufferTxn which will be used by test_decoding module\n> > > > > to keep this and any other such flags in future. We need to set this\n> > > > > flag at begin_cb and stream_start_cb APIs and then reset/remove it at\n> > > > > stream_commit_cb, stream_abort_cb and stream_stop_cb APIs.\n> > >\n> > > So IIUC, we need to keep 'output_plugin_private' in\n> > > LogicalDecodingContext as well as in ReorderBufferTxn, So the\n> > > output_plugin_private in the ReorderBufferTxn will currently just keep\n> > > one flag xact_wrote_changes and the remaining things will still be\n> > > maintained in output_plugin_private of the LogicalDecodingContext. Is\n> > > my understanding correct?\n> > >\n> >\n> > Yes. 
But keep it as void * so that we can add more things later if required.\n>\n> Yeah, that makes sense to me.\n\nI have made some POC changes and analyzed this further, I think that\nfor the streaming transaction we need 2 flags\n1) xact_wrote_changes 2) stream_wrote_changes\n\nSo basically, if the stream didn't make any changes we can skip the\nstream start and stream stop message for the empty stream, but if any\nof the streams has made any change then we need to emit the\ntransaction commit message. But if we want to avoid tracking the\nchanges per stream then maybe once we set the xact_wrote_changes to\ntrue once for the txn then we better emit the message for all the\nstream without tracking whether the stream is empty or not. What is\nyour thought on this?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Nov 2020 15:01:10 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 3:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 9, 2020 at 1:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Nov 9, 2020 at 11:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 9, 2020 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 9, 2020 at 11:04 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Nov 9, 2020 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > Michael reported a BF failure [1] related to one of the logical\n> > > > > > streaming test case and I've analyzed the issue. As responded on\n> > > > > > pgsql-committers [2], the issue here is that the streaming\n> > > > > > transactions can be interleaved and because we are maintaining whether\n> > > > > > xact_wrote_changes at the LogicalDecodingContext level, one of later\n> > > > > > transaction can overwrite the flag for previously streaming\n> > > > > > transaction. I think it is logical to have this flag at each\n> > > > > > transaction level (aka in ReorderBufferTxn), however till now it was\n> > > > > > fine because the changes of each transaction are decoded at one-shot\n> > > > > > which will be no longer true. We can keep a output_plugin_private data\n> > > > > > pointer in ReorderBufferTxn which will be used by test_decoding module\n> > > > > > to keep this and any other such flags in future. We need to set this\n> > > > > > flag at begin_cb and stream_start_cb APIs and then reset/remove it at\n> > > > > > stream_commit_cb, stream_abort_cb and stream_stop_cb APIs.\n> > > >\n> > > > So IIUC, we need to keep 'output_plugin_private' in\n> > > > LogicalDecodingContext as well as in ReorderBufferTxn, So the\n> > > > output_plugin_private in the ReorderBufferTxn will currently just keep\n> > > > one flag xact_wrote_changes and the remaining things will still be\n> > > > maintained in output_plugin_private of the LogicalDecodingContext. Is\n> > > > my understanding correct?\n> > > >\n> > >\n> > > Yes. But keep it as void * so that we can add more things later if required.\n> >\n> > Yeah, that makes sense to me.\n>\n> I have made some POC changes and analyzed this further, I think that\n> for the streaming transaction we need 2 flags\n> 1) xact_wrote_changes 2) stream_wrote_changes\n>\n> So basically, if the stream didn't make any changes we can skip the\n> stream start and stream stop message for the empty stream, but if any\n> of the streams has made any change then we need to emit the\n> transaction commit message. But if we want to avoid tracking the\n> changes per stream then maybe once we set the xact_wrote_changes to\n> true once for the txn then we better emit the message for all the\n> stream without tracking whether the stream is empty or not. What is\n> your thought on this?\n>\n\nI would prefer to have two separate flags to control this behavior\nbecause without that it is quite possible that in some of the cases we\ndisplay empty stream start/stop messages even when that is not\nintended. The bigger question is do we want to give users an option\nfor skip_empty_streams similar to skip_empty_xacts? I would again\nprefer to give a separate option to the user as well. What do you\nthink?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 9 Nov 2020 16:22:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 9, 2020 at 3:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Nov 9, 2020 at 1:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 9, 2020 at 11:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 9, 2020 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Nov 9, 2020 at 11:04 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, Nov 9, 2020 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > > Michael reported a BF failure [1] related to one of the logical\n> > > > > > > streaming test case and I've analyzed the issue. As responded on\n> > > > > > > pgsql-committers [2], the issue here is that the streaming\n> > > > > > > transactions can be interleaved and because we are maintaining whether\n> > > > > > > xact_wrote_changes at the LogicalDecodingContext level, one of later\n> > > > > > > transaction can overwrite the flag for previously streaming\n> > > > > > > transaction. I think it is logical to have this flag at each\n> > > > > > > transaction level (aka in ReorderBufferTxn), however till now it was\n> > > > > > > fine because the changes of each transaction are decoded at one-shot\n> > > > > > > which will be no longer true. We can keep a output_plugin_private data\n> > > > > > > pointer in ReorderBufferTxn which will be used by test_decoding module\n> > > > > > > to keep this and any other such flags in future. We need to set this\n> > > > > > > flag at begin_cb and stream_start_cb APIs and then reset/remove it at\n> > > > > > > stream_commit_cb, stream_abort_cb and stream_stop_cb APIs.\n> > > > >\n> > > > > So IIUC, we need to keep 'output_plugin_private' in\n> > > > > LogicalDecodingContext as well as in ReorderBufferTxn, So the\n> > > > > output_plugin_private in the ReorderBufferTxn will currently just keep\n> > > > > one flag xact_wrote_changes and the remaining things will still be\n> > > > > maintained in output_plugin_private of the LogicalDecodingContext. Is\n> > > > > my understanding correct?\n> > > > >\n> > > >\n> > > > Yes. But keep it as void * so that we can add more things later if required.\n> > >\n> > > Yeah, that makes sense to me.\n> >\n> > I have made some POC changes and analyzed this further, I think that\n> > for the streaming transaction we need 2 flags\n> > 1) xact_wrote_changes 2) stream_wrote_changes\n> >\n> > So basically, if the stream didn't make any changes we can skip the\n> > stream start and stream stop message for the empty stream, but if any\n> > of the streams has made any change then we need to emit the\n> > transaction commit message. But if we want to avoid tracking the\n> > changes per stream then maybe once we set the xact_wrote_changes to\n> > true once for the txn then we better emit the message for all the\n> > stream without tracking whether the stream is empty or not. What is\n> > your thought on this?\n> >\n>\n> I would prefer to have two separate flags to control this behavior\n> because without that it is quite possible that in some of the cases we\n> display empty stream start/stop messages even when that is not\n> intended.\n\n+1\n\n The bigger question is do we want to give users an option\n> for skip_empty_streams similar to skip_empty_xacts? I would again\n> prefer to give a separate option to the user as well. What do you\n> think?\n\nYeah, I think giving an option would be better.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Nov 2020 17:37:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 5:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 9, 2020 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Nov 9, 2020 at 3:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 9, 2020 at 1:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 9, 2020 at 11:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Nov 9, 2020 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, Nov 9, 2020 at 11:04 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Mon, Nov 9, 2020 at 11:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > Michael reported a BF failure [1] related to one of the logical\n> > > > > > > > streaming test case and I've analyzed the issue. As responded on\n> > > > > > > > pgsql-committers [2], the issue here is that the streaming\n> > > > > > > > transactions can be interleaved and because we are maintaining whether\n> > > > > > > > xact_wrote_changes at the LogicalDecodingContext level, one of later\n> > > > > > > > transaction can overwrite the flag for previously streaming\n> > > > > > > > transaction. I think it is logical to have this flag at each\n> > > > > > > > transaction level (aka in ReorderBufferTxn), however till now it was\n> > > > > > > > fine because the changes of each transaction are decoded at one-shot\n> > > > > > > > which will be no longer true. We can keep a output_plugin_private data\n> > > > > > > > pointer in ReorderBufferTxn which will be used by test_decoding module\n> > > > > > > > to keep this and any other such flags in future. We need to set this\n> > > > > > > > flag at begin_cb and stream_start_cb APIs and then reset/remove it at\n> > > > > > > > stream_commit_cb, stream_abort_cb and stream_stop_cb APIs.\n> > > > > >\n> > > > > > So IIUC, we need to keep 'output_plugin_private' in\n> > > > > > LogicalDecodingContext as well as in ReorderBufferTxn, So the\n> > > > > > output_plugin_private in the ReorderBufferTxn will currently just keep\n> > > > > > one flag xact_wrote_changes and the remaining things will still be\n> > > > > > maintained in output_plugin_private of the LogicalDecodingContext. Is\n> > > > > > my understanding correct?\n> > > > > >\n> > > > >\n> > > > > Yes. But keep it as void * so that we can add more things later if required.\n> > > >\n> > > > Yeah, that makes sense to me.\n> > >\n> > > I have made some POC changes and analyzed this further, I think that\n> > > for the streaming transaction we need 2 flags\n> > > 1) xact_wrote_changes 2) stream_wrote_changes\n> > >\n> > > So basically, if the stream didn't make any changes we can skip the\n> > > stream start and stream stop message for the empty stream, but if any\n> > > of the streams has made any change then we need to emit the\n> > > transaction commit message. But if we want to avoid tracking the\n> > > changes per stream then maybe once we set the xact_wrote_changes to\n> > > true once for the txn then we better emit the message for all the\n> > > stream without tracking whether the stream is empty or not. What is\n> > > your thought on this?\n> > >\n> >\n> > I would prefer to have two separate flags to control this behavior\n> > because without that it is quite possible that in some of the cases we\n> > display empty stream start/stop messages even when that is not\n> > intended.\n>\n> +1\n>\n> The bigger question is do we want to give users an option\n> > for skip_empty_streams similar to skip_empty_xacts? I would again\n> > prefer to give a separate option to the user as well. What do you\n> > think?\n>\n> Yeah, I think giving an option would be better.\n\nI think we should also think about the combinations of the\nskip_empty_xacts and skip_empty_streams. For example, if the user\npasses the skip_empty_xacts to false and skip_empty_streams to true\nthen what should be the behavior, if the complete transaction\noption1: It should not print any stream_start/stream_stop and just\nprint commit stream because skip_empty_xacts is false and\nskip_empty_streams is true.\noption2: It should print the stream_start message for the very first\nstream because it is the first stream if the txn and skip_empty_xacts\nis false so print it and later it will print-stream commit.\noption3: Or for the first stream we first put the BEGIN message i.e\nstream begin\nstream start\nstream stop\nstream commit\noption4: the user should not be allowed to pass skip_empty_xacts =\nfalse with skip_empty_streams to true. Because if the streaming mode\nis on then we can not print the xact without printing streams.\n\nWhat is your opinion on this?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Nov 2020 18:00:15 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 6:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 9, 2020 at 5:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Nov 9, 2020 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > The bigger question is do we want to give users an option\n> > > for skip_empty_streams similar to skip_empty_xacts? I would again\n> > > prefer to give a separate option to the user as well. What do you\n> > > think?\n> >\n> > Yeah, I think giving an option would be better.\n>\n> I think we should also think about the combinations of the\n> skip_empty_xacts and skip_empty_streams. For example, if the user\n> passes the skip_empty_xacts to false and skip_empty_streams to true\n> then what should be the behavior, if the complete transaction\n> option1: It should not print any stream_start/stream_stop and just\n> print commit stream because skip_empty_xacts is false and\n> skip_empty_streams is true.\n> option2: It should print the stream_start message for the very first\n> stream because it is the first stream if the txn and skip_empty_xacts\n> is false so print it and later it will print-stream commit.\n> option3: Or for the first stream we first put the BEGIN message i.e\n> stream begin\n> stream start\n> stream stop\n> stream commit\n> option4: the user should not be allowed to pass skip_empty_xacts =\n> false with skip_empty_streams to true. Because if the streaming mode\n> is on then we can not print the xact without printing streams.\n>\n> What is your opinion on this?\n>\n\nI would prefer option-4 and in addition to that we can ensure that if\nskip_empty_xacts = true then by default skip_empty_streams is also\ntrue.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Nov 2020 08:15:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 8:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 9, 2020 at 6:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Nov 9, 2020 at 5:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 9, 2020 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > The bigger question is do we want to give users an option\n> > > > for skip_empty_streams similar to skip_empty_xacts? I would again\n> > > > prefer to give a separate option to the user as well. What do you\n> > > > think?\n> > >\n> > > Yeah, I think giving an option would be better.\n> >\n> > I think we should also think about the combinations of the\n> > skip_empty_xacts and skip_empty_streams. For example, if the user\n> > passes the skip_empty_xacts to false and skip_empty_streams to true\n> > then what should be the behavior, if the complete transaction\n> > option1: It should not print any stream_start/stream_stop and just\n> > print commit stream because skip_empty_xacts is false and\n> > skip_empty_streams is true.\n> > option2: It should print the stream_start message for the very first\n> > stream because it is the first stream if the txn and skip_empty_xacts\n> > is false so print it and later it will print-stream commit.\n> > option3: Or for the first stream we first put the BEGIN message i.e\n> > stream begin\n> > stream start\n> > stream stop\n> > stream commit\n> > option4: the user should not be allowed to pass skip_empty_xacts =\n> > false with skip_empty_streams to true. Because if the streaming mode\n> > is on then we can not print the xact without printing streams.\n> >\n> > What is your opinion on this?\n> >\n>\n> I would prefer option-4 and in addition to that we can ensure that if\n> skip_empty_xacts = true then by default skip_empty_streams is also\n> true.\n\nBut then it will behave as a single option only, right? because if\n1. skip_empty_xacts = true, then we set skip_empty_streams = true\n2. skip_empty_xacts = false then skip_empty_streams can not be set to true\n\nSo as per the state machine either both will be true or both will be\nfalse, Am I missing something?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Nov 2020 10:26:11 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 10:26 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 8:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Nov 9, 2020 at 6:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 9, 2020 at 5:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 9, 2020 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > The bigger question is do we want to give users an option\n> > > > > for skip_empty_streams similar to skip_empty_xacts? I would again\n> > > > > prefer to give a separate option to the user as well. What do you\n> > > > > think?\n> > > >\n> > > > Yeah, I think giving an option would be better.\n> > >\n> > > I think we should also think about the combinations of the\n> > > skip_empty_xacts and skip_empty_streams. For example, if the user\n> > > passes the skip_empty_xacts to false and skip_empty_streams to true\n> > > then what should be the behavior, if the complete transaction\n> > > option1: It should not print any stream_start/stream_stop and just\n> > > print commit stream because skip_empty_xacts is false and\n> > > skip_empty_streams is true.\n> > > option2: It should print the stream_start message for the very first\n> > > stream because it is the first stream if the txn and skip_empty_xacts\n> > > is false so print it and later it will print-stream commit.\n> > > option3: Or for the first stream we first put the BEGIN message i.e\n> > > stream begin\n> > > stream start\n> > > stream stop\n> > > stream commit\n> > > option4: the user should not be allowed to pass skip_empty_xacts =\n> > > false with skip_empty_streams to true. Because if the streaming mode\n> > > is on then we can not print the xact without printing streams.\n> > >\n> > > What is your opinion on this?\n> > >\n> >\n> > I would prefer option-4 and in addition to that we can ensure that if\n> > skip_empty_xacts = true then by default skip_empty_streams is also\n> > true.\n>\n> But then it will behave as a single option only, right? because if\n> 1. skip_empty_xacts = true, then we set skip_empty_streams = true\n>\n\nFor this case, users can use skip_empty_xacts = true and\nskip_empty_streams = false. I am just asking if the user has only used\nskip_empty_xacts = true and didn't use the 'skip_empty_streams'\noption.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Nov 2020 10:53:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 10:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 10:26 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 8:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 9, 2020 at 6:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 9, 2020 at 5:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Nov 9, 2020 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > The bigger question is do we want to give users an option\n> > > > > > for skip_empty_streams similar to skip_empty_xacts? I would again\n> > > > > > prefer to give a separate option to the user as well. What do you\n> > > > > > think?\n> > > > >\n> > > > > Yeah, I think giving an option would be better.\n> > > >\n> > > > I think we should also think about the combinations of the\n> > > > skip_empty_xacts and skip_empty_streams. For example, if the user\n> > > > passes the skip_empty_xacts to false and skip_empty_streams to true\n> > > > then what should be the behavior, if the complete transaction\n> > > > option1: It should not print any stream_start/stream_stop and just\n> > > > print commit stream because skip_empty_xacts is false and\n> > > > skip_empty_streams is true.\n> > > > option2: It should print the stream_start message for the very first\n> > > > stream because it is the first stream if the txn and skip_empty_xacts\n> > > > is false so print it and later it will print-stream commit.\n> > > > option3: Or for the first stream we first put the BEGIN message i.e\n> > > > stream begin\n> > > > stream start\n> > > > stream stop\n> > > > stream commit\n> > > > option4: the user should not be allowed to pass skip_empty_xacts =\n> > > > false with skip_empty_streams to true. Because if the streaming mode\n> > > > is on then we can not print the xact without printing streams.\n> > > >\n> > > > What is your opinion on this?\n> > > >\n> > >\n> > > I would prefer option-4 and in addition to that we can ensure that if\n> > > skip_empty_xacts = true then by default skip_empty_streams is also\n> > > true.\n> >\n> > But then it will behave as a single option only, right? because if\n> > 1. skip_empty_xacts = true, then we set skip_empty_streams = true\n> >\n>\n> For this case, users can use skip_empty_xacts = true and\n> skip_empty_streams = false. I am just asking if the user has only used\n> skip_empty_xacts = true and didn't use the 'skip_empty_streams'\n> option.\n\nOk, thanks for the clarification.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Nov 2020 11:18:33 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 11:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 10:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 10:26 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 10, 2020 at 8:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 9, 2020 at 6:00 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Nov 9, 2020 at 5:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, Nov 9, 2020 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > The bigger question is do we want to give users an option\n> > > > > > > for skip_empty_streams similar to skip_empty_xacts? I would again\n> > > > > > > prefer to give a separate option to the user as well. What do you\n> > > > > > > think?\n> > > > > >\n> > > > > > Yeah, I think giving an option would be better.\n> > > > >\n> > > > > I think we should also think about the combinations of the\n> > > > > skip_empty_xacts and skip_empty_streams. For example, if the user\n> > > > > passes the skip_empty_xacts to false and skip_empty_streams to true\n> > > > > then what should be the behavior, if the complete transaction\n> > > > > option1: It should not print any stream_start/stream_stop and just\n> > > > > print commit stream because skip_empty_xacts is false and\n> > > > > skip_empty_streams is true.\n> > > > > option2: It should print the stream_start message for the very first\n> > > > > stream because it is the first stream if the txn and skip_empty_xacts\n> > > > > is false so print it and later it will print-stream commit.\n> > > > > option3: Or for the first stream we first put the BEGIN message i.e\n> > > > > stream begin\n> > > > > stream start\n> > > > > stream stop\n> > > > > stream commit\n> > > > > option4: the user should not be allowed to pass skip_empty_xacts =\n> > > > > false with skip_empty_streams to true. Because if the streaming mode\n> > > > > is on then we can not print the xact without printing streams.\n> > > > >\n> > > > > What is your opinion on this?\n> > > > >\n> > > >\n> > > > I would prefer option-4 and in addition to that we can ensure that if\n> > > > skip_empty_xacts = true then by default skip_empty_streams is also\n> > > > true.\n> > >\n> > > But then it will behave as a single option only, right? because if\n> > > 1. skip_empty_xacts = true, then we set skip_empty_streams = true\n> > >\n> >\n> > For this case, users can use skip_empty_xacts = true and\n> > skip_empty_streams = false. I am just asking if the user has only used\n> > skip_empty_xacts = true and didn't use the 'skip_empty_streams'\n> > option.\n>\n> Ok, thanks for the clarification.\n>\n\nI have prepared a patch for the same.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 10 Nov 2020 14:24:56 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 11:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 10:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > For this case, users can use skip_empty_xacts = true and\n> > > skip_empty_streams = false. I am just asking if the user has only used\n> > > skip_empty_xacts = true and didn't use the 'skip_empty_streams'\n> > > option.\n> >\n> > Ok, thanks for the clarification.\n> >\n>\n> I have prepared a patch for the same.\n>\n\nFew comments:\n1.\n+ else if (strcmp(elem->defname, \"skip-empty-streams\") == 0)\n+ {\n+ if (elem->arg == NULL)\n+ data->skip_empty_streams = true;\n+ else if (!parse_bool(strVal(elem->arg), &data->skip_empty_streams))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"could not parse value \\\"%s\\\" for parameter \\\"%s\\\"\",\n+ strVal(elem->arg), elem->defname)));\n+ if (!data->skip_empty_xacts && data->skip_empty_streams)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"the skip-empty-streams can not be true if skip-empty-xacts\nis false\")));\n }\n\nYou can probably add a comment as to why we are disallowing this case.\nI thought of considering 'stream-changes' parameter here because it\nwon't make sense to give this parameter without it, however, it seems\nthat is not necessary but maybe adding a comment\nhere in that regard would be a good idea.\n\n2.\npg_decode_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n {\n TestDecodingData *data = ctx->output_plugin_private;\n+ TestDecodingTxnData *txndata =\n+ MemoryContextAllocZero(ctx->context, sizeof(TestDecodingTxnData));\n+\n\nShall we free this memory at commit time for the sake of consistency,\notherwise also it would be freed with decoding context?\n\n3. Can you please prepare a separate patch for test case changes so\nthat it would be easier to verify that it fails without the patch and\npassed after the patch?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Nov 2020 19:20:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 11:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 10, 2020 at 10:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > For this case, users can use skip_empty_xacts = true and\n> > > > skip_empty_streams = false. I am just asking if the user has only used\n> > > > skip_empty_xacts = true and didn't use the 'skip_empty_streams'\n> > > > option.\n> > >\n> > > Ok, thanks for the clarification.\n> > >\n> >\n> > I have prepared a patch for the same.\n> >\n>\n> Few comments:\n> 1.\n> + else if (strcmp(elem->defname, \"skip-empty-streams\") == 0)\n> + {\n> + if (elem->arg == NULL)\n> + data->skip_empty_streams = true;\n> + else if (!parse_bool(strVal(elem->arg), &data->skip_empty_streams))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"could not parse value \\\"%s\\\" for parameter \\\"%s\\\"\",\n> + strVal(elem->arg), elem->defname)));\n> + if (!data->skip_empty_xacts && data->skip_empty_streams)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"the skip-empty-streams can not be true if skip-empty-xacts\n> is false\")));\n> }\n>\n> You can probably add a comment as to why we are disallowing this case.\n> I thought of considering 'stream-changes' parameter here because it\n> won't make sense to give this parameter without it, however, it seems\n> that is not necessary but maybe adding a comment\n> here in that regard would be a good idea.\n\nShould we also consider the case that if the user just passed\nskip_empty_streams to true then we should automatically set\nskip_empty_xacts to true?\nAnd we will allow the 'skip_empty_streams' parameter only if\nstream-changes' is true.\n\n> 2.\n> pg_decode_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> {\n> TestDecodingData *data = ctx->output_plugin_private;\n> + TestDecodingTxnData *txndata =\n> + MemoryContextAllocZero(ctx->context, sizeof(TestDecodingTxnData));\n> +\n>\n> Shall we free this memory at commit time for the sake of consistency,\n> otherwise also it would be freed with decoding context?\n\nYeah, actually I have freed in the stream commit but missed here. I\nwill do that.\n\n> 3. Can you please prepare a separate patch for test case changes so\n> that it would be easier to verify that it fails without the patch and\n> passed after the patch?\n\nok\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Nov 2020 09:59:51 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 10:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> >\n> > You can probably add a comment as to why we are disallowing this case.\n> > I thought of considering 'stream-changes' parameter here because it\n> > won't make sense to give this parameter without it, however, it seems\n> > that is not necessary but maybe adding a comment\n> > here in that regard would be a good idea.\n>\n> Should we also consider the case that if the user just passed\n> skip_empty_streams to true then we should automatically set\n> skip_empty_xacts to true?\n>\n\nIs there any problem if we don't do this? Actually, adding\ndependencies on parameters is confusing so I want to avoid that unless\nit is really required.\n\n> And we will allow the 'skip_empty_streams' parameter only if\n> stream-changes' is true.\n>\n\nCan't we simply ignore 'skip_empty_streams' if 'stream-changes' is not given?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 Nov 2020 19:00:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 11, 2020 at 10:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > >\n> > > You can probably add a comment as to why we are disallowing this case.\n> > > I thought of considering 'stream-changes' parameter here because it\n> > > won't make sense to give this parameter without it, however, it seems\n> > > that is not necessary but maybe adding a comment\n> > > here in that regard would be a good idea.\n> >\n> > Should we also consider the case that if the user just passed\n> > skip_empty_streams to true then we should automatically set\n> > skip_empty_xacts to true?\n> >\n>\n> Is there any problem if we don't do this? Actually, adding\n> dependencies on parameters is confusing so I want to avoid that unless\n> it is really required.\n>\n> > And we will allow the 'skip_empty_streams' parameter only if\n> > stream-changes' is true.\n\nThe reason behind this thought is that if the user doesn't pass any\nvalue for 'skip_empty_xacts' then the default value will be false and\nif the user only pass 'skip_empty_streams' to true then we will error\nout assuming that skip_empty_xacts is false but skip_empty_streams is\ntrue. So it seems instead of error out we can assume that\nskip_empty_streams true mean skip_empty_xacts is also true if nothing\nis passed for that.\n\n> Can't we simply ignore 'skip_empty_streams' if 'stream-changes' is not given?\n\nYeah, we can do that.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Nov 2020 19:04:59 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 7:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Nov 11, 2020 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Nov 11, 2020 at 10:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > You can probably add a comment as to why we are disallowing this case.\n> > > > I thought of considering 'stream-changes' parameter here because it\n> > > > won't make sense to give this parameter without it, however, it seems\n> > > > that is not necessary but maybe adding a comment\n> > > > here in that regard would be a good idea.\n> > >\n> > > Should we also consider the case that if the user just passed\n> > > skip_empty_streams to true then we should automatically set\n> > > skip_empty_xacts to true?\n> > >\n> >\n> > Is there any problem if we don't do this? Actually, adding\n> > dependencies on parameters is confusing so I want to avoid that unless\n> > it is really required.\n> >\n> > > And we will allow the 'skip_empty_streams' parameter only if\n> > > stream-changes' is true.\n>\n> The reason behind this thought is that if the user doesn't pass any\n> value for 'skip_empty_xacts' then the default value will be false and\n> if the user only pass 'skip_empty_streams' to true then we will error\n> out assuming that skip_empty_xacts is false but skip_empty_streams is\n> true. So it seems instead of error out we can assume that\n> skip_empty_streams true mean skip_empty_xacts is also true if nothing\n> is passed for that.\n>\n\nSo, let's see the overall picture here. 
We can have four options:\nskip_empty_xacts = true, skip_empty_stream = false;\nskip_empty_xacts = true, skip_empty_stream = true;\nskip_empty_xacts = false, skip_empty_stream = false;\nskip_empty_xacts = false, skip_empty_stream = true;\n\nI think we want to say the first three could be supported and for the\nlast one either we can either give an error or make its behavior\nsimilar to option-2? Is this what your understanding as well?\n\nAnother thing I am thinking let's just not expose skip_empty_stream to\nthe user and consider the behavior based on the value of\nskip_empty_xacts. Internally, in the code, we can still have different\nvariables to distinguish between empty_xacts and empty_streams.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 Nov 2020 08:46:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 8:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 11, 2020 at 7:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Nov 11, 2020 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Nov 11, 2020 at 10:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > >\n> > > > > You can probably add a comment as to why we are disallowing this case.\n> > > > > I thought of considering 'stream-changes' parameter here because it\n> > > > > won't make sense to give this parameter without it, however, it seems\n> > > > > that is not necessary but maybe adding a comment\n> > > > > here in that regard would be a good idea.\n> > > >\n> > > > Should we also consider the case that if the user just passed\n> > > > skip_empty_streams to true then we should automatically set\n> > > > skip_empty_xacts to true?\n> > > >\n> > >\n> > > Is there any problem if we don't do this? Actually, adding\n> > > dependencies on parameters is confusing so I want to avoid that unless\n> > > it is really required.\n> > >\n> > > > And we will allow the 'skip_empty_streams' parameter only if\n> > > > stream-changes' is true.\n> >\n> > The reason behind this thought is that if the user doesn't pass any\n> > value for 'skip_empty_xacts' then the default value will be false and\n> > if the user only pass 'skip_empty_streams' to true then we will error\n> > out assuming that skip_empty_xacts is false but skip_empty_streams is\n> > true. So it seems instead of error out we can assume that\n> > skip_empty_streams true mean skip_empty_xacts is also true if nothing\n> > is passed for that.\n> >\n>\n> So, let's see the overall picture here. 
We can have four options:\n> skip_empty_xacts = true, skip_empty_stream = false;\n> skip_empty_xacts = true, skip_empty_stream = true;\n> skip_empty_xacts = false, skip_empty_stream = false;\n> skip_empty_xacts = false, skip_empty_stream = true;\n>\n> I think we want to say the first three could be supported and for the\n> last one either we can either give an error or make its behavior\n> similar to option-2? Is this what your understanding as well?\n\nFor the last one if the user has specifically passed false for the\nskip_empty_xacts then error and if the user did not pass anything for\nskip_empty_xacts then make its behavior similar to option-2.\n\n> Another thing I am thinking let's just not expose skip_empty_stream to\n> the user and consider the behavior based on the value of\n> skip_empty_xacts. Internally, in the code, we can still have different\n> variables to distinguish between empty_xacts and empty_streams.\n\nYeah, even I think in most of the cases it makes more sense to have\nskip_empty_xacts and skip_empty_stream similar values. So better we\ndon't expose skip_empty_stream. I agree that we need to keep two\nvariables to track the empty stream and empty xacts.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Nov 2020 11:29:39 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 11:29 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Nov 12, 2020 at 8:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > Another thing I am thinking let's just not expose skip_empty_stream to\n> > the user and consider the behavior based on the value of\n> > skip_empty_xacts. Internally, in the code, we can still have different\n> > variables to distinguish between empty_xacts and empty_streams.\n>\n> Yeah, even I think in most of the cases it makes more sense to have\n> skip_empty_xacts and skip_empty_stream similar values. So better we\n> don't expose skip_empty_stream. I agree that we need to keep two\n> variables to track the empty stream and empty xacts.\n>\n\nSo, let's try to do this way and if we see any problems then we can re-think.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 Nov 2020 11:38:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Thu, 12 Nov 2020 at 11:37 AM, Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Thu, Nov 12, 2020 at 11:29 AM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> >\n> > On Thu, Nov 12, 2020 at 8:45 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > >\n> >\n> > > Another thing I am thinking let's just not expose skip_empty_stream to\n> > > the user and consider the behavior based on the value of\n> > > skip_empty_xacts. Internally, in the code, we can still have different\n> > > variables to distinguish between empty_xacts and empty_streams.\n> >\n> > Yeah, even I think in most of the cases it makes more sense to have\n> > skip_empty_xacts and skip_empty_stream similar values. So better we\n> > don't expose skip_empty_stream. I agree that we need to keep two\n> > variables to track the empty stream and empty xacts.\n> >\n>\n> So, let's try to do this way and if we see any problems then we can\n> re-think.\n\n\nSounds good to me, I will send the updated patch.\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, 12 Nov 2020 at 11:37 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:On Thu, Nov 12, 2020 at 11:29 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Nov 12, 2020 at 8:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > Another thing I am thinking let's just not expose skip_empty_stream to\n> > the user and consider the behavior based on the value of\n> > skip_empty_xacts. Internally, in the code, we can still have different\n> > variables to distinguish between empty_xacts and empty_streams.\n>\n> Yeah, even I think in most of the cases it makes more sense to have\n> skip_empty_xacts and skip_empty_stream similar values. So better we\n> don't expose skip_empty_stream. 
I agree that we need to keep two\n> variables to track the empty stream and empty xacts.\n>\n\nSo, let's try to do this way and if we see any problems then we can re-think.Sounds good to me, I will send the updated patch.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 12 Nov 2020 11:44:05 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 11:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 10, 2020 at 10:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > For this case, users can use skip_empty_xacts = true and\n> > > > skip_empty_streams = false. I am just asking if the user has only used\n> > > > skip_empty_xacts = true and didn't use the 'skip_empty_streams'\n> > > > option.\n> > >\n> > > Ok, thanks for the clarification.\n> > >\n> >\n> > I have prepared a patch for the same.\n> >\n>\n> Few comments:\n> 1.\n> + else if (strcmp(elem->defname, \"skip-empty-streams\") == 0)\n> + {\n> + if (elem->arg == NULL)\n> + data->skip_empty_streams = true;\n> + else if (!parse_bool(strVal(elem->arg), &data->skip_empty_streams))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"could not parse value \\\"%s\\\" for parameter \\\"%s\\\"\",\n> + strVal(elem->arg), elem->defname)));\n> + if (!data->skip_empty_xacts && data->skip_empty_streams)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"the skip-empty-streams can not be true if skip-empty-xacts\n> is false\")));\n> }\n>\n> You can probably add a comment as to why we are disallowing this case.\n> I thought of considering 'stream-changes' parameter here because it\n> won't make sense to give this parameter without it, however, it seems\n> that is not necessary but maybe adding a comment\n> here in that regard would be a good idea.\n\nAs per our latest discussion, I have removed the extra input parameter\nso this comment is not needed now.\n\n> 2.\n> pg_decode_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> {\n> TestDecodingData *data = ctx->output_plugin_private;\n> + TestDecodingTxnData *txndata =\n> + MemoryContextAllocZero(ctx->context, 
sizeof(TestDecodingTxnData));\n> +\n>\n> Shall we free this memory at commit time for the sake of consistency,\n> otherwise also it would be freed with decoding context?\n\nDone\n\n> 3. Can you please prepare a separate patch for test case changes so\n> that it would be easier to verify that it fails without the patch and\n> passed after the patch?\n\nDone\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 12 Nov 2020 15:10:23 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 3:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > 3. Can you please prepare a separate patch for test case changes so\n> > that it would be easier to verify that it fails without the patch and\n> > passed after the patch?\n>\n> Done\n>\n\nFew comments:\n=================\n1.\n -- consume DDL\n SELECT data FROM pg_logical_slot_get_changes('isolation_slot',\nNULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n- CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n'select array_agg(md5(g::text))::text from generate_series(1, 80000)\ng';\n+ CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n'select array_agg(md5(g::text))::text from generate_series(1, 60000)\ng';\n }\n\n\nIs there a reason for this change? I think probably here a lesser\nnumber of rows are sufficient to serve the purpose of the test but I\nam not sure if it is related to this patch or there is any other\nreason behind this change?\n\n2.\n+typedef struct\n+{\n+ bool xact_wrote_changes;\n+ bool stream_wrote_changes;\n+} TestDecodingTxnData;\n+\n\nI think here a comment explaining why we need this as a separate\nstructure would be better and probably explain why we need two\ndifferent members.\n\n3.\npg_decode_commit_txn()\n{\n..\n- if (data->skip_empty_xacts && !data->xact_wrote_changes)\n+ pfree(txndata);\n+ txn->output_plugin_private = false;\n+\n\nHere, don't we need to set txn->output_plugin_private as NULL as it is\na pointer and we do explicitly test it for being NULL at other places?\nAlso, change at other places where it is set as false.\n\n4.\n@@ -592,10 +610,24 @@ pg_decode_stream_start(LogicalDecodingContext *ctx,\n ReorderBufferTXN *txn)\n {\n TestDecodingData *data = ctx->output_plugin_private;\n+ TestDecodingTxnData *txndata = 
txn->output_plugin_private;\n\n- data->xact_wrote_changes = false;\n+ /*\n+ * If this is the first stream for the txn then allocate the txn plugin\n+ * data and set the xact_wrote_changes to false.\n+ */\n+ if (txndata == NULL)\n+ {\n+ txndata =\n\nAs we are explicitly testing for NULL here, isn't it better to\nexplicitly initialize 'output_plugin_private' with NULL in\nReorderBufferGetTXN?\n\n5.\n@@ -633,8 +666,18 @@ pg_decode_stream_abort(LogicalDecodingContext *ctx,\n XLogRecPtr abort_lsn)\n {\n TestDecodingData *data = ctx->output_plugin_private;\n+ ReorderBufferTXN *toptxn = txn->toptxn ? txn->toptxn : txn;\n+ TestDecodingTxnData *txndata = toptxn->output_plugin_private;\n+ bool xact_wrote_changes = txndata->xact_wrote_changes;\n\n- if (data->skip_empty_xacts && !data->xact_wrote_changes)\n+ if (txn->toptxn == NULL)\n+ {\n+ Assert(txn->output_plugin_private != NULL);\n+ pfree(txndata);\n+ txn->output_plugin_private = false;\n+ }\n+\n\nHere, if we are expecting 'output_plugin_private' to be set only for\ntoptxn then the Assert and reset should happen for toptxn? I find the\nchanges in this function a bit unclear so probably adding a comment\nhere could help.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 Nov 2020 15:18:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 3:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Nov 12, 2020 at 3:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 10, 2020 at 2:25 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > 3. Can you please prepare a separate patch for test case changes so\n> > > that it would be easier to verify that it fails without the patch and\n> > > passed after the patch?\n> >\n> > Done\n> >\n>\n> Few comments:\n> =================\n> 1.\n> -- consume DDL\n> SELECT data FROM pg_logical_slot_get_changes('isolation_slot',\n> NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> - CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n> 'select array_agg(md5(g::text))::text from generate_series(1, 80000)\n> g';\n> + CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS\n> 'select array_agg(md5(g::text))::text from generate_series(1, 60000)\n> g';\n> }\n>\n>\n> Is there a reason for this change? 
I think probably here a lesser\n> number of rows are sufficient to serve the purpose of the test but I\n> am not sure if it is related to this patch or there is any other\n> reason behind this change?\n\nI think I changed for some experiment and got included in the patch so\nreverted this.\n\n> 2.\n> +typedef struct\n> +{\n> + bool xact_wrote_changes;\n> + bool stream_wrote_changes;\n> +} TestDecodingTxnData;\n> +\n>\n> I think here a comment explaining why we need this as a separate\n> structure would be better and probably explain why we need two\n> different members.\n\nDone\n\n> 3.\n> pg_decode_commit_txn()\n> {\n> ..\n> - if (data->skip_empty_xacts && !data->xact_wrote_changes)\n> + pfree(txndata);\n> + txn->output_plugin_private = false;\n> +\n>\n> Here, don't we need to set txn->output_plugin_private as NULL as it is\n> a pointer and we do explicitly test it for being NULL at other places?\n> Also, change at other places where it is set as false.\n\nFixed\n\n> 4.\n> @@ -592,10 +610,24 @@ pg_decode_stream_start(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn)\n> {\n> TestDecodingData *data = ctx->output_plugin_private;\n> + TestDecodingTxnData *txndata = txn->output_plugin_private;\n>\n> - data->xact_wrote_changes = false;\n> + /*\n> + * If this is the first stream for the txn then allocate the txn plugin\n> + * data and set the xact_wrote_changes to false.\n> + */\n> + if (txndata == NULL)\n> + {\n> + txndata =\n>\n> As we are explicitly testing for NULL here, isn't it better to\n> explicitly initialize 'output_plugin_private' with NULL in\n> ReorderBufferGetTXN?\n\nDone\n\n> 5.\n> @@ -633,8 +666,18 @@ pg_decode_stream_abort(LogicalDecodingContext *ctx,\n> XLogRecPtr abort_lsn)\n> {\n> TestDecodingData *data = ctx->output_plugin_private;\n> + ReorderBufferTXN *toptxn = txn->toptxn ? 
txn->toptxn : txn;\n> + TestDecodingTxnData *txndata = toptxn->output_plugin_private;\n> + bool xact_wrote_changes = txndata->xact_wrote_changes;\n>\n> - if (data->skip_empty_xacts && !data->xact_wrote_changes)\n> + if (txn->toptxn == NULL)\n> + {\n> + Assert(txn->output_plugin_private != NULL);\n> + pfree(txndata);\n> + txn->output_plugin_private = false;\n> + }\n> +\n>\n> Here, if we are expecting 'output_plugin_private' to be set only for\n> toptxn then the Assert and reset should happen for toptxn? I find the\n> changes in this function a bit unclear so probably adding a comment\n> here could help.\n\nI have added the comments.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sun, 15 Nov 2020 11:34:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
},
{
"msg_contents": "On Sun, Nov 15, 2020 at 11:34 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n\nPushed after minor changes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 Nov 2020 19:36:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: logical streaming of xacts via test_decoding is broken"
}
] |
[
{
"msg_contents": "Hello,\n\nI've been doing some benchmarking on recent version of PostgreSQL and\nI'm seeing some regressions. The benchmark setup is as described in [1]\nexcept it looks like I got lucky in the runs used for that article.\n\nAfter many more runs, I get these NOPM averages (hopefully formatting\nwill survive):\n\n Users: 50 100 250 500\n12.4 485,914 707,535 739,808 589,856\n13.0 485,501 697,505 837,446 385,225\n14(2020-10-13) 521,902 759,609 941,212 611,647\n14(2020-11-02) 478,640 684,138 930,959 513,707\n\nThe 14s are taken from the nightly builds on those dates, I can't find\nany way to associate them with a specific git hash.\n\nFor 50 and 100 users, it seems there isn't all that much difference.\n250 users keeps improving, but at 500 users the wheels seem to fall off\nin 13, coming back a bit in 14.\n\nI'm not sure exactly what to make of this but we thought it would be\nimportant to raise the issue with the community. I'm trying to bisect\nthe 13 development cycle to see if anything stands out as the culprit,\nbut that will take some time.\n\nI plan to keep doing 14 at the start of every month (I started late in\nOctober).\n\n[1]\nhttps://www.enterprisedb.com/blog/postgresql-tproc-c-benchmarks-postgresql-12-vs-postgresql-13-performance\n\n-- \nVik Fearing\n\n\n",
"msg_date": "Mon, 9 Nov 2020 09:29:13 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Performance regressions"
},
{
"msg_contents": "On 11/9/20 9:29 AM, Vik Fearing wrote:\n> Hello,\n> \n> I've been doing some benchmarking on recent version of PostgreSQL and\n> I'm seeing some regressions. The benchmark setup is as described in [1]\n> except it looks like I got lucky in the runs used for that article.\n> \n> After many more runs, I get these NOPM averages (hopefully formatting\n> will survive):\n> \n> Users: 50 100 250 500\n> 12.4 485,914 707,535 739,808 589,856\n> 13.0 485,501 697,505 837,446 385,225\n> 14(2020-10-13) 521,902 759,609 941,212 611,647\n> 14(2020-11-02) 478,640 684,138 930,959 513,707\n\n\nI must have done something wrong because I cannot reproduce this\nanymore. Please disregard.\n-- \nVik Fearing\n\n\n",
"msg_date": "Tue, 8 Dec 2020 14:53:41 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: Performance regressions"
}
] |
[
{
"msg_contents": "Hi\n\nI think I found a typo for the output of an error message which may cause building warning.\nPlease refer to the attachment for the detail.\n\nPrevious discussion:\nhttps://www.postgresql.org/message-id/alpine.DEB.2.21.1910311939430.27369@lancre\n\nBest regards\nTang",
"msg_date": "Mon, 9 Nov 2020 08:39:29 +0000",
"msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Useless string ouput in error message"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 9:39 AM Tang, Haiying <tanghy.fnst@cn.fujitsu.com> wrote:\n>\n> Hi\n>\n> I think I found a typo for the output of an error message which may cause building warning.\n> Please refer to the attachment for the detail.\n>\n> Previous discussion:\n> https://www.postgresql.org/message-id/alpine.DEB.2.21.1910311939430.27369@lancre\n\nYeah, this looks like a copy/paste error. Pushed and backpatched to 13\nwhere it was introduced.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 9 Nov 2020 10:39:21 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Useless string ouput in error message"
},
{
"msg_contents": "\n>> I think I found a typo for the output of an error message which may cause building warning.\n>> Please refer to the attachment for the detail.\n\nIndeed. Thanks for the fix!\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 9 Nov 2020 21:30:01 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Useless string ouput in error message"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nToday I have accidentally noticed that autoprewarm feature of pg_prewarm \nused TimestampDifference()'s results in a wrong way.\n\nFirst, it used *seconds* result from it as a *milliseconds*. It was \ncausing it to make dump file autoprewarm.blocks ~every second with \ndefault setting of autoprewarm_interval = 300s.\n\nHere is a log part with debug output in this case:\n\n```\n2020-11-09 19:09:00.162 MSK [85328] LOG: dumping autoprewarm.blocks\n2020-11-09 19:09:01.161 MSK [85328] LOG: dumping autoprewarm.blocks\n2020-11-09 19:09:02.160 MSK [85328] LOG: dumping autoprewarm.blocks\n2020-11-09 19:09:03.159 MSK [85328] LOG: dumping autoprewarm.blocks\n```\n\nAfter fixing this issue I have noticed that it still dumps blocks twice \nat each timeout (here I set autoprewarm_interval to 15s):\n\n```\n2020-11-09 19:18:59.692 MSK [85662] LOG: dumping autoprewarm.blocks\n2020-11-09 19:18:59.700 MSK [85662] LOG: dumping autoprewarm.blocks\n\n2020-11-09 19:19:14.694 MSK [85662] LOG: dumping autoprewarm.blocks\n2020-11-09 19:19:14.704 MSK [85662] LOG: dumping autoprewarm.blocks\n```\n\nThis happens because at timeout time we were using continue, but \nactually we still have to wait the entire autoprewarm_interval after \nsuccessful dumping.\n\nI have fixed both issues in the attached patches and also added a \nminimalistic tap test as a first one to verify that this automatic \ndamping still works after refactoring. I put Robert into CC, since he is \nan author of this feature.\n\nWhat do you think?\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Mon, 09 Nov 2020 19:45:18 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Misuse of TimestampDifference() in the autoprewarm feature of\n pg_prewarm"
},
{
"msg_contents": "Hi,\n\nOn Mon, Nov 9, 2020 at 10:15 PM Alexey Kondratov <a.kondratov@postgrespro.ru>\nwrote:\n\n> Hi Hackers,\n>\n> Today I have accidentally noticed that autoprewarm feature of pg_prewarm\n> used TimestampDifference()'s results in a wrong way.\n>\n> First, it used *seconds* result from it as a *milliseconds*. It was\n> causing it to make dump file autoprewarm.blocks ~every second with\n> default setting of autoprewarm_interval = 300s.\n>\n>\nI had a look at this, and I agree that this is an issue. I also had a look\nat\nthe patch 0002, and the patch looks good to me.\n\nIn patch 0003 there is a typo:\n\n+ /* We have to sleep even after a successfull dump */\n\ns/successfull/successful\n\n\nRegards,\nJeevan Ladhe\n\nHi,On Mon, Nov 9, 2020 at 10:15 PM Alexey Kondratov <a.kondratov@postgrespro.ru> wrote:Hi Hackers,\n\nToday I have accidentally noticed that autoprewarm feature of pg_prewarm \nused TimestampDifference()'s results in a wrong way.\n\nFirst, it used *seconds* result from it as a *milliseconds*. It was \ncausing it to make dump file autoprewarm.blocks ~every second with \ndefault setting of autoprewarm_interval = 300s.\nI had a look at this, and I agree that this is an issue. I also had a look atthe patch 0002, and the patch looks good to me.In patch 0003 there is a typo:+ /* We have to sleep even after a successfull dump */s/successfull/successfulRegards,Jeevan Ladhe",
"msg_date": "Mon, 9 Nov 2020 23:35:52 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Misuse of TimestampDifference() in the autoprewarm feature of\n pg_prewarm"
},
{
"msg_contents": "Alexey Kondratov <a.kondratov@postgrespro.ru> writes:\n> After fixing this issue I have noticed that it still dumps blocks twice \n> at each timeout (here I set autoprewarm_interval to 15s):\n> ...\n> This happens because at timeout time we were using continue, but \n> actually we still have to wait the entire autoprewarm_interval after \n> successful dumping.\n\nI don't think your 0001 is correct. It would be okay if apw_dump_now()\ncould be counted on to take negligible time, but we shouldn't assume\nthat should we?\n\nI agree that the \"continue\" seems a bit bogus, because it's skipping\nthe ResetLatch call at the bottom of the loop; it's not quite clear\nto me whether that's a good thing or not. But the general idea of\nthe existing code seems to be to loop around and make a fresh calculation\nof how-long-to-wait, and that doesn't seem wrong.\n\n0002 seems like a pretty clear bug fix, though I wonder if this is exactly\nwhat we want to do going forward. It seems like a very large fraction of\nthe callers of TimestampDifference would like to have the value in msec,\nwhich means we're doing a whole lot of expensive and error-prone\narithmetic to break down the difference to sec/usec and then put it\nback together again. Let's get rid of that by inventing, say\nTimestampDifferenceMilliseconds(...).\n\nBTW, I see another bug of a related ilk. Look what\npostgres_fdw/connection.c is doing:\n\n TimestampDifference(now, endtime, &secs, µsecs);\n\n /* To protect against clock skew, limit sleep to one minute. */\n cur_timeout = Min(60000, secs * USECS_PER_SEC + microsecs);\n\n /* Sleep until there's something to do */\n wc = WaitLatchOrSocket(MyLatch,\n WL_LATCH_SET | WL_SOCKET_READABLE |\n WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n PQsocket(conn),\n cur_timeout, PG_WAIT_EXTENSION);\n\nWaitLatchOrSocket's timeout is measured in msec not usec. 
I think the\ncomment about \"clock skew\" is complete BS, and the Min() calculation was\nput in as a workaround by somebody observing that the sleep waited too\nlong, but not understanding why.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Nov 2020 13:53:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Misuse of TimestampDifference() in the autoprewarm feature of\n pg_prewarm"
},
{
"msg_contents": "On 2020-11-09 21:53, Tom Lane wrote:\n> Alexey Kondratov <a.kondratov@postgrespro.ru> writes:\n>> After fixing this issue I have noticed that it still dumps blocks \n>> twice\n>> at each timeout (here I set autoprewarm_interval to 15s):\n>> ...\n>> This happens because at timeout time we were using continue, but\n>> actually we still have to wait the entire autoprewarm_interval after\n>> successful dumping.\n> \n> I don't think your 0001 is correct. It would be okay if apw_dump_now()\n> could be counted on to take negligible time, but we shouldn't assume\n> that should we?\n> \n\nYes, it seems so, if I understand you correctly. I had a doubt about \npossibility of pg_ctl to exit earlier than a dumping process. Now I \nadded an explicit wait for dump file into test.\n\n> I agree that the \"continue\" seems a bit bogus, because it's skipping\n> the ResetLatch call at the bottom of the loop; it's not quite clear\n> to me whether that's a good thing or not. But the general idea of\n> the existing code seems to be to loop around and make a fresh \n> calculation\n> of how-long-to-wait, and that doesn't seem wrong.\n\nI have left the last patch intact, since it resolves the 'double dump' \nissue, but I agree with нщгк point about existing logic of the code, \nalthough it is a bit broken. So I have to think more about how to fix it \nin a better way.\n\n> 0002 seems like a pretty clear bug fix, though I wonder if this is \n> exactly\n> what we want to do going forward. It seems like a very large fraction \n> of\n> the callers of TimestampDifference would like to have the value in \n> msec,\n> which means we're doing a whole lot of expensive and error-prone\n> arithmetic to break down the difference to sec/usec and then put it\n> back together again. Let's get rid of that by inventing, say\n> TimestampDifferenceMilliseconds(...).\n\nYeah, I get into this problem after a bug in another extension — \npg_wait_sampling. 
I have attached 0002, which implements \nTimestampDifferenceMilliseconds(), so 0003 just uses this new function \nto solve the initial issues. If it looks good to you, then we can switch \nall similar callers to it.\n\n> BTW, I see another bug of a related ilk. Look what\n> postgres_fdw/connection.c is doing:\n> \n> TimestampDifference(now, endtime, &secs, µsecs);\n> \n> /* To protect against clock skew, limit sleep to one \n> minute. */\n> cur_timeout = Min(60000, secs * USECS_PER_SEC + \n> microsecs);\n> \n> /* Sleep until there's something to do */\n> wc = WaitLatchOrSocket(MyLatch,\n> WL_LATCH_SET | \n> WL_SOCKET_READABLE |\n> WL_TIMEOUT | \n> WL_EXIT_ON_PM_DEATH,\n> PQsocket(conn),\n> cur_timeout, PG_WAIT_EXTENSION);\n> \n> WaitLatchOrSocket's timeout is measured in msec not usec. I think the\n> comment about \"clock skew\" is complete BS, and the Min() calculation \n> was\n> put in as a workaround by somebody observing that the sleep waited too\n> long, but not understanding why.\n\nI wonder how many troubles one can get with all these unit conversions.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Mon, 09 Nov 2020 23:03:53 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Misuse of TimestampDifference() in the autoprewarm feature of\n pg_prewarm"
},
{
"msg_contents": "Alexey Kondratov <a.kondratov@postgrespro.ru> writes:\n> On 2020-11-09 21:53, Tom Lane wrote:\n>> 0002 seems like a pretty clear bug fix, though I wonder if this is exactly\n>> what we want to do going forward. It seems like a very large fraction of\n>> the callers of TimestampDifference would like to have the value in msec,\n>> which means we're doing a whole lot of expensive and error-prone\n>> arithmetic to break down the difference to sec/usec and then put it\n>> back together again. Let's get rid of that by inventing, say\n>> TimestampDifferenceMilliseconds(...).\n\n> Yeah, I get into this problem after a bug in another extension — \n> pg_wait_sampling. I have attached 0002, which implements \n> TimestampDifferenceMilliseconds(), so 0003 just uses this new function \n> to solve the initial issues. If it looks good to you, then we can switch \n> all similar callers to it.\n\nYeah, let's move forward with that --- in fact, I'm inclined to\nback-patch it. (Not till the current release cycle is done, though.\nI don't find this important enough to justify a last-moment patch.)\n\nBTW, I wonder if we shouldn't make TimestampDifferenceMilliseconds\nround any fractional millisecond up rather than down. Rounding down\nseems to create a hazard of uselessly waking just before the delay is\ncompleted. Better to wake just after.\n\nI still think your 0001 is fishy, but don't have time today to stare at\nit more closely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Nov 2020 15:25:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Misuse of TimestampDifference() in the autoprewarm feature of\n pg_prewarm"
},
{
"msg_contents": "On 2020-11-09 23:25, Tom Lane wrote:\n> Alexey Kondratov <a.kondratov@postgrespro.ru> writes:\n>> On 2020-11-09 21:53, Tom Lane wrote:\n>>> 0002 seems like a pretty clear bug fix, though I wonder if this is \n>>> exactly\n>>> what we want to do going forward. It seems like a very large \n>>> fraction of\n>>> the callers of TimestampDifference would like to have the value in \n>>> msec,\n>>> which means we're doing a whole lot of expensive and error-prone\n>>> arithmetic to break down the difference to sec/usec and then put it\n>>> back together again. Let's get rid of that by inventing, say\n>>> TimestampDifferenceMilliseconds(...).\n> \n>> Yeah, I get into this problem after a bug in another extension —\n>> pg_wait_sampling. I have attached 0002, which implements\n>> TimestampDifferenceMilliseconds(), so 0003 just uses this new function\n>> to solve the initial issues. If it looks good to you, then we can \n>> switch\n>> all similar callers to it.\n> \n> Yeah, let's move forward with that --- in fact, I'm inclined to\n> back-patch it. (Not till the current release cycle is done, though.\n> I don't find this important enough to justify a last-moment patch.)\n> \n> BTW, I wonder if we shouldn't make TimestampDifferenceMilliseconds\n> round any fractional millisecond up rather than down. Rounding down\n> seems to create a hazard of uselessly waking just before the delay is\n> completed. Better to wake just after.\n> \n\nYes, it make sense. I have changed TimestampDifferenceMilliseconds() to \nround result up if there is a reminder.\n\nAfter looking on the autoprewarm code more closely I have realised that \nthis 'double dump' issues was not an issues at all. I have just \nmisplaced a debug elog(), so its second output in the log was only \nindicating that we calculated delay_in_ms one more time. 
Actually, even \nwith wrong calculation of delay_in_ms the only problem was that we were \nbusy looping with ~1 second interval instead of waiting on latch.\n\nIt is still a buggy behaviour, but much less harmful than I have \noriginally thought.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Tue, 10 Nov 2020 19:58:44 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Misuse of TimestampDifference() in the autoprewarm feature of\n pg_prewarm"
},
{
"msg_contents": "Alexey Kondratov <a.kondratov@postgrespro.ru> writes:\n> After looking on the autoprewarm code more closely I have realised that \n> this 'double dump' issues was not an issues at all. I have just \n> misplaced a debug elog(), so its second output in the log was only \n> indicating that we calculated delay_in_ms one more time.\n\nAh --- that explains why I couldn't see a problem.\n\nI've pushed 0001+0002 plus some followup work to fix other places\nthat could usefully use TimestampDifferenceMilliseconds(). I have\nnot done anything with 0003 (the TAP test for pg_prewarm), and will\nleave that to the judgment of somebody who's worked with pg_prewarm\nbefore. To me it looks like it's not really testing things very\ncarefully at all; on the other hand, we have exactly zero test\ncoverage of that module today, so maybe something is better than\nnothing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Nov 2020 22:59:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Misuse of TimestampDifference() in the autoprewarm feature of\n pg_prewarm"
},
{
"msg_contents": "On 2020-11-11 06:59, Tom Lane wrote:\n> Alexey Kondratov <a.kondratov@postgrespro.ru> writes:\n>> After looking on the autoprewarm code more closely I have realised \n>> that\n>> this 'double dump' issues was not an issues at all. I have just\n>> misplaced a debug elog(), so its second output in the log was only\n>> indicating that we calculated delay_in_ms one more time.\n> \n> Ah --- that explains why I couldn't see a problem.\n> \n> I've pushed 0001+0002 plus some followup work to fix other places\n> that could usefully use TimestampDifferenceMilliseconds(). I have\n> not done anything with 0003 (the TAP test for pg_prewarm), and will\n> leave that to the judgment of somebody who's worked with pg_prewarm\n> before. To me it looks like it's not really testing things very\n> carefully at all; on the other hand, we have exactly zero test\n> coverage of that module today, so maybe something is better than\n> nothing.\n> \n\nGreat, thank you for generalisation of the issue and working on it.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Thu, 12 Nov 2020 13:47:34 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Misuse of TimestampDifference() in the autoprewarm feature of\n pg_prewarm"
}
] |
[
{
"msg_contents": "Currently the cost model of append path sums the cost/rows for all the\nsubpaths, it usually works well until we run into the run-time partition\nprune\ncase. The first result is that generic plans will rarely be used for some\ncases.\nFor instance, SELECT * FROM p WHERE pkey = $1; The custom plan will only\ncount the cost of one partition, however generic plan will count the cost\nfor all the\npartitions even we are sure that only 1 partition will survive. Another\nimpact\nis that planners may choose a wrong plan. for example, SELECT * FROM t1,\np\nWHERE t1.a = p.pkey; The cost/rows of t1 nest loop p is estimated highly\nimproperly. This patch wants to help this case to some extent.\n\nThe basic idea is we need to estimate how many partitions will survive after\nconsidering the runtime partition prune. After that, we will adjust the\ncost/rows accordingly. IIUC, this follows Robert's idea at [1].\nHowever there are too many special cases which is hard\nto handle, but luckily the most common case would be sth like partykey =\n$1,\nwhich we can estimate well, this patch is aimed to handle that part\nspecially.\nI supposed 75% partitions will survive for other cases arbitrarily,\nactually I\nwant to use 100% to be the same as the current situation. If we decide to\nhandle their special case differently, PartitionedRelQual has to be\nredefined\na bit (see comments around that.) which is the main data struct in the\nimplementation.\n\nThe attached is the minimum runnable patch. There are some obvious issue\nhere:\n\n1. MergeAppend path is not handled on purpose, it will be done at last.\n2. The cost for Parallel Append Path is not adjusted, I'm not sure about\nwhat is\n the best way to do it. 
So there are 2 test cases in\nsql/partition_prune.out\n failed due to this, I just echo the 'results' to expected' so that you\ncan\n know which one is impacted.\n\n\nHere are some simple test results.\n\ncreate table a1 partition of a for values in (1) partition by range (b, c);\ncreate table a1_1 partition of a1 for values from (1, 0) to (1, 10);\ncreate table a1_2 partition of a1 for values from (2, 0) to (2, 10);\n\ncreate table a2 partition of a for values in (2) partition by range (b, c);\ncreate table a2_1 partition of a2 for values from (1, 0) to (1, 10);\ncreate table a2_2 partition of a2 for values from (2, 0) to (2, 10);\n\n\ninsert into a select 1, i%2 + 1, i % 10 from generate_series(1, 10000) i;\ninsert into a select 2, i%2 + 1, i % 10 from generate_series(1, 10000) i;\n\nanalyze a;\n\nset plan_cache_mode to force_generic_plan;\n\nprepare s as select * from a where a = $1;\nPREPARE\nexplain execute s(1);\n QUERY PLAN\n-------------------------------------------------------------------\n Append (cost=0.00..231.00 rows=10000 width=12) *<< both rows/cost is\nadjusted.*\n Subplans Removed: 2\n -> Seq Scan on a1_1 a_1 (cost=0.00..90.50 rows=5000 width=12)\n Filter: (a = $1)\n -> Seq Scan on a1_2 a_2 (cost=0.00..90.50 rows=5000 width=12)\n Filter: (a = $1)\n(6 rows)\n\nprepare s4 as select * from a where a = $1 and b = $2 and c = $3;\nPREPARE\nexplain execute s4(1, 1, 2);\n QUERY PLAN\n--------------------------------------------------------------------\n Append (cost=0.00..120.50 rows=1000 width=12) *<< Here*\n Subplans Removed: 3\n -> Seq Scan on a1_1 a_1 (cost=0.00..115.50 rows=1000 width=12)\n Filter: ((a = $1) AND (b = $2) AND (c = $3))\n(4 rows)\n\nprepare s2 as select * from a where a = $1 union all select * from a where\na = $2;\nPREPARE\nexplain execute s2(1, 1);\n QUERY PLAN\n-------------------------------------------------------------------------\n Append (cost=0.00..762.00 rows=20000 width=12)\n -> Append (cost=0.00..231.00 rows=10000 width=12) 
*<< Here*\n Subplans Removed: 2\n -> Seq Scan on a1_1 a_1 (cost=0.00..90.50 rows=5000 width=12)\n Filter: (a = $1)\n -> Seq Scan on a1_2 a_2 (cost=0.00..90.50 rows=5000 width=12)\n Filter: (a = $1)\n -> Append (cost=0.00..231.00 rows=10000 width=12) << Here\n Subplans Removed: 2\n -> Seq Scan on a1_1 a_4 (cost=0.00..90.50 rows=5000 width=12)\n Filter: (a = $2)\n -> Seq Scan on a1_2 a_5 (cost=0.00..90.50 rows=5000 width=12)\n Filter: (a = $2)\n(13 rows)\n\nprepare s3 as select * from a where a = $1 union select * from a where a =\n$2;\nPREPARE\nexplain execute s3(1, 1);\n QUERY PLAN\n-------------------------------------------------------------------------------\n HashAggregate (cost=912.00..1112.00 rows=20000 width=12)\n Group Key: a.a, a.b, a.c\n -> Append (cost=0.00..762.00 rows=20000 width=12)\n -> Append (cost=0.00..231.00 rows=10000 width=12) << Here\n Subplans Removed: 2\n -> Seq Scan on a1_1 a_1 (cost=0.00..90.50 rows=5000\nwidth=12)\n Filter: (a = $1)\n -> Seq Scan on a1_2 a_2 (cost=0.00..90.50 rows=5000\nwidth=12)\n Filter: (a = $1)\n -> Append (cost=0.00..231.00 rows=10000 width=12) << Here\n Subplans Removed: 2\n -> Seq Scan on a1_1 a_4 (cost=0.00..90.50 rows=5000\nwidth=12)\n Filter: (a = $2)\n -> Seq Scan on a1_2 a_5 (cost=0.00..90.50 rows=5000\nwidth=12)\n Filter: (a = $2)\n(15 rows)\n\n-- add a limit to make sure the runtime partitions prune.\nexplain select * from generate_series(1, 10) i(i) join lateral (select *\nfrom a\nwhere a.a = (i.i % 2 + 1) and a.b = (i.i % 2 + 1) and a.c = (i.i % 10)\nlimit 10000000) m on true;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2030.10 rows=10000 width=16)\n -> Function Scan on generate_series i (cost=0.00..0.10 rows=10 width=4)\n -> Limit (cost=0.00..183.00 rows=1000 width=12)\n -> Append (cost=0.00..183.00 rows=1000 width=12) << Here\n -> Seq Scan on a1_1 a_1 (cost=0.00..178.00 rows=1000\nwidth=12)\n Filter: ((c = 
(i.i % 10)) AND (a = ((i.i % 2) + 1))\nAND (b = ((i.i % 2) + 1)))\n -> Seq Scan on a1_2 a_2 (cost=0.00..178.00 rows=1000\nwidth=12)\n Filter: ((c = (i.i % 10)) AND (a = ((i.i % 2) + 1))\nAND (b = ((i.i % 2) + 1)))\n -> Seq Scan on a2_1 a_3 (cost=0.00..178.00 rows=1000\nwidth=12)\n Filter: ((c = (i.i % 10)) AND (a = ((i.i % 2) + 1))\nAND (b = ((i.i % 2) + 1)))\n -> Seq Scan on a2_2 a_4 (cost=0.00..178.00 rows=1000\nwidth=12)\n Filter: ((c = (i.i % 10)) AND (a = ((i.i % 2) + 1))\nAND (b = ((i.i % 2) + 1)))\n(12 rows)\n\nAny thoughts about this? Thanks\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmoZHYoAL4HYwnGO25B8CxCB%2BvNMdf%2B7rbUzYykR4sU9yUA%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 10 Nov 2020 08:43:59 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Make Append Cost aware of some run time partition prune case"
},
{
"msg_contents": "On Mon, Nov 9, 2020 at 5:44 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Currently the cost model of append path sums the cost/rows for all the\n> subpaths, it usually works well until we run into the run-time partition\n> prune\n> case. The first result is that generic plans will rarely be used for some\n> cases.\n> For instance, SELECT * FROM p WHERE pkey = $1; The custom plan will only\n> count the cost of one partition, however generic plan will count the cost\n> for all the\n> partitions even we are sure that only 1 partition will survive. Another\n> impact\n> is that planners may choose a wrong plan. for example, SELECT * FROM t1,\n> p\n> WHERE t1.a = p.pkey; The cost/rows of t1 nest loop p is estimated highly\n> improperly. This patch wants to help this case to some extent.\n>\n\nGreetings,\n\nI was referred to this patch by Amit as a possible improvement for an issue\nI noticed recently. I had a test setup where I expected run-time pruning\nto kick in but it did not. 
I am trying to test this patch to see if it\nhelps for that scenario, but ran into an error running make install against\nthe current master (commit 0a687c8f1).\n\ncostsize.c: In function ‘cost_append’:\ncostsize.c:2171:32: error: ‘AppendPath’ {aka ‘struct AppendPath’} has no\nmember named ‘partitioned_rels’\n 2171 | List *partitioned_rels = apath->partitioned_rels;\n | ^~\nmake[4]: *** [<builtin>: costsize.o] Error 1\nmake[4]: Leaving directory\n'/var/lib/postgresql/git/postgresql/src/backend/optimizer/path'\nmake[3]: *** [../../../src/backend/common.mk:39: path-recursive] Error 2\nmake[3]: Leaving directory\n'/var/lib/postgresql/git/postgresql/src/backend/optimizer'\nmake[2]: *** [common.mk:39: optimizer-recursive] Error 2\nmake[2]: Leaving directory '/var/lib/postgresql/git/postgresql/src/backend'\nmake[1]: *** [Makefile:42: install-backend-recurse] Error 2\nmake[1]: Leaving directory '/var/lib/postgresql/git/postgresql/src'\nmake: *** [GNUmakefile:11: install-src-recurse] Error 2\n\nThanks,\n\nRyan Lambert\n\nOn Mon, Nov 9, 2020 at 5:44 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:Currently the cost model of append path sums the cost/rows for all thesubpaths, it usually works well until we run into the run-time partition prunecase. The first result is that generic plans will rarely be used for some cases. For instance, SELECT * FROM p WHERE pkey = $1; The custom plan will only count the cost of one partition, however generic plan will count the cost for all thepartitions even we are sure that only 1 partition will survive. Another impactis that planners may choose a wrong plan. for example, SELECT * FROM t1, p WHERE t1.a = p.pkey; The cost/rows of t1 nest loop p is estimated highlyimproperly. This patch wants to help this case to some extent.Greetings,I was referred to this patch by Amit as a possible improvement for an issue I noticed recently. I had a test setup where I expected run-time pruning to kick in but it did not. 
I am trying to test this patch to see if it helps for that scenario, but ran into an error running make install against the current master (commit 0a687c8f1).costsize.c: In function ‘cost_append’:costsize.c:2171:32: error: ‘AppendPath’ {aka ‘struct AppendPath’} has no member named ‘partitioned_rels’ 2171 | List *partitioned_rels = apath->partitioned_rels; | ^~make[4]: *** [<builtin>: costsize.o] Error 1make[4]: Leaving directory '/var/lib/postgresql/git/postgresql/src/backend/optimizer/path'make[3]: *** [../../../src/backend/common.mk:39: path-recursive] Error 2make[3]: Leaving directory '/var/lib/postgresql/git/postgresql/src/backend/optimizer'make[2]: *** [common.mk:39: optimizer-recursive] Error 2make[2]: Leaving directory '/var/lib/postgresql/git/postgresql/src/backend'make[1]: *** [Makefile:42: install-backend-recurse] Error 2make[1]: Leaving directory '/var/lib/postgresql/git/postgresql/src'make: *** [GNUmakefile:11: install-src-recurse] Error 2Thanks,Ryan Lambert",
"msg_date": "Wed, 3 Mar 2021 17:14:26 -0700",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": false,
"msg_subject": "Re: Make Append Cost aware of some run time partition prune case"
},
{
"msg_contents": "Hi Ryan:\n\nOn Thu, Mar 4, 2021 at 8:14 AM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n\n> On Mon, Nov 9, 2020 at 5:44 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>> Currently the cost model of append path sums the cost/rows for all the\n>> subpaths, it usually works well until we run into the run-time partition\n>> prune\n>> case. The first result is that generic plans will rarely be used for\n>> some cases.\n>> For instance, SELECT * FROM p WHERE pkey = $1; The custom plan will only\n>> count the cost of one partition, however generic plan will count the cost\n>> for all the\n>> partitions even we are sure that only 1 partition will survive. Another\n>> impact\n>> is that planners may choose a wrong plan. for example, SELECT * FROM\n>> t1, p\n>> WHERE t1.a = p.pkey; The cost/rows of t1 nest loop p is estimated highly\n>> improperly. This patch wants to help this case to some extent.\n>>\n>\n> Greetings,\n>\n> I was referred to this patch by Amit as a possible improvement for an\n> issue I noticed recently. I had a test setup where I expected run-time\n> pruning to kick in but it did not. 
I am trying to test this patch to see\n> if it helps for that scenario, but ran into an error running make install\n> against the current master (commit 0a687c8f1).\n>\n> costsize.c: In function ‘cost_append’:\n> costsize.c:2171:32: error: ‘AppendPath’ {aka ‘struct AppendPath’} has no\n> member named ‘partitioned_rels’\n> 2171 | List *partitioned_rels = apath->partitioned_rels;\n> | ^~\n> make[4]: *** [<builtin>: costsize.o] Error 1\n> make[4]: Leaving directory\n> '/var/lib/postgresql/git/postgresql/src/backend/optimizer/path'\n> make[3]: *** [../../../src/backend/common.mk:39: path-recursive] Error 2\n> make[3]: Leaving directory\n> '/var/lib/postgresql/git/postgresql/src/backend/optimizer'\n> make[2]: *** [common.mk:39: optimizer-recursive] Error 2\n> make[2]: Leaving directory '/var/lib/postgresql/git/postgresql/src/backend'\n> make[1]: *** [Makefile:42: install-backend-recurse] Error 2\n> make[1]: Leaving directory '/var/lib/postgresql/git/postgresql/src'\n> make: *** [GNUmakefile:11: install-src-recurse] Error 2\n>\n> Thanks,\n>\n> Ryan Lambert\n>\n>\n\nThanks for checking. This patch is on a very old master and the code is\ntoo complex\nsince I wanted to handle a full scenario of a run time partition prune,\nwhich has lots\nof troubles and not very practical I think. so I am not happy with that\nnow.\n\nI have implemented a new one, which only handles 1 level of partitioned\ntable, and\nonly 1 partition key. and only handle the eq operators like partkey = $1 /\npartkey in ($1, $2)\n/ parkey = $1 or partkey = $2; The patch works well in my user case. I\ncan send\none on the latest master shortly, and hope it is helpful for you as well.\n\n(At the same time, I also ran into a case that we can expand more init\npartition\nprune case [1], you can check that one if you like. 
I am happy with that\npatch\nnow).\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAKU4AWq4NLxu5JF9%2Bd%3Do%3DA636-%3DeFNFmPx%2BkJ44ezTm%3DikZ73w%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\nHi Ryan:On Thu, Mar 4, 2021 at 8:14 AM Ryan Lambert <ryan@rustprooflabs.com> wrote:On Mon, Nov 9, 2020 at 5:44 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:Currently the cost model of append path sums the cost/rows for all thesubpaths, it usually works well until we run into the run-time partition prunecase. The first result is that generic plans will rarely be used for some cases. For instance, SELECT * FROM p WHERE pkey = $1; The custom plan will only count the cost of one partition, however generic plan will count the cost for all thepartitions even we are sure that only 1 partition will survive. Another impactis that planners may choose a wrong plan. for example, SELECT * FROM t1, p WHERE t1.a = p.pkey; The cost/rows of t1 nest loop p is estimated highlyimproperly. This patch wants to help this case to some extent.Greetings,I was referred to this patch by Amit as a possible improvement for an issue I noticed recently. I had a test setup where I expected run-time pruning to kick in but it did not. 
I am trying to test this patch to see if it helps for that scenario, but ran into an error running make install against the current master (commit 0a687c8f1).costsize.c: In function ‘cost_append’:costsize.c:2171:32: error: ‘AppendPath’ {aka ‘struct AppendPath’} has no member named ‘partitioned_rels’ 2171 | List *partitioned_rels = apath->partitioned_rels; | ^~make[4]: *** [<builtin>: costsize.o] Error 1make[4]: Leaving directory '/var/lib/postgresql/git/postgresql/src/backend/optimizer/path'make[3]: *** [../../../src/backend/common.mk:39: path-recursive] Error 2make[3]: Leaving directory '/var/lib/postgresql/git/postgresql/src/backend/optimizer'make[2]: *** [common.mk:39: optimizer-recursive] Error 2make[2]: Leaving directory '/var/lib/postgresql/git/postgresql/src/backend'make[1]: *** [Makefile:42: install-backend-recurse] Error 2make[1]: Leaving directory '/var/lib/postgresql/git/postgresql/src'make: *** [GNUmakefile:11: install-src-recurse] Error 2Thanks,Ryan Lambert \nThanks for checking. This patch is on a very old master and the code is too complexsince I wanted to handle a full scenario of a run time partition prune, which has lotsof troubles and not very practical I think. so I am not happy with that now. I have implemented a new one, which only handles 1 level of partitioned table, and only 1 partition key. and only handle the eq operators like partkey = $1 / partkey in ($1, $2)/ parkey = $1 or partkey = $2; The patch works well in my user case. I can sendone on the latest master shortly, and hope it is helpful for you as well. (At the same time, I also ran into a case that we can expand more init partitionprune case [1], you can check that one if you like. I am happy with that patchnow). [1] https://www.postgresql.org/message-id/flat/CAKU4AWq4NLxu5JF9%2Bd%3Do%3DA636-%3DeFNFmPx%2BkJ44ezTm%3DikZ73w%40mail.gmail.com -- Best RegardsAndy Fan (https://www.aliyun.com/)",
"msg_date": "Thu, 4 Mar 2021 09:05:29 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make Append Cost aware of some run time partition prune case"
},
{
"msg_contents": ">\n> I have implemented a new one, which only handles 1 level of partitioned\n> table, and\n> only 1 partition key. and only handle the eq operators like partkey = $1\n> / partkey in ($1, $2)\n> / parkey = $1 or partkey = $2; The patch works well in my user case. I\n> can send\n> one on the latest master shortly, and hope it is helpful for you as well.\n>\n>\nHi:\n\nHere is the new patch for this topic, which has proved works in my limited\nuser\ncase (apply the similar logic on pg11), and it is incomplete since more\ncases\nshould be covered but not. Uploading this patch now is just for discussing\nand\ntesting.\n\nDesign principle:\n\n1). the cost of AppendPath should be reduced for either init partition\nprune or run\ntime partition prune. All of the startup_cost, total_cost, rows should be\nadjusted. As for the startup_cost, if we should adjust it carefully, or\nelse we can\nget the run_time_cost less than 0.\n\n2). When we merge the subpath from sub-partitioned rel via\naccumulate_append_subpath, currently we just merge the subpaths and discard\nthe\ncost/rows in AppendPath. After this feature is involved, we may consider\nto use\nthe AppendPath's cost as well during this stage.\n\n3). When join is involved, AppendPath is not enough since the estimated\nrows for\na join relation cares about rel1->rows, rel2->rows only, the Path.rows is\nnot\ncared. so we need to adjust rel->rows as well. and only for the init\npartition prune case. (appendrel->rows for planning time partition prune is\nhandled already).\n\nThe biggest problem of the above is I don't know how to adjust the cost for\nParallel Append Path. Currently I just ignore it, which would cause some\nquery\nshould use Parallel Append Path but not.\n\nSomething I don't want to handle really unless we can address its value.\n1. Operators like >, <. Between and.\n2. the uncompleted part key appeared in prunequals. Like we have partition\nkey (a,\n b). 
But use just use A = 1 as restrictinfo.\n\nThe main reason I don't want to handle them are 1). it would be uncommon.\nb). It introduces extra complexity. c). at last, we can't estimate it well\nlike partkey > $1,\nwhat would be a prune ratio for ).\n\nSomething I don't handle so far are: 1). accumulate_append_subpath\nstuff. 2). MergeAppend. 3). Multi Partition key.\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Thu, 4 Mar 2021 12:21:08 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make Append Cost aware of some run time partition prune case"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 9:51 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n>>\n>> I have implemented a new one, which only handles 1 level of partitioned table, and\n>> only 1 partition key. and only handle the eq operators like partkey = $1 / partkey in ($1, $2)\n>> / parkey = $1 or partkey = $2; The patch works well in my user case. I can send\n>> one on the latest master shortly, and hope it is helpful for you as well.\n>>\n>\n> Hi:\n>\n> Here is the new patch for this topic, which has proved works in my limited user\n> case (apply the similar logic on pg11), and it is incomplete since more cases\n> should be covered but not. Uploading this patch now is just for discussing and\n> testing.\n>\n> Design principle:\n>\n> 1). the cost of AppendPath should be reduced for either init partition prune or run\n> time partition prune. All of the startup_cost, total_cost, rows should be\n> adjusted. As for the startup_cost, if we should adjust it carefully, or else we can\n> get the run_time_cost less than 0.\n>\n> 2). When we merge the subpath from sub-partitioned rel via\n> accumulate_append_subpath, currently we just merge the subpaths and discard the\n> cost/rows in AppendPath. After this feature is involved, we may consider to use\n> the AppendPath's cost as well during this stage.\n>\n> 3). When join is involved, AppendPath is not enough since the estimated rows for\n> a join relation cares about rel1->rows, rel2->rows only, the Path.rows is not\n> cared. so we need to adjust rel->rows as well. and only for the init\n> partition prune case. (appendrel->rows for planning time partition prune is\n> handled already).\n>\n> The biggest problem of the above is I don't know how to adjust the cost for\n> Parallel Append Path. Currently I just ignore it, which would cause some query\n> should use Parallel Append Path but not.\n>\n> Something I don't want to handle really unless we can address its value.\n> 1. Operators like >, <. Between and.\n> 2. 
the uncompleted part key appeared in prunequals. Like we have partition key (a,\n> b). But use just use A = 1 as restrictinfo.\n>\n> The main reason I don't want to handle them are 1). it would be uncommon.\n> b). It introduces extra complexity. c). at last, we can't estimate it well like partkey > $1,\n> what would be a prune ratio for ).\n>\n> Something I don't handle so far are: 1). accumulate_append_subpath\n> stuff. 2). MergeAppend. 3). Multi Partition key.\n>\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 14 Jul 2021 21:25:28 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make Append Cost aware of some run time partition prune case"
},
{
"msg_contents": "> On 14 Jul 2021, at 17:55, vignesh C <vignesh21@gmail.com> wrote:\n\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nAs the thread has stalled, this patch still doesn't apply (judging by the log\nit's likely not too hard to resolve). I'm marking this patch Returned with\nFeedback, feel free to open a new entry for an updated patch.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 4 Nov 2021 11:10:31 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Make Append Cost aware of some run time partition prune case"
}
] |
[
{
"msg_contents": "I am working on a project called \"bottom-up index deletion\" (I was\ncalling it \"delete deduplication\" until recently). This is another\nnbtree project. The patch series adds a mechanism that controls old\nduplicate versions caused by non-HOT UPDATEs. This is very effective.\nIn fact, it prevents almost all \"unnecessary\" page splits. By that I\nmean it totally prevents page splits caused by version churn, where\nold duplicate versions accumulate in indexes and cause page splits.\nThis is at least true in the absence of a long running xact/open\nsnapshot, though only provided the index isn't very small.\n\nImportantly, the delete mechanism that the. patch series adds\nnaturally tends to bound the number of physical index tuple versions\nfor any given logical row represented in the index. I believe that\nthat's a big problem for us at present.\n\nI'm starting a new thread to discuss issues with changing the\nsignature of the aminsert() index AM routine to add a hint for the\nbenefit of the patch. I would like to get some buy-in on the details.\nBut first some background:\n\nThe patch relies on the executor passing down a hint to nbtree that\nindicates that the incoming index tuple is from an UPDATE that did not\nmodify any of the columns covered by the index. That's how the new\ndeletion mechanism avoids wasting cycles in cases that have no chance\nof benefiting from the optimization, like plain INSERTs whose\naminsert() calls naturally insert non-transient index tuples that\npoint to whole new logical rows. We only trigger a relatively\nexpensive special round of deletion when we notice an accumulation of\nversions on a leaf page, and are fairly confident (though not yet\ncompletely sure) that that's what we see on the page. Hence\n\"bottom-up\".\n\nIt's easy to review the executor mechanism in isolation, without\nlooking at the indexing stuff at all. It's broken out into its own\npatch file in the latest version. 
The patch is called\nv7-0002-Pass-down-logically-unchanged-index-hint.patch, and can be\ndownloaded here:\n\nhttps://postgr.es/m/CAH2-WzmP5AymEfT_n3wAdvW8D7DduapHPqRzds5kv7VjnXsx6Q@mail.gmail.com\n\nSome questions about the infrastructure I'm thinking of adding:\n\n1. Is there any reason to believe that this will create noticeable\nperformance overhead elsewhere?\n\n2. Is the current approach of adding a new boolean argument to\naminsert() comprehensive?\n\nExamples of where I might have gone wrong with the current draft design:\n\nMight I be missing an opportunity to add a more general mechanism that\nwill be useful in a variety of index access methods?\n\nMaybe an enum would make more sense?\n\nOr maybe I should add a new amupdate() routine? Or something else completely?\n\nISTM that the general idea of giving index access methods hints about\nwhat's going on with UPDATE chains is a good one. It isn't necessary\nfor index AMs to have a *reliable* understanding of exactly what an\nUPDATE chain looks like -- that would be very brittle. But conveying\nthe general idea of the \"lifecycle\" of the data at the level of a leaf\npage makes perfect sense.\n\nTo expand the discussion beyond the immediate needs of my patch: I\nalso think that it would make sense to \"delete mark\" index tuples with\na visibility hint (think of a new \"probably going to be garbage before\ntoo long\" index tuple bit) when DELETE statements run. Again, this\ndoesn't have to be reliable in the same way that setting an LP_DEAD\nbit does. This delete marking stuff is not on my agenda right now.\nJust an example of another mechanism based on similar principles.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 9 Nov 2020 17:10:23 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Adding an aminsert() hint that triggers bottom-up index deletion for\n UPDATEs that can't use HOT"
}
] |
[
{
"msg_contents": "Hi all,\n\nTrying to use OpenSSL with FIPS breaks if one attempts to call the\nlow-level SHA2 routines we currently use in sha2_openssl.c (upstream\ncalls that OpenSSLDie()), forcing a crash of PG. The actual way to\nfix that is to use EVP as I solved here:\nhttps://commitfest.postgresql.org/30/2762/\n\nUnfortunately, this enforces an ABI breakage so this is not\nbackpatchable material. Now, if one attempts to use OpenSSL with\nFIPS, the initialization of backup manifests in\nInitializeBackupManifest() enforces a call to pg_sha256_init() for the\nmanifest file itself even if pg_basebackup, or anything requesting a\nbase backup with the replication protocol, does *not* want a backup\nmanifest. One can for example enforce to not use a backup manifest\nwith --no-manifest in pg_basebackup, but even if you specify that base\nbackups cause the backend to crash on HEAD if using FIPS in OpenSSL.\n\nLooking at the code, the checksum of the manifest file is updated or\nfinalized only if IsManifestEnabled() is satisfied, meaning that if\nthe caller does not want a manifest we do its initialization, but\nwe have no use for it.\n\nAttached is a patch that I would like to back-patch down to v13 to\navoid this useless initialization, giving users the possibility to\ntake base backups with FIPS when not using a backup manifest. Without\nthe solution in the first paragraph, you cannot make use of backup \nmanifests at all with OpenSSL+FIPS (one can still enforce the use of\nthe in-core SHA2 implementation even if building with OpenSSL), but at\nleast it gives an escape route with 13.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 10 Nov 2020 11:00:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Avoiding useless SHA256 initialization with backup manifests,\n breaking base backups with FIPS"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 11:00:14AM +0900, Michael Paquier wrote:\n> Attached is a patch that I would like to back-patch down to v13 to\n> avoid this useless initialization, giving users the possibility to\n> take base backups with FIPS when not using a backup manifest. Without\n> the solution in the first paragraph, you cannot make use of backup \n> manifests at all with OpenSSL+FIPS (one can still enforce the use of\n> the in-core SHA2 implementation even if building with OpenSSL), but at\n> least it gives an escape route with 13.\n\nOkay. Hearing nothing, I have applied that.\n--\nMichael",
"msg_date": "Thu, 12 Nov 2020 11:00:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding useless SHA256 initialization with backup manifests,\n breaking base backups with FIPS"
}
] |
[
{
"msg_contents": "Hi all\n\nThe attached patch adds a test to the TAP logical decoding suite to show\nthat pg_replication_slot_advance() works on a server in standby mode.\n\nI didn't add a full demonstration of how to do failover with logical slots\nbecause we're still missing a way to \"sync\" a logical slot from a primary\nto a standby, or a way to directly create such a slot safely on a standby\nin a way that enforces a safe catalog_xmin etc.\n\nYou can't replay from the slot unless the server is promoted, so I don't\ntest that.\n\nI'm not sure if anyone's going to find it worth committing, but it's here\nso searchers can find it at least.",
"msg_date": "Tue, 10 Nov 2020 13:32:24 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] TAP test showing that pg_replication_slot_advance() works on\n standby"
},
{
"msg_contents": "Hi Craig,\n\nOn Tue, Nov 10, 2020 at 2:32 PM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> Hi all\n>\n> The attached patch adds a test to the TAP logical decoding suite to show that pg_replication_slot_advance() works on a server in standby mode.\n>\n> I didn't add a full demonstration of how to do failover with logical slots because we're still missing a way to \"sync\" a logical slot from a primary to a standby, or a way to directly create such a slot safely on a standby in a way that enforces a safe catalog_xmin etc.\n>\n> You can't replay from the slot unless the server is promoted, so I don't test that.\n>\n> I'm not sure if anyone's going to find it worth committing, but it's here so searchers can find it at least.\n\nYou sent in your patch,\n0001-Extend-TAP-test-for-pg_replication_slot_advance-to-c_patch.txt to\npgsql-hackers on Nov10, but you did not post it to the next\nCommitFest[1]. If this was intentional, then you need to take no\naction. However, if you want your patch to be reviewed as part of the\nupcoming CommitFest, then you need to add it yourself before\n2021-01-01 AOE[2]. Thanks for your contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Dec 2020 17:53:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] TAP test showing that pg_replication_slot_advance() works\n on standby"
}
] |
[
{
"msg_contents": "Hi all\n\nI noticed that when recovery.conf was removed in 2dedf4d9a8 (yay!) the docs\nfor it were removed completely as well. That's largely sensible, but is\nconfusing when users have upgraded and are trying to find out what\nhappened, or how to configure equivalent functionality.\n\nhttps://www.postgresql.org/docs/11/recovery-config.html just vanished for\n/12/, and as a result\nhttps://www.postgresql.org/docs/current/recovery-config.html is a 404. I\nthink that's unhelpful since we encourage people to use /current/ links.\n\nThe attached patch restores the recovery-config.html page with a short note\nexplaining why it's gone and what to do instead. It's added to a new\nappendix \"Obsolete or renamed features, settings and files\".\n\nI found it remarkably hard to find out what exactly made a \"standby server\"\nactually be a standby server in the docs so I have added a couple of\ncross-references that make the role of the standby.signal file much more\ndiscoverable from relevant locations.\n\nI propose a policy that we preserve our <chapter> and <sect1> ids. We\nshould move them to an \"obsolete\" section at the end, like the one I\ncreated here, and provide stubs for them instead of removing them. That'll\nhelp prevent us from breaking links on the wider web, in 3rd party\ndocumentation, etc.",
"msg_date": "Tue, 10 Nov 2020 13:38:14 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Add docs stub for recovery.conf"
},
{
"msg_contents": "On Tue, Nov 10, 2020 at 01:38:14PM +0800, Craig Ringer wrote:\n> Hi all\n> \n> I noticed that when recovery.conf was removed in 2dedf4d9a8 (yay!) the docs for\n> it were removed completely as well. That's largely sensible, but is confusing\n> when users have upgraded and are trying to find out what happened, or how to\n> configure equivalent functionality.\n\nI don't see the logic in carrying doc stuff that we don't have anymore.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 11 Nov 2020 14:44:47 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 12:44 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Nov 10, 2020 at 01:38:14PM +0800, Craig Ringer wrote:\n> > Hi all\n> >\n> > I noticed that when recovery.conf was removed in 2dedf4d9a8 (yay!) the\n> docs for\n> > it were removed completely as well. That's largely sensible, but is\n> confusing\n> > when users have upgraded and are trying to find out what happened, or\n> how to\n> > configure equivalent functionality.\n>\n> I don't see the logic in carrying doc stuff that we don't have anymore.\n>\n>\nI do. Saying why something went away has value. For small stuff you have\ncommit messages. For user-facing documentation stuff that warranted its\nown page, having said page remain and describe the change seems worthwhile.\n\nDavid J.\n\nOn Wed, Nov 11, 2020 at 12:44 PM Bruce Momjian <bruce@momjian.us> wrote:On Tue, Nov 10, 2020 at 01:38:14PM +0800, Craig Ringer wrote:\n> Hi all\n> \n> I noticed that when recovery.conf was removed in 2dedf4d9a8 (yay!) the docs for\n> it were removed completely as well. That's largely sensible, but is confusing\n> when users have upgraded and are trying to find out what happened, or how to\n> configure equivalent functionality.\n\nI don't see the logic in carrying doc stuff that we don't have anymore.I do. Saying why something went away has value. For small stuff you have commit messages. For user-facing documentation stuff that warranted its own page, having said page remain and describe the change seems worthwhile.David J.",
"msg_date": "Wed, 11 Nov 2020 12:57:49 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "> On 11 Nov 2020, at 20:44, Bruce Momjian <bruce@momjian.us> wrote:\n> On Tue, Nov 10, 2020 at 01:38:14PM +0800, Craig Ringer wrote:\n\n>> I noticed that when recovery.conf was removed in 2dedf4d9a8 (yay!) the docs for\n>> it were removed completely as well. That's largely sensible, but is confusing\n>> when users have upgraded and are trying to find out what happened, or how to\n>> configure equivalent functionality.\n> \n> I don't see the logic in carrying doc stuff that we don't have anymore.\n\nWell, we do have that already in <tip>'s sprinkled across the docs where it\nmakes sense to help transitioning users, like this one in func.sgml:\n\n \"Prior to PostgreSQL 12, it was possible to skip arbitrary text in the\n input string using non-letter or non-digit characters...\"\n\nIt doesn't seem like a terrible idea to do a similar one for recovery.conf.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 11 Nov 2020 20:59:40 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 08:59:40PM +0100, Daniel Gustafsson wrote:\n> > On 11 Nov 2020, at 20:44, Bruce Momjian <bruce@momjian.us> wrote:\n> > On Tue, Nov 10, 2020 at 01:38:14PM +0800, Craig Ringer wrote:\n> \n> >> I noticed that when recovery.conf was removed in 2dedf4d9a8 (yay!) the docs for\n> >> it were removed completely as well. That's largely sensible, but is confusing\n> >> when users have upgraded and are trying to find out what happened, or how to\n> >> configure equivalent functionality.\n> > \n> > I don't see the logic in carrying doc stuff that we don't have anymore.\n> \n> Well, we do have that already in <tip>'s sprinkled across the docs where it\n> makes sense to help transitioning users, like this one in func.sgml:\n> \n> \"Prior to PostgreSQL 12, it was possible to skip arbitrary text in the\n> input string using non-letter or non-digit characters...\"\n> \n> It doesn't seem like a terrible idea to do a similar one for recovery.conf.\n\nI am fine with a tip. The patch looked like it was creating a new\nchapter for it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 11 Nov 2020 15:01:55 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "> On 11 Nov 2020, at 21:01, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Wed, Nov 11, 2020 at 08:59:40PM +0100, Daniel Gustafsson wrote:\n>>> On 11 Nov 2020, at 20:44, Bruce Momjian <bruce@momjian.us> wrote:\n>>> On Tue, Nov 10, 2020 at 01:38:14PM +0800, Craig Ringer wrote:\n>> \n>>>> I noticed that when recovery.conf was removed in 2dedf4d9a8 (yay!) the docs for\n>>>> it were removed completely as well. That's largely sensible, but is confusing\n>>>> when users have upgraded and are trying to find out what happened, or how to\n>>>> configure equivalent functionality.\n>>> \n>>> I don't see the logic in carrying doc stuff that we don't have anymore.\n>> \n>> Well, we do have that already in <tip>'s sprinkled across the docs where it\n>> makes sense to help transitioning users, like this one in func.sgml:\n>> \n>> \"Prior to PostgreSQL 12, it was possible to skip arbitrary text in the\n>> input string using non-letter or non-digit characters...\"\n>> \n>> It doesn't seem like a terrible idea to do a similar one for recovery.conf.\n> \n> I am fine with a tip. The patch looked like it was creating a new\n> chapter for it.\n\nI admittedly hadn't looked at the patch, but now that I have I agree with not\nadding a separate \"obsolete\" topic for it. I'd prefer to use tips within the\ndocs, they will also help guide users who search for recovery.conf and lands in\nthe tip which is next to the relevant updated documentation on the topic.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 11 Nov 2020 21:07:39 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 3:44 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Nov 10, 2020 at 01:38:14PM +0800, Craig Ringer wrote:\n> > Hi all\n> >\n> > I noticed that when recovery.conf was removed in 2dedf4d9a8 (yay!) the\n> docs for\n> > it were removed completely as well. That's largely sensible, but is\n> confusing\n> > when users have upgraded and are trying to find out what happened, or\n> how to\n> > configure equivalent functionality.\n>\n> I don't see the logic in carrying doc stuff that we don't have anymore.\n>\n\nI explained why.\n\nHere's how the rendered docs look: https://imgur.com/a/VyjzEw5\n\nThink. You're used to recovery.conf. You've recently switched to pg 12. You\nsearch for \"recovery.conf\" or \"primary_slot_name\" or \"standby_mode\" or\nsomething. You of course land up at\nhttps://www.postgresql.org/docs/11/recovery-config.html or another version.\n\nWhat do you do now? There's no \"12\" or \"current\" link for your version. You\ndon't follow postgres development closely, and you have no idea we removed\nthe file. You might work it out from the release notes. But even then, if\nyou try searching for \"standby_mode\" in the updated docs will tell you\nbasically nothing, it has just vanished\n\nAlso by simply deleting the page, we've broken web links to\nhttps://www.postgresql.org/docs/current/recovery-config.html\n\nSo there are three really good reasons:\n\n* Help users of the web docs navigate to the right place when we remove\nthings\n* Don't break web links (breaking links without redirects is bad, the web\nis sad)\n* Help upgrading users who know the old terms find the new terms\n\nI'd welcome your suggestions on other ways to arrange this, so long as it\nmeets the basic requirement \"retain the linktable target 'recovery-config' \"\n\nIn general I think it's quite unpleasant for users to have docs sections\njust vanish. 
I strongly suggest that we enact a policy going forwards that\nany <chapter> or <sect1> removal should be accompanied by a redirect or\nstub that helps users find the new contents. It regularly annoys me when\nI'm trying to navigate around various versions of the docs and things just\nvanish.",
"msg_date": "Thu, 12 Nov 2020 10:21:02 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 4:01 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Nov 11, 2020 at 08:59:40PM +0100, Daniel Gustafsson wrote:\n> > > On 11 Nov 2020, at 20:44, Bruce Momjian <bruce@momjian.us> wrote:\n> > > On Tue, Nov 10, 2020 at 01:38:14PM +0800, Craig Ringer wrote:\n> >\n> > >> I noticed that when recovery.conf was removed in 2dedf4d9a8 (yay!)\n> the docs for\n> > >> it were removed completely as well. That's largely sensible, but is\n> confusing\n> > >> when users have upgraded and are trying to find out what happened, or\n> how to\n> > >> configure equivalent functionality.\n> > >\n> > > I don't see the logic in carrying doc stuff that we don't have anymore.\n> >\n> > Well, we do have that already in <tip>'s sprinkled across the docs where\n> it\n> > makes sense to help transitioning users, like this one in func.sgml:\n> >\n> > \"Prior to PostgreSQL 12, it was possible to skip arbitrary text in\n> the\n> > input string using non-letter or non-digit characters...\"\n> >\n> > It doesn't seem like a terrible idea to do a similar one for\n> recovery.conf.\n>\n> I am fine with a tip. The patch looked like it was creating a new\n> chapter for it.\n>\n>\nIt is. Or rather, an appendix right at the end to hold info on things we\nremoved or renamed and where to find them now.\n\nYou can't AFAICS make docbook create a toplevel linkable file for a <tip> .\nA <tip> won't un-break\nhttps://www.postgresql.org/docs/current/recovery-config.html or make help\npeople who visit https://www.postgresql.org/docs/11/recovery-config.html\nfigure out what's going on if they're using pg12 and there's no link to\nversion 12 in the nav section. A <tip> won't add index entries for renamed\nsettings, so someone looking up \"standby_mode\" can find out that we've\nswitched to a file called \"standby.signal\" instead.\n\nPretend you're a user who has upgraded from pg 11. You're looking at the Pg\n12 docs. 
How long does it take you to find out how to make a server into a\nstandby now? It took me longer than I would've expected...",
"msg_date": "Thu, 12 Nov 2020 10:25:23 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 10:21:02AM +0800, Craig Ringer wrote:\n> Here's how the rendered docs look: https://imgur.com/a/VyjzEw5\n> \n> Think. You're used to recovery.conf. You've recently switched to pg 12. You\n> search for \"recovery.conf\" or \"primary_slot_name\" or \"standby_mode\" or\n> something. You of course land up at https://www.postgresql.org/docs/11/\n> recovery-config.html or another version.\n> \n> What do you do now? There's no \"12\" or \"current\" link for your version. You\n> don't follow postgres development closely, and you have no idea we removed the\n> file. You might work it out from the release notes. But even then, if you try\n> searching for \"standby_mode\" in the updated docs will tell you basically\n> nothing, it has just vanished\n> \n> Also by simply deleting the page, we've broken web links to https://\n> www.postgresql.org/docs/current/recovery-config.html\n> \n> So there are three really good reasons:\n> \n> * Help users of the web docs navigate to the right place when we remove things\n> * Don't break web links (breaking links without redirects is bad, the web is\n> sad)\n> * Help upgrading users who know the old terms find the new terms\n> \n> I'd welcome your suggestions on other ways to arrange this, so long as it meets\n> the basic requirement \"retain the linktable target 'recovery-config' \"\n\nThis is certainly not the first or last time we will rename things. \nFortunately, this has already been discussed in the renaming of default\nroles to predefined roles:\n\n\thttps://www.postgresql.org/message-id/flat/157742545062.1149.11052653770497832538%40wrigleys.postgresql.org\n\nThis naming change has not happened yet, even though the issue is 11\nmonths old, but the agreed-upon way to handle this is to use a website\nredirect that links to the new text. 
You can add a \"tip\" there so they\nunderstand the renaming has happened.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 09:30:22 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Thu, Nov 12, 2020 at 10:21:02AM +0800, Craig Ringer wrote:\n> > Here's how the rendered docs look: https://imgur.com/a/VyjzEw5\n> > \n> > Think. You're used to recovery.conf. You've recently switched to pg 12. You\n> > search for \"recovery.conf\" or \"primary_slot_name\" or \"standby_mode\" or\n> > something. You of course land up at https://www.postgresql.org/docs/11/\n> > recovery-config.html or another version.\n> > \n> > What do you do now? There's no \"12\" or \"current\" link for your version. You\n> > don't follow postgres development closely, and you have no idea we removed the\n> > file. You might work it out from the release notes. But even then, if you try\n> > searching for \"standby_mode\" in the updated docs will tell you basically\n> > nothing, it has just vanished\n> > \n> > Also by simply deleting the page, we've broken web links to https://\n> > www.postgresql.org/docs/current/recovery-config.html\n> > \n> > So there are three really good reasons:\n> > \n> > * Help users of the web docs navigate to the right place when we remove things\n> > * Don't break web links (breaking links without redirects is bad, the web is\n> > sad)\n> > * Help upgrading users who know the old terms find the new terms\n> > \n> > I'd welcome your suggestions on other ways to arrange this, so long as it meets\n> > the basic requirement \"retain the linktable target 'recovery-config' \"\n> \n> This is certainly not the first or last time we will rename things. 
\n\nIndeed, we've renamed things a number of times before, eg:\n\nhttps://www.postgresql.org/docs/current/pgwaldump.html\n\nwhere the 9.6 link goes to:\n\nhttps://www.postgresql.org/docs/9.6/pgxlogdump.html\n\nand the 'current' link from the 9.6 page goes to the pgwaldump page,\nwhich all works pretty well, if all you're looking at is our\ndocumentation and not considering external links into the documentation.\n\nHowever, that isn't what Craig's raising a concern over here (at least,\nnot exclusively), it's this issue:\n\nhttps://www.postgresql.org/docs/current/pgxlogdump.html\n\nWhich currently goes to a 404.\n\nNow, the pgweb feature that Jonathan wrote recently might actually be\nexactly what we need to fix that, and to address the issue with\nrecovery config documentation that Craig raises.\n\n> Fortunately, this has already been discussed in the renaming of default\n> roles to predefined roles:\n> \n> \thttps://www.postgresql.org/message-id/flat/157742545062.1149.11052653770497832538%40wrigleys.postgresql.org\n> \n> This naming change has not happened yet, even though the issue is 11\n> months old, but the agreed-upon way to handle this is to use a website\n> redirect that links to the new text. You can add a \"tip\" there so they\n> understand the renaming has happened.\n\nThat rename will suffer the same problem that Craig is concerned about\nhere regarding the 'current' link, once it's done. I tend to agree with\nCraig that it'd be good to improve on this situation, and I've reached\nout to Jonathan to ask about using his new feature to have those\n/current/ links redirect to the renamed page. I'm actually wondering if\nmaybe we should just always do that for all the dog page aliases..\n\nMight make more sense to discuss this over on -www though.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 12 Nov 2020 09:47:42 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > On Thu, Nov 12, 2020 at 10:21:02AM +0800, Craig Ringer wrote:\n> > > Here's how the rendered docs look: https://imgur.com/a/VyjzEw5\n> > > \n> > > Think. You're used to recovery.conf. You've recently switched to pg 12. You\n> > > search for \"recovery.conf\" or \"primary_slot_name\" or \"standby_mode\" or\n> > > something. You of course land up at https://www.postgresql.org/docs/11/\n> > > recovery-config.html or another version.\n> > > \n> > > What do you do now? There's no \"12\" or \"current\" link for your version. You\n> > > don't follow postgres development closely, and you have no idea we removed the\n> > > file. You might work it out from the release notes. But even then, if you try\n> > > searching for \"standby_mode\" in the updated docs will tell you basically\n> > > nothing, it has just vanished\n> > > \n> > > Also by simply deleting the page, we've broken web links to https://\n> > > www.postgresql.org/docs/current/recovery-config.html\n> > > \n> > > So there are three really good reasons:\n> > > \n> > > * Help users of the web docs navigate to the right place when we remove things\n> > > * Don't break web links (breaking links without redirects is bad, the web is\n> > > sad)\n> > > * Help upgrading users who know the old terms find the new terms\n> > > \n> > > I'd welcome your suggestions on other ways to arrange this, so long as it meets\n> > > the basic requirement \"retain the linktable target 'recovery-config' \"\n> > \n> > This is certainly not the first or last time we will rename things. 
\n> \n> Indeed, we've renamed things a number of times before, eg:\n> \n> https://www.postgresql.org/docs/current/pgwaldump.html\n> \n> where the 9.6 link goes to:\n> \n> https://www.postgresql.org/docs/9.6/pgxlogdump.html\n> \n> and the 'current' link from the 9.6 page goes to the pgwaldump page,\n> which all works pretty well, if all you're looking at is our\n> documentation and not considering external links into the documentation.\n> \n> However, that isn't what Craig's raising a concern over here (at least,\n> not exclusively), it's this issue:\n> \n> https://www.postgresql.org/docs/current/pgxlogdump.html\n> \n> Which currently goes to a 404.\n> \n> Now, the pgweb feature that Jonathan wrote recently might actually be\n> exactly what we need to fix that, and to address the issue with\n> recovery config documentation that Craig raises.\n\nAfter chatting with Jonathan about this for a bit and testing it out in\nour test environment, I've gone ahead and added an entry for\npgxlogdump.html to redirect to pgwaldump.html, and that seems to be\nworking well.\n\nWith that then- Craig, can you look at how the pgxlogdump -> pgwaldump\npages work and see if using that would address the concerns you've\nraised here..?\n\nThough we need to decide which page 'recovery-config' should go to in\nnewer versions.\n\nI'm continuing to chat with Jonathan about if it'd make sense to do the\nsame for the other doc aliases.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 12 Nov 2020 10:25:13 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 09:47:42AM -0500, Stephen Frost wrote:\n> > Fortunately, this has already been discussed in the renaming of default\n> > roles to predefined roles:\n> > \n> > \thttps://www.postgresql.org/message-id/flat/157742545062.1149.11052653770497832538%40wrigleys.postgresql.org\n> > \n> > This naming change has not happened yet, even though the issue is 11\n> > months old, but the agreed-upon way to handle this is to use a website\n> > redirect that links to the new text. You can add a \"tip\" there so they\n> > understand the renaming has happened.\n> \n> That rename will suffer the same problem that Craig is concerned about\n> here regarding the 'current' link, once it's done. I tend to agree with\n> Craig that it'd be good to improve on this situation, and I've reached\n> out to Jonathan to ask about using his new feature to have those\n> /current/ links redirect to the renamed page. I'm actually wondering if\n> maybe we should just always do that for all the dog page aliases..\n> \n> Might make more sense to discuss this over on -www though.\n\nYes, I am thinking someone could go back and add redirects for previous\nrenames too. It would be interesting also to scrape the web logs for\n404 errors to see which renames cause the most failures and do those\nfirst.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 10:27:06 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 11:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> > Now, the pgweb feature that Jonathan wrote recently might actually be\n> > exactly what we need to fix that, and to address the issue with\n> > recovery config documentation that Craig raises.\n>\n> After chatting with Jonathan about this for a bit and testing it out in\n> our test environment, I've gone ahead and added an entry for\n> pgxlogdump.html to redirect to pgwaldump.html, and that seems to be\n> working well.\n>\n\nThanks.\n\nWith that then- Craig, can you look at how the pgxlogdump -> pgwaldump\n> pages work and see if using that would address the concerns you've\n> raised here..?\n>\n\n> Though we need to decide which page 'recovery-config' should go to in\n> newer versions.\n>\n\nSince we basically vanished all evidence of the old configuration, I don't\nthink there is a suitable place.\n\nI maintain that simply vanishing terms from the docs without any sort of\nexplanation is a user-hostile action that we should fix and stop doing If\nwe had something in the docs and we remove it, it's not unduly burdensome\nto have some index entries that point to the replacement/renamed terms, and\na short appendix entry explaining what happened.\n\nIf that is for some reason unacceptable (and I don't see anyone giving any\nactual reason why) the closest I can come up with is probably redirecting\nto\nhttps://www.postgresql.org/docs/current/warm-standby.html#STANDBY-SERVER-OPERATION\n. But that needs to be fixed to actually explicitly state what makes a\nstandby server into a standby server (per my patch), since right now it\njust kind of assumes you know about standby.signal .\n\nBut... fiddling with the website addresses none of my other concerns. In\nparticular, it doesn't help a user understand that \"standby_mode\" is gone\nand to look for \"standby.signal\" instead. It doesn't provide any \"see also\"\npointers for old terms to point to new terms in the index. 
Website\nredirects won't help users with local copies of the docs or manpages who\nare wondering what the heck happened to recovery.conf and standby_mode\neither.\n\nSo I still think this needs a docs patch. Redirects on the website are not\nsufficient. If you don't like how I spelled it, consider calling it\n\"important incompatible changes\" or something.\n\nThe release notes are IMO not sufficient for this because (a) they don't\nappear in the index; (b) you have to know something has been\nremoved/changed before you know to look in the relnotes for it; (c) you\nhave to find the relnotes for the correct release to find the info you\nwant. An appendix covering important renamings, removals and other\nincompatible changes would address all those points *and* fix the web\nlinks, man page names, etc.\n\nCan anyone tell me why the solution I proposed is not acceptable, and why\nwe have to invent a different one instead? The website redirect is good\nand all, but doesn't really solve the problem, and I still don't know\nwhat's wrong with just fixing the docs...",
"msg_date": "Fri, 13 Nov 2020 11:31:24 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 11:31 AM Craig Ringer <craig.ringer@enterprisedb.com>\nwrote:\n\n>\n> Can anyone tell me why the solution I proposed is not acceptable, and why\n> we have to invent a different one instead? The website redirect is good\n> and all, but doesn't really solve the problem, and I still don't know\n> what's wrong with just fixing the docs...\n>\n\nAlso, while I'm at it, note that a search on common search engines for\n\"postgres standby\" takes you to (an old version of) the hot standby docs.\nFollow the link to the current docs. Then try to work out from there what\nexactly makes a server in \"archive recovery\" or \"standby mode\".\n\nhttps://www.postgresql.org/docs/current/hot-standby.html\n\nWe need some <link ....> terms on \"archive recovery\" and \"standby mode\"\nthere, and probably other places.\n\nI have a draft patch that adds them and various related index\ncross-referencing in my tree to submit after the recovery.conf docs patch.\nLet me know if you think that might be worthwhile, 'cos I won't invest time\nin it if it's going to get reflexively blocked too.\n\nOn Fri, Nov 13, 2020 at 11:31 AM Craig Ringer <craig.ringer@enterprisedb.com> wrote:Can anyone tell me why the solution I proposed is not acceptable, and why we have to invent a different one instead? The website redirect is good and all, but doesn't really solve the problem, and I still don't know what's wrong with just fixing the docs...Also, while I'm at it, note that a search on common search engines for \"postgres standby\" takes you to (an old version of) the hot standby docs. Follow the link to the current docs. 
Then try to work out from there what exactly makes a server in \"archive recovery\" or \"standby mode\".https://www.postgresql.org/docs/current/hot-standby.htmlWe need some <link ....> terms on \"archive recovery\" and \"standby mode\" there, and probably other places.I have a draft patch that adds them and various related index cross-referencing in my tree to submit after the recovery.conf docs patch. Let me know if you think that might be worthwhile, 'cos I won't invest time in it if it's going to get reflexively blocked too.",
"msg_date": "Fri, 13 Nov 2020 11:37:16 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 11:31:24AM +0800, Craig Ringer wrote:\n> Can anyone tell me why the solution I proposed is not acceptable, and why we\n> have to invent a different one instead? The website� redirect is good and all,\n> but doesn't really solve the problem, and I still don't know what's wrong with\n> just fixing the docs...\n\nBecause at a certain point the number of _old_ names in the docs\nobscures exactly how to operate the current software. We have tried\nkeeping stuff around, and we are very bad at removing stuff.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 22:39:52 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Thu, 12 Nov 2020 at 22:31, Craig Ringer <craig.ringer@enterprisedb.com>\nwrote:\n\n\n> I maintain that simply vanishing terms from the docs without any sort of\n> explanation is a user-hostile action that we should fix and stop doing If\n> we had something in the docs and we remove it, it's not unduly burdensome\n> to have some index entries that point to the replacement/renamed terms, and\n> a short appendix entry explaining what happened.\n>\n\nThis sounds very reasonable to me. I would add that while I am by no means\nan expert in Postgres, although I do know a few things, I will state that\nit is my professional opinion as a Web person that pages should not simply\ndisappear from formal documentation without some sort of indication of what\nhappened. There are lots of ways to accomplish an indication but for\nhttps://www.postgresql.org/docs/current/recovery-config.html or other pages\nto just disappear is definitely wrong.\n\nOn Thu, 12 Nov 2020 at 22:31, Craig Ringer <craig.ringer@enterprisedb.com> wrote: I maintain that simply vanishing terms from the docs without any sort of explanation is a user-hostile action that we should fix and stop doing If we had something in the docs and we remove it, it's not unduly burdensome to have some index entries that point to the replacement/renamed terms, and a short appendix entry explaining what happened.This sounds very reasonable to me. I would add that while I am by no means an expert in Postgres, although I do know a few things, I will state that it is my professional opinion as a Web person that pages should not simply disappear from formal documentation without some sort of indication of what happened. There are lots of ways to accomplish an indication but for https://www.postgresql.org/docs/current/recovery-config.html or other pages to just disappear is definitely wrong.",
"msg_date": "Thu, 12 Nov 2020 22:40:56 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 11:37:16AM +0800, Craig Ringer wrote:\n> I have a draft patch that adds them and various related index cross-referencing\n> in my tree to submit after the recovery.conf docs patch. Let me know if you\n> think that might be worthwhile, 'cos I won't invest time in it if it's going to\n> get reflexively blocked too.\n\nSo you are saying you don't think you are getting sufficient thought\ninto your proposal, and getting just a reflex? Just because we don't\nagree with you don't mean we didn't think about it. In fact, we have\nthought about it a lot, which is evident from the URL I sent you\nalready.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 22:41:49 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Thu, 12 Nov 2020 at 22:40, Bruce Momjian <bruce@momjian.us> wrote:\n\nBecause at a certain point the number of _old_ names in the docs\n> obscures exactly how to operate the current software. We have tried\n> keeping stuff around, and we are very bad at removing stuff.\n>\n\nThis is a good point, but does not attempt to explain why pages should\ndisappear entirely from /docs/current/. As I said in my previous comment,\nthere are lots of ways of doing this right. For example, we could have\npages that would disappear instead be replaced by a short page that\nexplains the page is removed and points to the current documentation of the\nequivalent or replacement features; these hypothetical \"useful 404\" (or,\nmore correctly, \"useful 410\") pages don't even necessarily have to be\nlisted in the table of contents. In fact, serving them with a 410 HTTP\nstatus code would be a reasonable thing to do.\n\nOn Thu, 12 Nov 2020 at 22:40, Bruce Momjian <bruce@momjian.us> wrote:\nBecause at a certain point the number of _old_ names in the docs\nobscures exactly how to operate the current software. We have tried\nkeeping stuff around, and we are very bad at removing stuff.\nThis is a good point, but does not attempt to explain why pages should disappear entirely from /docs/current/. As I said in my previous comment, there are lots of ways of doing this right. For example, we could have pages that would disappear instead be replaced by a short page that explains the page is removed and points to the current documentation of the equivalent or replacement features; these hypothetical \"useful 404\" (or, more correctly, \"useful 410\") pages don't even necessarily have to be listed in the table of contents. In fact, serving them with a 410 HTTP status code would be a reasonable thing to do.",
"msg_date": "Thu, 12 Nov 2020 22:44:43 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 10:41:49PM -0500, Bruce Momjian wrote:\n> On Fri, Nov 13, 2020 at 11:37:16AM +0800, Craig Ringer wrote:\n> > I have a draft patch that adds them and various related index cross-referencing\n> > in my tree to submit after the recovery.conf docs patch. Let me know if you\n> > think that might be worthwhile, 'cos I won't invest time in it if it's going to\n> > get reflexively blocked too.\n> \n> So you are saying you don't think you are getting sufficient thought\n> into your proposal, and getting just a reflex? Just because we don't\n> agree with you don't mean we didn't think about it. In fact, we have\n> thought about it a lot, which is evident from the URL I sent you\n> already.\n\nWhat would be interesting, I think you were suggesting this, is a\nseparate doc chapter that had a list of all the renames, what version\nthey were renamed in, and a link from their to the new name in the docs.\nThis could be easily created by reading the old release notes. Anyone\nlooking for old names would automatically be sent to that page in the\ndocs. This would give us a definitive list, and make the list out of\nthe main flow of the docs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 22:50:31 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 11:50 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n\n> > So you are saying you don't think you are getting sufficient thought\n> > into your proposal, and getting just a reflex? Just because we don't\n> > agree with you don't mean we didn't think about it. In fact, we have\n> > thought about it a lot, which is evident from the URL I sent you\n> > already.\n>\n\nI am mostly trying to say that I don't think the issues I raised were\nactually addressed in the proposed alternatives. I put in a fair bit of\neffort to clearly set out the problem that this is meant to solve, and was\nfrustrated to perceive the response as \"yeah, nah, lets just do this other\nthing that only addresses one part of the original issue.\" It wasn't clear\nwhy my proposal appeared to be being rejected. Perhaps I didn't fully grasp\nthe context of the linked discussion.\n\nPlease review the docs on standbys with a \"new user\" hat on. It's confusing\n( though the new front-matter and definitions in the HA chapter help) even\nwithout upgrade considerations. See how long it takes you to determine the\nanswer to the question \"what exactly puts a server into 'standby mode' \" ?\n\nThis proposal was intended to address one part of that, stemming directly\nfrom my own challenges with the docs when I as an experienced PostgreSQL\nuser and contributor went to adapt some tests to Pg 12 and 13. I knew we'd\nremoved recovery.conf, but for the life of me I couldn't remember how to\nput the server in standby mode in 12 or 13 at the time (I've been working\nwith 11 too much lately)... and it took ages to actually find that in the\ndocs.\n\nI can be pretty dense sometimes, but if it sent me for a loop it's going to\nconfuse others a great deal. Two things that would've helped me would've\nbeen some cross references to the old configuration terms, and a\nnon-vanishing documentation URL for newer versions. 
Hence this proposal.\n\nWhat would be interesting, I think you were suggesting this, is a\n> separate doc chapter that had a list of all the renames, what version\n> they were renamed in, and a link from their to the new name in the docs.\n>\n\nRight, that is exactly what I am suggesting we add, as an appendix so it's\nway out of the way of the main flow of the docs. Per the original patch and\nthe illustrative screenshots. I called it something like \"removed and\nrenamed features and settings\" or something in the proposed patch.\nAlternatives would be welcomed, I don't like the name much.\n\nThis could be easily created by reading the old release notes. Anyone\n> looking for old names would automatically be sent to that page in the\n> docs. This would give us a definitive list, and make the list out of\n> the main flow of the docs.\n>\n\nExactly. Plus a few <indexterm>s where appropriate. That's pretty much all\nI'm after.",
"msg_date": "Fri, 13 Nov 2020 13:27:34 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 01:27:34PM +0800, Craig Ringer wrote:\n> On Fri, Nov 13, 2020 at 11:50 AM Bruce Momjian <bruce@momjian.us> wrote:\n> �\n> \n> > So you are saying you don't think you are getting sufficient thought\n> > into your proposal, and getting just a reflex?� Just because we don't\n> > agree with you don't mean we didn't think about it.� In fact, we have\n> > thought about it a lot, which is evident from the URL I sent you\n> > already.\n> \n> \n> I am mostly trying to say that I don't think the issues I raised were actually\n> addressed in the proposed alternatives. I put in a fair bit of effort to\n> clearly set out the problem that this is meant to solve, and was frustrated to\n> perceive the response as \"yeah, nah, lets just do this other thing that only\n> addresses one part of the original issue.\" It wasn't clear why my proposal\n> appeared to be being rejected. Perhaps I didn't fully grasp the context of the\n> linked discussion.\n\nI think the big problem, and I have seen this repeatedly, is showing up\nwith a patch without discussing whether people actually want the\nfeature. I know it is a doc issue, but our TODO list has the order as:\n\n\tDesirability -> Design -> Implement -> Test -> Review -> Commit\n\nand there is a reason for that. When you appear with a patch, you are\nalready far down the steps, and you have to back up to explain why it is\nuseful.\n\nClearly we have need for documenting these renamings somewhere. We were\ngoing to go with a simple URL redirect and a \"tip\" for\ndefault/pre-installed roles, but I like the idea of doing something more\nwholistic that covers all of our recent renaming cases. Let's get\nbuy-in from that, and then someone can work on a patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 13 Nov 2020 12:42:30 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 10:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> I think the big problem, and I have seen this repeatedly, is showing up\n> with a patch without discussing whether people actually want the\n> feature. I know it is a doc issue, but our TODO list has the order as:\n>\n> Desirability -> Design -> Implement -> Test -> Review -> Commit\n\n\n> and there is a reason for that. When you appear with a patch, you are\n> already far down the steps, and you have to back up to explain why it is\n> useful.\n>\n\nThat process is designed to prevent people from being exposed to wasted\neffort and hard feelings. The choice to follow it individually, instead of\ncollectively, doesn't diminish the value of the end result.\n\nI generally agree with Craig's proposed solution here. It doesn't add any\ncognitive load to new users as they will not see the obsolete features\nappendix in the normal course of their reading.\n\nTo the particular point regarding renaming features - this situation is not\nan instance of a rename but rather a feature removal. To blindly apply the\nreasoning and decision made for renaming to removal is not reasonable.\n From that observation (and the commentary below) extends the conclusion\nthat this appendix shouldn't include renaming.\n\nOn the point of renaming, my suggestion would be to have the documentation\ndirectory provide a file of all renaming for which redirects should be\nperformed. pgweb would source that file and actually establish the\nredirects on the main website. Comments in the file can describe to a\ncurious user why the name change was needed. Though that honestly seems a\nbit overkill; for rename, the content as a whole still exists and a comment\ntherein can talk about the renaming. 
Users of the public website would\nstill get the benefit of redirects, and there isn't any practical reason\nfor people building documentation from source to want to establish such\nredirects even if they were provided the data in the form of the\naforementioned file.\n\nI believe there is probably room for more discussion regarding the value of\nproviding a limited view of history in the publicly facing documentation\nbut that seems outside the scope of this patch.\n\nDavid J.",
"msg_date": "Sat, 14 Nov 2020 22:48:57 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Sun, Nov 15, 2020 at 1:49 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Fri, Nov 13, 2020 at 10:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> I think the big problem, and I have seen this repeatedly, is showing up\n>> with a patch without discussing whether people actually want the\n>> feature. I know it is a doc issue, but our TODO list has the order as:\n>>\n>> Desirability -> Design -> Implement -> Test -> Review -> Commit\n>\n>\n>> and there is a reason for that. When you appear with a patch, you are\n>> already far down the steps, and you have to back up to explain why it is\n>> useful.\n>>\n>\n> That process is designed to prevent people from being exposed to wasted\n> effort and hard feelings. The choice to follow it individually, instead of\n> collectively, doesn't diminish the value of the end result.\n>\n\nFrankly, it's also kind of a catch-22. Because often proposals for changes\nor discussion get ignored until there's a patch, or the response is along\nthe lines of \"show us a patch so we can try it and get a solid idea of what\nthis will do.\"\n\nFor major engineering changes, yes, discuss first. For small stuff, if you\ndon't want to get ignored, open with a patch.\n\nOn the point of renaming, my suggestion would be to have the documentation\n> directory provide a file of all renaming for which redirects should be\n> performed. pgweb would source that file and actually establish the\n> redirects on the main website. Comments in the file can describe to a\n> curious user why the name change was needed. Though that honestly seems a\n> bit overkill; for rename, the content as a whole still exists and a comment\n> therein can talk about the renaming. 
Users of the public website would\n> still get the benefit of redirects, and there isn't any practical reason\n> for people building documentation from source to want to establish such\n> redirects even if they were provided the data in the form of the\n> aforementioned file.\n>\n\nAgreed, there's no need to keep heading redirects in the source-built docs.\nSo if we're happy to maintain that on the website, in a way that makes\n/current/ links work *and* following links from a /11/ docs page that has\nvanished in pg12 to the right new place via the version navigation links,\nthat's sufficient for topic-heading renames.\n\nWe should, however, carry information about removals and renames in the\nsource-built docs to the extent that we have appropriate \"see also\" index\nentries and useful information somewhere in the docs for people who are\nupgrading. It just doesn't have to retain the same subject heading.",
"msg_date": "Mon, 16 Nov 2020 09:46:35 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Sat, Nov 14, 2020 at 1:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> Clearly we have need for documenting these renamings somewhere. We were\n> going to go with a simple URL redirect and a \"tip\" for\n> default/pre-installed roles, but I like the idea of doing something more\n> wholistic that covers all of our recent renaming cases. Let's get\n> buy-in from that, and then someone can work on a patch.\n>\n\nIs there anything further I can do to address this specific documentation\nissue?\n\nCan I get you to consider the user experience arising from the current docs\n- try using the docs to find out how to set up a standby?\n\nI'm not prepared to expand the scope of this to do a major review of all\npast renamings and changes without a very clear agreement about how that\nshould be implemented in the docs and how that will address all the\nrelevant UX issues - vanishing terms from indexes and docs without\nx-refs/see-alsos, vanishing URLs endpoints for public links, vanishing\nnext-version links in www docs.\n\nI didn't raise this for discussion before I submitted a patch because I\nthought it was such an obvious no-brainer that a simple patch to address an\nobviously confusing aspect of the docs after the recovery.conf removal\nwould be uncontroversial. Anyway, as I've noted, these things often get\nignored until there is a patch to argue about.\n\nCan we please just address this docs issue? If you don't like my solution\ncan you please supply a patch that you feel addresses the problem? Or\nclearly state that you don't think there is a problem, and do so in a way\nthat actually addresses the specific points I have raised about what's\nwrong with the status quo?",
"msg_date": "Mon, 30 Nov 2020 10:11:04 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Craig Ringer (craig.ringer@enterprisedb.com) wrote:\n> On Sat, Nov 14, 2020 at 1:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > Clearly we have need for documenting these renamings somewhere. We were\n> > going to go with a simple URL redirect and a \"tip\" for\n> > default/pre-installed roles, but I like the idea of doing something more\n> > wholistic that covers all of our recent renaming cases. Let's get\n> > buy-in from that, and then someone can work on a patch.\n> \n> Is there anything further I can do to address this specific documentation\n> issue?\n> \n> Can I get you to consider the user experience arising from the current docs\n> - try using the docs to find out how to set up a standby?\n\nFor my 2c, at least, I'm in favor of making some kind of change here to\nmake things better for users. I tried to figure out a way using the\nfeatures we have easily available in pgweb, but I tend to agree with\nCraig that those just aren't enough for the more recent set of changes\nthat we've made.\n\n> I'm not prepared to expand the scope of this to do a major review of all\n> past renamings and changes without a very clear agreement about how that\n> should be implemented in the docs and how that will address all the\n> relevant UX issues - vanishing terms from indexes and docs without\n> x-refs/see-alsos, vanishing URLs endpoints for public links, vanishing\n> next-version links in www docs.\n\nThe past renamings haven't really been as much of an issue since the\nredirects and such, imv anyway, have been sufficient to deal with them.\n\n> Can we please just address this docs issue? If you don't like my solution\n> can you please supply a patch that you feel addresses the problem? 
Or\n> clearly state that you don't think there is a problem, and do so in a way\n> that actually addresses the specific points I have raised about what's\n> wrong with the status quo?\n\nWhile I understand not wanting to go back through and check for all\nrenamings, it seems like we should probably at least list out the ones\nwe know about pretty easily, if we're going to create a new section\nspecifically for those cases..? Or do we think the current approach\nworks well enough for those other cases but just not for this one?\n\nThink I listed this elsewhere but not seeing it on the thread so I'll\ninclude it here anyway. These are the current doc aliases:\n\ncatalog-pg-replication-slots.html <-> view-pg-replication-slots.html\npgxlogdump.html <-> pgwaldump.html\napp-pgresetxlog.html <-> app-pgresetwal.html\nlegalnotice.html <-> LEGALNOTICE.html\napp-pgreceivexlog.html <-> app-pgreceivewal.html\n\nAs relates to the specific patch, I don't think the comments line up\nquite right (we can prevent a 404 from happening through other means,\nbut the point of the patch is really to give a deeper explanation of\nwhat happened). Also- wrt pg_basebackup, isn't that expected to support\nolder versions, and so we should document the behavior against older\nversions..?\n\nThanks,\n\nStephen",
"msg_date": "Sun, 29 Nov 2020 21:25:41 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 10:11:04AM +0800, Craig Ringer wrote:\n> Can we please just address this docs issue? If you don't like my solution can\n> you please supply a patch that you feel addresses the problem? Or clearly state\n> that you don't think there is a problem, and do so in a way that actually\n> addresses the specific points I have raised about what's wrong with the status\n> quo?\n\nIf we know there are X problems, and we fix one of them one way, then\nlater fix the rest another way, we have to undo the first fix. If you\ndon't want to fix all X, then let's wait until someone does want to fix\nthem all.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 30 Nov 2020 13:25:35 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 11:25 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Nov 30, 2020 at 10:11:04AM +0800, Craig Ringer wrote:\n> > Can we please just address this docs issue? If you don't like my\n> solution can\n> > you please supply a patch that you feel addresses the problem? Or\n> clearly state\n> > that you don't think there is a problem, and do so in a way that actually\n> > addresses the specific points I have raised about what's wrong with the\n> status\n> > quo?\n>\n> If we know there are X problems, and we fix one of them one way, then\n> later fix the rest another way, we have to undo the first fix. If you\n> don't want to fix all X, then let's wait until someone does want to fix\n> them all.\n>\n>\nIMO there is only the original problem with an acceptable solution\npresented that can be committed without downside. If that has to be undone\nbecause someone else in the future decides on a different solution that\nhappens to touch this too, fine, it can be changed again.\n\nDavid J.",
"msg_date": "Mon, 30 Nov 2020 11:31:35 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 11:31:35AM -0700, David G. Johnston wrote:\n> On Mon, Nov 30, 2020 at 11:25 AM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Mon, Nov 30, 2020 at 10:11:04AM +0800, Craig Ringer wrote:\n> > Can we please just address this docs issue? If you don't like my solution\n> can\n> > you please supply a patch that you feel addresses the problem? Or clearly\n> state\n> > that you don't think there is a problem, and do so in a way that actually\n> > addresses the specific points I have raised about what's wrong with the\n> status\n> > quo?\n> \n> If we know there are X problems, and we fix one of them one way, then\n> later fix the rest another way, we have to undo the first fix. If you\n> don't want to fix all X, then let's wait until someone does want to fix\n> them all.\n> \n> IMO there is only the original problem with an acceptable solution presented\n> that can be committed without downside. If that has to be undone because\n> someone else in the future decides on a different solution that happens to\n> touch this too, fine, it can be changed again.\n\nThe downside is you end up with X-1 dummy sections just to allow for\nreferences to old syntax, and you then have to find them all and remove\nthem when you implement the proper solution. I have no intention of\napplying such an X-1 fix.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 30 Nov 2020 13:42:07 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 11:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> The downside is you end up with X-1 dummy sections just to allow for\n> references to old syntax, and you then have to find them all and remove\n> them when you implement the proper solution. I have no intention of\n> applying such an X-1 fix.\n>\n>\nX = 2; seems like a strong objection for such a minor issue. The status\nquo seems worse than that.\n\nDavid J.",
"msg_date": "Mon, 30 Nov 2020 12:13:50 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* David G. Johnston (david.g.johnston@gmail.com) wrote:\n> On Mon, Nov 30, 2020 at 11:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > The downside is you end up with X-1 dummy sections just to allow for\n> > references to old syntax, and you then have to find them all and remove\n> > them when you implement the proper solution. I have no intention of\n> > applying such an X-1 fix.\n>\n> X = 2; seems like a strong objection for such a minor issue. The status\n> quo seems worse than that.\n\nI've been thinking about this and I think I'm on Craig and David's side-\nhaving something cleaner, and clearer, than just http redirects and such\nwould be good for these cases and I don't think we are going to end up\nwith so many of them that it ends up becoming an issue.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 2 Dec 2020 14:47:13 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 02:47:13PM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * David G. Johnston (david.g.johnston@gmail.com) wrote:\n> > On Mon, Nov 30, 2020 at 11:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > The downside is you end up with X-1 dummy sections just to allow for\n> > > references to old syntax, and you then have to find them all and remove\n> > > them when you implement the proper solution. I have no intention of\n> > > applying such an X-1 fix.\n> >\n> > X = 2; seems like a strong objection for such a minor issue. The status\n> > quo seems worse than that.\n> \n> I've been thinking about this and I think I'm on Craig and David's side-\n> having something cleaner, and clearer, than just http redirects and such\n> would be good for these cases and I don't think we are going to end up\n> with so many of them that it ends up becoming an issue.\n\nWe were not going to use just redirects --- we were going to create a\npage that had all the renames listed, with links to the new names.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 2 Dec 2020 15:26:22 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Wed, Dec 2, 2020 at 02:47:13PM -0500, Stephen Frost wrote:\n> > * David G. Johnston (david.g.johnston@gmail.com) wrote:\n> > > On Mon, Nov 30, 2020 at 11:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > The downside is you end up with X-1 dummy sections just to allow for\n> > > > references to old syntax, and you then have to find them all and remove\n> > > > them when you implement the proper solution. I have no intention of\n> > > > applying such an X-1 fix.\n> > >\n> > > X = 2; seems like a strong objection for such a minor issue. The status\n> > > quo seems worse than that.\n> > \n> > I've been thinking about this and I think I'm on Craig and David's side-\n> > having something cleaner, and clearer, than just http redirects and such\n> > would be good for these cases and I don't think we are going to end up\n> > with so many of them that it ends up becoming an issue.\n> \n> We were not going to use just redirects --- we were going to create a\n> page that had all the renames listed, with links to the new names.\n\nMaybe I'm the one who is confused here, but I thought there was\nobjection to adding a new section/page which covers these topics (which\nis what Craig's original patch does)...? If there isn't an objection to\nthat then it seems like we should move forward with it.\n\nIf I'm following correctly, maybe there was some idea that we should\nhave more things added to this section than just the recovery.conf bits,\nand perhaps we should, but that could certainly be done later. To be\nclear though, I don't think we need to do this in all cases- the\nexisting flow for pg_xlogdump -> pg_waldump works pretty well. 
Maybe we\nadd in a note here too if someone wants to but I don't think it's\nstrictly necessary for the 'simple' rename cases.\n\nI also feel like that could be done once the section gets added, if\nsomeone wants to.\n\nWas there something else that I'm missing here in terms of what the\nconcern is regarding Craig's patch..?\n\nThanks,\n\nStephen",
"msg_date": "Wed, 2 Dec 2020 17:57:01 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 05:57:01PM -0500, Stephen Frost wrote:\n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > We were not going to use just redirects --- we were going to create a\n> > page that had all the renames listed, with links to the new names.\n> \n> Maybe I'm the one who is confused here, but I thought there was\n> objection to adding a new section/page which covers these topics (which\n> is what Craig's original patch does)...? If there isn't an objection to\n> that then it seems like we should move forward with it.\n> \n> If I'm following correctly, maybe there was some idea that we should\n> have more things added to this section than just the recovery.conf bits,\n> and perhaps we should, but that could certainly be done later. To be\n> clear though, I don't think we need to do this in all cases- the\n> existing flow for pg_xlogdump -> pg_waldump works pretty well. Maybe we\n> add in a note here too if someone wants to but I don't think it's\n> strictly necessary for the 'simple' rename cases.\n> \n> I also feel like that could be done once the section gets added, if\n> someone wants to.\n> \n> Was there something else that I'm missing here in terms of what the\n> concern is regarding Craig's patch..?\n\nI think the ideal solution is to create a section for all the rename\ncases and do all the redirects to that page. The page would list the\nold and new name for each item, and would link to the section for each\nnew item.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 2 Dec 2020 19:26:58 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 5:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> I think the ideal solution is to create a section for all the rename\n> cases and do all the redirects to that page. The page would list the\n> old and new name for each item, and would link to the section for each\n> new item.\n>\n>\nNothing prevents us from doing that for simple renames. For me, this\nsituation is not a simple rename and the proposed solution is appropriate\nfor what it is - changing the implementation details of an existing\nfeature. We can do both - though the simple rename page doesn't seem\nparticularly appealing at first glance.\n\nDavid J.",
"msg_date": "Wed, 2 Dec 2020 17:33:12 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Wed, 2 Dec 2020 at 19:33, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Wed, Dec 2, 2020 at 5:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> I think the ideal solution is to create a section for all the rename\n>> cases and do all the redirects to that page. The page would list the\n>> old and new name for each item, and would link to the section for each\n>> new item.\n>>\n>>\n> Nothing prevents us from doing that for simple renames. For me, this\n> situation is not a simple rename and the proposed solution is appropriate\n> for what it is - changing the implementation details of an existing\n> feature. We can do both - though the simple rename page doesn't seem\n> particularly appealing at first glance.\n>\n\nI for one do not like following a bookmark or link and then being\nredirected to a generic page that doesn't relate to the specific link I was\nfollowing. What is being proposed here is not as bad as the usual, where\nall the old links simply turn into redirects to the homepage, but it's\nstill disorienting. I would much rather each removed page be moved to an\nappendix (without renaming) and edited to briefly explain what happened to\nthe page and provide links to the appropriate up-to-date page or pages.",
"msg_date": "Wed, 2 Dec 2020 20:07:47 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Wed, Dec 2, 2020 at 08:07:47PM -0500, Isaac Morland wrote:\n> On Wed, 2 Dec 2020 at 19:33, David G. Johnston <david.g.johnston@gmail.com>\n> wrote:\n> \n> On Wed, Dec 2, 2020 at 5:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> I think the ideal solution is to create a section for all the rename\n> cases and do all the redirects to that page. The page would list the\n> old and new name for each item, and would link to the section for each\n> new item.\n> \n> \n> \n> Nothing prevents us from doing that for simple renames. For me, this\n> situation is not a simple rename and the proposed solution is appropriate\n> for what it is - changing the implementation details of an existing\n> feature. We can do both - though the simple rename page doesn't seem\n> particularly appealing at first glance.\n> \n> \n> I for one do not like following a bookmark or link and then being redirected to\n> a generic page that doesn't relate to the specific link I was following. What\n> is being proposed here is not as bad as the usual, where all the old links\n> simply turn into redirects to the homepage, but it's still disorienting. I\n> would much rather each removed page be moved to an appendix (without renaming)\n> and edited to briefly explain what happened to the page and provide links to\n> the appropriate up-to-date page or pages.\n\nYes, that is pretty much the same thing I was suggesting, except that\neach rename has its own _original_ URL link, which I think is also\nacceptable. My desire is for these items to all exist in one place, and\nan appendix of them seems fine.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 2 Dec 2020 20:11:54 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Wed, Dec 2, 2020 at 08:07:47PM -0500, Isaac Morland wrote:\n> > On Wed, 2 Dec 2020 at 19:33, David G. Johnston <david.g.johnston@gmail.com>\n> > wrote:\n> > \n> > On Wed, Dec 2, 2020 at 5:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > I think the ideal solution is to create a section for all the rename\n> > cases and do all the redirects to that page. The page would list the\n> > old and new name for each item, and would link to the section for each\n> > new item.\n> > \n> > \n> > \n> > Nothing prevents us from doing that for simple renames. For me, this\n> > situation is not a simple rename and the proposed solution is appropriate\n> > for what it is - changing the implementation details of an existing\n> > feature. We can do both - though the simple rename page doesn't seem\n> > particularly appealing at first glance.\n> > \n> > \n> > I for one do not like following a bookmark or link and then being redirected to\n> > a generic page that doesn't relate to the specific link I was following. What\n> > is being proposed here is not as bad as the usual, where all the old links\n> > simply turn into redirects to the homepage, but it's still disorienting. I\n> > would much rather each removed page be moved to an appendix (without renaming)\n> > and edited to briefly explain what happened to the page and provide links to\n> > the appropriate up-to-date page or pages.\n> \n> Yes, that is pretty much the same thing I was suggesting, except that\n> each rename has its own _original_ URL link, which I think is also\n> acceptable. 
My desire is for these items to all exist in one place, and\n> an appendix of them seems fine.\n\nAlright, so, to try and move this forward I'll list out (again) the\nrenames that we have in pgweb:\n\ncatalog-pg-replication-slots.html <-> view-pg-replication-slots.html\npgxlogdump.html <-> pgwaldump.html\napp-pgresetxlog.html <-> app-pgresetwal.html\napp-pgreceivexlog.html <-> app-pgreceivewal.html\n\n(excluding the 'legal notice' one)\n\nBruce, are you saying that we need to take Craig's patch and then add to\nit entries for all of the above, effectively removing the need for the\nweb page aliases and redirects? If that was done, would that be\nsufficient to get this committed? Are there other things that people\ncan think of off-hand that we should include, I think Craig might have\nmentioned something else earlier on..? I don't think we should require\nthat someone troll through everything that ever existed, just to be\nclear, as we can always add to this later if other things come up. If\nthat's the expectation though, then someone needs to say so, in which\ncase I'll assume it's status quo unless/until someone steps up to do\nthat.\n\nObviously, I'd then have to adjust the patch that I proposed for default\nroles, or move forward with it as-is, depending on what we end up doing\nhere. I dislike what feels like a state of limbo for this right now\nthough.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 4 Dec 2020 14:00:23 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 12:00 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Obviously, I'd then have to adjust the patch that I proposed for default\n> roles, or move forward with it as-is, depending on what we end up doing\n> here. I dislike what feels like a state of limbo for this right now\n> though.\n>\n>\nWe have a committer + 1 in favor of status quo - or at least requiring that\na non-existing solution be created to move things forward. (Bruce, Daniel)\nWe have a committer + 3 that seem to agree that the proposed patch is\nacceptable as presented. (Stephen, Craig, Isaac, David J.)\nAnyone wish to update the above observation?\n\nStephen, are you will to commit this with the support of the community\nmembers who have spoken up here?\n\nDavid J.\n\nOn Fri, Dec 4, 2020 at 12:00 PM Stephen Frost <sfrost@snowman.net> wrote:Obviously, I'd then have to adjust the patch that I proposed for default\nroles, or move forward with it as-is, depending on what we end up doing\nhere. I dislike what feels like a state of limbo for this right now\nthough.We have a committer + 1 in favor of status quo - or at least requiring that a non-existing solution be created to move things forward. (Bruce, Daniel)We have a committer + 3 that seem to agree that the proposed patch is acceptable as presented. (Stephen, Craig, Isaac, David J.)Anyone wish to update the above observation?Stephen, are you will to commit this with the support of the community members who have spoken up here?David J.",
"msg_date": "Fri, 4 Dec 2020 12:14:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* David G. Johnston (david.g.johnston@gmail.com) wrote:\n> On Fri, Dec 4, 2020 at 12:00 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Obviously, I'd then have to adjust the patch that I proposed for default\n> > roles, or move forward with it as-is, depending on what we end up doing\n> > here. I dislike what feels like a state of limbo for this right now\n> > though.\n>\n> We have a committer + 1 in favor of status quo - or at least requiring that\n> a non-existing solution be created to move things forward. (Bruce, Daniel)\n> We have a committer + 3 that seem to agree that the proposed patch is\n> acceptable as presented. (Stephen, Craig, Isaac, David J.)\n> Anyone wish to update the above observation?\n> \n> Stephen, are you will to commit this with the support of the community\n> members who have spoken up here?\n\nWhat I was hoping to achieve is consensus on a reasonably well bounded\nsolution. I'm not going to push something that others are objecting to,\nbut if we can agree on the idea and what's needed for it to be\nacceptable then I'm willing to work towards that, provided it doesn't\nrequire going back 10 versions and looking at every single change.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 4 Dec 2020 14:28:16 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Fri, Dec 4, 2020 at 02:00:23PM -0500, Stephen Frost wrote:\n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > Yes, that is pretty much the same thing I was suggesting, except that\n> > each rename has its own _original_ URL link, which I think is also\n> > acceptable. My desire is for these items to all exist in one place, and\n> > an appendix of them seems fine.\n> \n> Alright, so, to try and move this forward I'll list out (again) the\n> renames that we have in pgweb:\n> \n> catalog-pg-replication-slots.html <-> view-pg-replication-slots.html\n> pgxlogdump.html <-> pgwaldump.html\n> app-pgresetxlog.html <-> app-pgresetwal.html\n> app-pgreceivexlog.html <-> app-pgreceivewal.html\n> \n> (excluding the 'legal notice' one)\n> \n> Bruce, are you saying that we need to take Craig's patch and then add to\n> it entries for all of the above, effectively removing the need for the\n> web page aliases and redirects? If that was done, would that be\n\nYes, I think putting the compatibility section headings in our main\ndocumentation flow will make it too hard to read and cause unnecessary\ncomplexity, but if we have a separate section for them, adding the\nsection headings seems fine. This way, we don't have to add a redirect\nevery time we add a new entry.\n\n> sufficient to get this committed? Are there other things that people\n> can think of off-hand that we should include, I think Craig might have\n> mentioned something else earlier on..? I don't think we should require\n> that someone troll through everything that ever existed, just to be\n> clear, as we can always add to this later if other things come up. If\n> that's the expectation though, then someone needs to say so, in which\n> case I'll assume it's status quo unless/until someone steps up to do\n> that.\n\nAgreed. 
I just wanted something that could scale going forward, and be\neasily identified as compatibility, so maybe one day we can remove them.\nHowever, if they are in a separate section, we might never do that.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 16 Dec 2020 17:29:16 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Fri, Dec 4, 2020 at 02:00:23PM -0500, Stephen Frost wrote:\n> > * Bruce Momjian (bruce@momjian.us) wrote:\n> > > Yes, that is pretty much the same thing I was suggesting, except that\n> > > each rename has its own _original_ URL link, which I think is also\n> > > acceptable. My desire is for these items to all exist in one place, and\n> > > an appendix of them seems fine.\n> > \n> > Alright, so, to try and move this forward I'll list out (again) the\n> > renames that we have in pgweb:\n> > \n> > catalog-pg-replication-slots.html <-> view-pg-replication-slots.html\n> > pgxlogdump.html <-> pgwaldump.html\n> > app-pgresetxlog.html <-> app-pgresetwal.html\n> > app-pgreceivexlog.html <-> app-pgreceivewal.html\n> > \n> > (excluding the 'legal notice' one)\n> > \n> > Bruce, are you saying that we need to take Craig's patch and then add to\n> > it entries for all of the above, effectively removing the need for the\n> > web page aliases and redirects? If that was done, would that be\n> \n> Yes, I think putting the compatibility section headings in our main\n> documentation flow will make it too hard to read and cause unnecessary\n> complexity, but if we have a separate section for them, adding the\n> section headings seems fine. This way, we don't have to add a redirect\n> every time we add a new entry.\n\nAlright, how does this look? The new entries are all under the\n'obsolete' section to keep it out of the main line, but should work to\n'fix' the links that currently 404 and provide a bit of a 'softer'\nlanding for the other cases that currently just forcibly redirect using\nthe website doc alias capability.\n\nI ended up not actually doing this for the catalog -> view change of\npg_replication_slots simply because I don't really think folks will\nmisunderstand or be confused by that redirect since it's still the same\nrelation. 
If others disagree though, we could certainly change that\ntoo.\n\n> > sufficient to get this committed? Are there other things that people\n> > can think of off-hand that we should include, I think Craig might have\n> > mentioned something else earlier on..? I don't think we should require\n> > that someone troll through everything that ever existed, just to be\n> > clear, as we can always add to this later if other things come up. If\n> > that's the expectation though, then someone needs to say so, in which\n> > case I'll assume it's status quo unless/until someone steps up to do\n> > that.\n> \n> Agreed. I just wanted something that could scale going forward, and be\n> easily identified as compatibility, so maybe one day we can remove them.\n> However, if they are in a separate section, we might never do that.\n\nSure, seems like this approach addresses that.\n\nIf we have agreement from folks on this then I'll commit it and then\nrework the change from default roles to predefined roles to use this\napproach and then we can move forward with that too.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 13 Jan 2021 14:44:50 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Thu, 14 Jan 2021 at 03:44, Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> Alright, how does this look? The new entries are all under the\n> 'obsolete' section to keep it out of the main line, but should work to\n> 'fix' the links that currently 404 and provide a bit of a 'softer'\n> landing for the other cases that currently just forcibly redirect using\n> the website doc alias capability.\n>\n\nThanks for expanding the change to other high profile obsoleted or renamed\nfeatures and tools.\n\nOne minor point. I'm not sure this is quite the best way to spell the index\nentries:\n\n+ <indexterm>\n+ <primary>obsolete</primary>\n+ <secondary>pg_receivexlog</secondary>\n+ </indexterm>\n\nas it will produce an index term \"obsolete\" with a list of various\ncomponents under it. While that concentrates them nicely, it means people\nwon't actually find them if they're using the index alphabetically.\n\nI'd slightly prefer\n\n\n+ <indexterm>\n+ <primary>pg_receivexlog</primary>\n+ <seealso>pg_receivewal</secondary>\n+ </indexterm>\n\neven though that bulks the index up a little, because then people are a bit\nmore likely to find it.\n\nYour extended and revised patch retains the above style for\n\n+ <indexterm>\n+ <primary>trigger_file</primary>\n+ <seealso>promote_trigger_file</seealso>\n+ </indexterm>\n...\n+ <indexterm>\n+ <primary>standby_mode</primary>\n+ <seealso>standby.signal</seealso>\n+ </indexterm>\n\nso if you intend to change it, that entry needs changing too.\n\n\n> I ended up not actually doing this for the catalog -> view change of\n> pg_replication_slots simply because I don't really think folks will\n> misunderstand or be confused by that redirect since it's still the same\n> relation. If others disagree though, we could certainly change that\n> too.\n>\n\nI agree with you.\n\nOn Thu, 14 Jan 2021 at 03:44, Stephen Frost <sfrost@snowman.net> wrote:\nAlright, how does this look? 
The new entries are all under the\n'obsolete' section to keep it out of the main line, but should work to\n'fix' the links that currently 404 and provide a bit of a 'softer'\nlanding for the other cases that currently just forcibly redirect using\nthe website doc alias capability.Thanks for expanding the change to other high profile obsoleted or renamed features and tools.One minor point. I'm not sure this is quite the best way to spell the index entries:+ <indexterm>+ <primary>obsolete</primary>+ <secondary>pg_receivexlog</secondary>+ </indexterm>as it will produce an index term \"obsolete\" with a list of various components under it. While that concentrates them nicely, it means people won't actually find them if they're using the index alphabetically.I'd slightly prefer+ <indexterm>+ <primary>pg_receivexlog</primary>+ <seealso>pg_receivewal</secondary>+ </indexterm>even though that bulks the index up a little, because then people are a bit more likely to find it. Your extended and revised patch retains the above style for + <indexterm>+ <primary>trigger_file</primary>+ <seealso>promote_trigger_file</seealso>+ </indexterm>...+ <indexterm>+ <primary>standby_mode</primary>+ <seealso>standby.signal</seealso>+ </indexterm>so if you intend to change it, that entry needs changing too.\n\nI ended up not actually doing this for the catalog -> view change of\npg_replication_slots simply because I don't really think folks will\nmisunderstand or be confused by that redirect since it's still the same\nrelation. If others disagree though, we could certainly change that\ntoo.I agree with you.",
"msg_date": "Mon, 18 Jan 2021 12:07:43 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Craig Ringer (craig.ringer@enterprisedb.com) wrote:\n> On Thu, 14 Jan 2021 at 03:44, Stephen Frost <sfrost@snowman.net> wrote:\n> > Alright, how does this look? The new entries are all under the\n> > 'obsolete' section to keep it out of the main line, but should work to\n> > 'fix' the links that currently 404 and provide a bit of a 'softer'\n> > landing for the other cases that currently just forcibly redirect using\n> > the website doc alias capability.\n> \n> Thanks for expanding the change to other high profile obsoleted or renamed\n> features and tools.\n\nThanks for taking the time to review it and comment on it!\n\n> One minor point. I'm not sure this is quite the best way to spell the index\n> entries:\n> \n> + <indexterm>\n> + <primary>obsolete</primary>\n> + <secondary>pg_receivexlog</secondary>\n> + </indexterm>\n> \n> as it will produce an index term \"obsolete\" with a list of various\n> components under it. While that concentrates them nicely, it means people\n> won't actually find them if they're using the index alphabetically.\n\nAh, yeah, that's definitely a good point and one that I hadn't really\nspent much time thinking about.\n\n> I'd slightly prefer\n> \n> + <indexterm>\n> + <primary>pg_receivexlog</primary>\n> + <seealso>pg_receivewal</secondary>\n> + </indexterm>\n> \n> even though that bulks the index up a little, because then people are a bit\n> more likely to find it.\n\nYup, makes sense, updated patch attached which makes that change.\n\n> > I ended up not actually doing this for the catalog -> view change of\n> > pg_replication_slots simply because I don't really think folks will\n> > misunderstand or be confused by that redirect since it's still the same\n> > relation. 
If others disagree though, we could certainly change that\n> > too.\n> \n> I agree with you.\n\nOk, great.\n\nHow does the attached look then?\n\nBruce, did you want to review or comment on this as to if it addresses\nyour concerns appropriately? Would be great to get this in as there's\nthe follow-on for default roles.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 19 Jan 2021 13:44:23 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Craig Ringer (craig.ringer@enterprisedb.com) wrote:\n> > On Thu, 14 Jan 2021 at 03:44, Stephen Frost <sfrost@snowman.net> wrote:\n> > > Alright, how does this look? The new entries are all under the\n> > > 'obsolete' section to keep it out of the main line, but should work to\n> > > 'fix' the links that currently 404 and provide a bit of a 'softer'\n> > > landing for the other cases that currently just forcibly redirect using\n> > > the website doc alias capability.\n> > \n> > Thanks for expanding the change to other high profile obsoleted or renamed\n> > features and tools.\n> \n> Thanks for taking the time to review it and comment on it!\n> \n> > One minor point. I'm not sure this is quite the best way to spell the index\n> > entries:\n> > \n> > + <indexterm>\n> > + <primary>obsolete</primary>\n> > + <secondary>pg_receivexlog</secondary>\n> > + </indexterm>\n> > \n> > as it will produce an index term \"obsolete\" with a list of various\n> > components under it. While that concentrates them nicely, it means people\n> > won't actually find them if they're using the index alphabetically.\n> \n> Ah, yeah, that's definitely a good point and one that I hadn't really\n> spent much time thinking about.\n> \n> > I'd slightly prefer\n> > \n> > + <indexterm>\n> > + <primary>pg_receivexlog</primary>\n> > + <seealso>pg_receivewal</secondary>\n> > + </indexterm>\n> > \n> > even though that bulks the index up a little, because then people are a bit\n> > more likely to find it.\n> \n> Yup, makes sense, updated patch attached which makes that change.\n> \n> > > I ended up not actually doing this for the catalog -> view change of\n> > > pg_replication_slots simply because I don't really think folks will\n> > > misunderstand or be confused by that redirect since it's still the same\n> > > relation. 
If others disagree though, we could certainly change that\n> > > too.\n> > \n> > I agree with you.\n> \n> Ok, great.\n> \n> How does the attached look then?\n> \n> Bruce, did you want to review or comment on this as to if it addresses\n> your concerns appropriately? Would be great to get this in as there's\n> the follow-on for default roles.\n\n... really attached now, sorry about that. :)\n\nThanks,\n\nStephen",
"msg_date": "Tue, 19 Jan 2021 13:45:46 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Wed, 20 Jan 2021 at 02:45, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Stephen Frost (sfrost@snowman.net) wrote:\n> > * Craig Ringer (craig.ringer@enterprisedb.com) wrote:\n> > > On Thu, 14 Jan 2021 at 03:44, Stephen Frost <sfrost@snowman.net>\n> wrote:\n> > > > Alright, how does this look? The new entries are all under the\n> > > > 'obsolete' section to keep it out of the main line, but should work\n> to\n> > > > 'fix' the links that currently 404 and provide a bit of a 'softer'\n> > > > landing for the other cases that currently just forcibly redirect\n> using\n> > > > the website doc alias capability.\n> > >\n> > > Thanks for expanding the change to other high profile obsoleted or\n> renamed\n> > > features and tools.\n> >\n> > Thanks for taking the time to review it and comment on it!\n> >\n> > > One minor point. I'm not sure this is quite the best way to spell the\n> index\n> > > entries:\n> > >\n> > > + <indexterm>\n> > > + <primary>obsolete</primary>\n> > > + <secondary>pg_receivexlog</secondary>\n> > > + </indexterm>\n> > >\n> > > as it will produce an index term \"obsolete\" with a list of various\n> > > components under it. 
While that concentrates them nicely, it means\n> people\n> > > won't actually find them if they're using the index alphabetically.\n> >\n> > Ah, yeah, that's definitely a good point and one that I hadn't really\n> > spent much time thinking about.\n> >\n> > > I'd slightly prefer\n> > >\n> > > + <indexterm>\n> > > + <primary>pg_receivexlog</primary>\n> > > + <seealso>pg_receivewal</secondary>\n> > > + </indexterm>\n> > >\n> > > even though that bulks the index up a little, because then people are\n> a bit\n> > > more likely to find it.\n> >\n> > Yup, makes sense, updated patch attached which makes that change.\n> >\n> > > > I ended up not actually doing this for the catalog -> view change of\n> > > > pg_replication_slots simply because I don't really think folks will\n> > > > misunderstand or be confused by that redirect since it's still the\n> same\n> > > > relation. If others disagree though, we could certainly change that\n> > > > too.\n> > >\n> > > I agree with you.\n> >\n> > Ok, great.\n> >\n> > How does the attached look then?\n>\n\nPretty good to me. Thanks so much for your help and support with this.\n\n\nIndex entries render as e.g.\n\n pg_xlogdump, The pg_xlogdump command\n (see also pg_waldump)\n\nwheras with the obsolete subhead they would render as something like:\n\n obsolete, Obsolete or renamed features, settings and files\n pg_xlogdump, The pg_xlogdump command\n\nThe see also spelling is much easier to find in the index but doesn't make\nit as obvious that it's obsoleted/replaced.\n\nA look at the doxygen docs suggest we should use <see> not <seealso> for\nthese.\n\nA quick\n\n sed -i -e 's/<seealso>/<see>/g' -e 's/<\\/seealso>/<\\/see>/g'\ndoc/src/sgml/appendix-obsolete*\n\ncauses them to render much better:\n\n pg_receivexlog, The pg_receivexlog command (see pg_receivewal)\n\nIt might be worth changing the <title/>s too, so I've done so in the\nattached. 
The terms now render as:\n\n pg_receivexlog, pg_receivexlog renamed to pg_recievewal (see\npg_receivewal)\n\nwhich is good enough in my opinion. The duplication is messy but an\nexpected artifact of index generation. I don't see any docbook <indexterm>\nattribute that lets you suppress insertion of the <title> of the section\ncontaining the <indexterm>, and it's not worth fiddling to try to eliminate\nit with structural hacks.\n\nThe attached changes the titles, changes <seealso> to <see>, and also\nupdates the comments in the obsolete entries SGML docs to specify that the\nid must be unchanged + give a recommended index term format.",
"msg_date": "Thu, 21 Jan 2021 10:19:02 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Craig Ringer (craig.ringer@enterprisedb.com) wrote:\n> Pretty good to me. Thanks so much for your help and support with this.\n\nThanks for helping me move it forward!\n\n> Index entries render as e.g.\n> \n> pg_xlogdump, The pg_xlogdump command\n> (see also pg_waldump)\n> \n> wheras with the obsolete subhead they would render as something like:\n> \n> obsolete, Obsolete or renamed features, settings and files\n> pg_xlogdump, The pg_xlogdump command\n> \n> The see also spelling is much easier to find in the index but doesn't make\n> it as obvious that it's obsoleted/replaced.\n> \n> A look at the doxygen docs suggest we should use <see> not <seealso> for\n> these.\n> \n> A quick\n> \n> sed -i -e 's/<seealso>/<see>/g' -e 's/<\\/seealso>/<\\/see>/g'\n> doc/src/sgml/appendix-obsolete*\n> \n> causes them to render much better:\n> \n> pg_receivexlog, The pg_receivexlog command (see pg_receivewal)\n> \n> It might be worth changing the <title/>s too, so I've done so in the\n> attached. The terms now render as:\n> \n> pg_receivexlog, pg_receivexlog renamed to pg_recievewal (see\n> pg_receivewal)\n> \n> which is good enough in my opinion. The duplication is messy but an\n> expected artifact of index generation. I don't see any docbook <indexterm>\n> attribute that lets you suppress insertion of the <title> of the section\n> containing the <indexterm>, and it's not worth fiddling to try to eliminate\n> it with structural hacks.\n\nNice, yes, that does look better.\n\n> The attached changes the titles, changes <seealso> to <see>, and also\n> updates the comments in the obsolete entries SGML docs to specify that the\n> id must be unchanged + give a recommended index term format.\n\nAwesome, attached is just a rebase (not that anything really changed).\nUnless someone wants to speak up, I'll commit this soonish (hopefully\ntomorrow, but at least sometime later this week).\n\nThanks!\n\nStephen",
"msg_date": "Mon, 22 Mar 2021 13:02:47 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> Awesome, attached is just a rebase (not that anything really changed).\n> Unless someone wants to speak up, I'll commit this soonish (hopefully\n> tomorrow, but at least sometime later this week).\n\nAlright, as this took a bit more playing with to work cleanly in the\nback-branches, I'm putting it back out there with full patches for all\nthe back-branches, in case anyone sees anything I missed, but I think I\ngot it all right and the docs build for me and at least going through\nall the new pages, everything looks good to me.\n\nNaturally, only included the appropriate pieces in each of the back\nbranches (v10 got the xlog -> WAL changes, v11 had the same, v12 had\nthose plus the recovery.conf changes, as did v13 and HEAD).\n\nOnce these all go in, I'll update the default roles patch as discussed\nelsewhere and backpatch that too. If there's other things we've done\nthat would be good to include here, I'd be happy to work with anyone\nwho's interested in putting in the effort to add more. For now, this\nseems like a pretty good set though.\n\nUnless there's anything further, will commit these soon.\n\nThanks!\n\nStephen",
"msg_date": "Sun, 28 Mar 2021 13:22:16 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "On Sun, Mar 28, 2021, at 2:22 PM, Stephen Frost wrote:\n> Unless there's anything further, will commit these soon.\nI briefly looked at this patch and have a few comments.\n\n+<sect1 id=\"app-pgreceivexlog\" xreflabel=\"pg_receivexlog\">\n+ <title><command>pg_receivexlog</command> renamed to <command>pg_recievewal</command></title>\n\ns/pg_recievewal/pg_receivewal/\n\n+<appendix id=\"appendix-obsolete\">\n+ <title>Obsolete or renamed features, settings and files</title>\n\nSection titles are capitalized so it should be \"Obsolete or Renamed Features,\nSettings and Files\". I find this section name too descriptive. I didn't follow\nthis thread but maybe we might use a generic name that is also shorter than it\nsuch as \"Incompatible or Obsolete Features\".\n\n+ <primary>trigger_file</primary>\n+ <see>promote_trigger_file</see>\n+ </indexterm>\n+ setting has been renamed to\n+ <xref linkend=\"guc-promote-trigger-file\"/>\n+ </para>\n\nThere should be a period after the <xref>.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Sun, Mar 28, 2021, at 2:22 PM, Stephen Frost wrote:Unless there's anything further, will commit these soon.I briefly looked at this patch and have a few comments.+<sect1 id=\"app-pgreceivexlog\" xreflabel=\"pg_receivexlog\">+ <title><command>pg_receivexlog</command> renamed to <command>pg_recievewal</command></title>s/pg_recievewal/pg_receivewal/+<appendix id=\"appendix-obsolete\">+ <title>Obsolete or renamed features, settings and files</title>Section titles are capitalized so it should be \"Obsolete or Renamed Features,Settings and Files\". I find this section name too descriptive. 
I didn't followthis thread but maybe we might use a generic name that is also shorter than itsuch as \"Incompatible or Obsolete Features\".+ <primary>trigger_file</primary>+ <see>promote_trigger_file</see>+ </indexterm>+ setting has been renamed to+ <xref linkend=\"guc-promote-trigger-file\"/>+ </para>There should be a period after the <xref>.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Sun, 28 Mar 2021 16:55:59 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Euler Taveira (euler@eulerto.com) wrote:\n> On Sun, Mar 28, 2021, at 2:22 PM, Stephen Frost wrote:\n> > Unless there's anything further, will commit these soon.\n> I briefly looked at this patch and have a few comments.\n> \n> +<sect1 id=\"app-pgreceivexlog\" xreflabel=\"pg_receivexlog\">\n> + <title><command>pg_receivexlog</command> renamed to <command>pg_recievewal</command></title>\n> \n> s/pg_recievewal/pg_receivewal/\n\nHah! Good catch. Fixed.\n\n> +<appendix id=\"appendix-obsolete\">\n> + <title>Obsolete or renamed features, settings and files</title>\n> \n> Section titles are capitalized so it should be \"Obsolete or Renamed Features,\n> Settings and Files\". I find this section name too descriptive. I didn't follow\n> this thread but maybe we might use a generic name that is also shorter than it\n> such as \"Incompatible or Obsolete Features\".\n\nNot sure how others feel, but 'incompatible' doesn't seem quite right to\nme- we renamed some things between major versions but they're as\ncompatible as they were before. Having it be shorter does make sense to\nme, so I updated the patch with \"Obsolete or Renamed Features\" instead.\n\nI also reworded the section underneath a bit to mention renaming and to\ninclude a comment about features, settings, and file names.\n\n> + <primary>trigger_file</primary>\n> + <see>promote_trigger_file</see>\n> + </indexterm>\n> + setting has been renamed to\n> + <xref linkend=\"guc-promote-trigger-file\"/>\n> + </para>\n> \n> There should be a period after the <xref>.\n\nGood catch, fixed.\n\nUpdated patches attached.\n\nThanks!\n\nStephen",
"msg_date": "Sun, 28 Mar 2021 17:20:41 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Stephen Frost (sfrost@snowman.net) wrote:\n> > Awesome, attached is just a rebase (not that anything really changed).\n> > Unless someone wants to speak up, I'll commit this soonish (hopefully\n> > tomorrow, but at least sometime later this week).\n> \n> Alright, as this took a bit more playing with to work cleanly in the\n> back-branches, I'm putting it back out there with full patches for all\n> the back-branches, in case anyone sees anything I missed, but I think I\n> got it all right and the docs build for me and at least going through\n> all the new pages, everything looks good to me.\n> \n> Naturally, only included the appropriate pieces in each of the back\n> branches (v10 got the xlog -> WAL changes, v11 had the same, v12 had\n> those plus the recovery.conf changes, as did v13 and HEAD).\n> \n> Once these all go in, I'll update the default roles patch as discussed\n> elsewhere and backpatch that too. If there's other things we've done\n> that would be good to include here, I'd be happy to work with anyone\n> who's interested in putting in the effort to add more. For now, this\n> seems like a pretty good set though.\n> \n> Unless there's anything further, will commit these soon.\n\nAnd done.\n\nThanks all!\n\nStephen",
"msg_date": "Wed, 31 Mar 2021 16:27:06 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add docs stub for recovery.conf"
}
] |
[
{
"msg_contents": "Hi\n\nI notice that there are no Nullif case in eval_const_expression.\nSince Nullif is similar to Opexpr and is easy to implement,\nI try add this case in eval_const_expressions_mutator.\n\nBest regards,\nhouzj",
"msg_date": "Tue, 10 Nov 2020 11:31:47 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Add Nullif case for eval_const_expressions_mutator"
},
{
"msg_contents": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com> writes:\n> I notice that there are no Nullif case in eval_const_expression.\n> Since Nullif is similar to Opexpr and is easy to implement,\n> I try add this case in eval_const_expressions_mutator.\n\nI think this patch should be about a tenth the size. Try modeling\nit on the T_SubscriptingRef-etc case, ie, use ece_generic_processing\nand then ece_evaluate_expr to cover the generic cases. OpExpr is\ncommon enough to deserve specially optimized code, but NullIf isn't,\nso shorter is better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 11 Jan 2021 11:20:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add Nullif case for eval_const_expressions_mutator"
},
{
"msg_contents": "> > I notice that there are no Nullif case in eval_const_expression.\n> > Since Nullif is similar to Opexpr and is easy to implement, I try add\n> > this case in eval_const_expressions_mutator.\n> \n> I think this patch should be about a tenth the size. Try modeling it on\n> the T_SubscriptingRef-etc case, ie, use ece_generic_processing and then\n> ece_evaluate_expr to cover the generic cases. OpExpr is common enough to\n> deserve specially optimized code, but NullIf isn't, so shorter is better.\n\nThanks for the review.\n\nAttaching v2 patch , which followed the suggestion \nto use ece_generic_processing and ece_evaluate_expr to simplify the code.\n\nPlease have a check.\n\nBest regards,\nhouzj",
"msg_date": "Tue, 12 Jan 2021 04:58:34 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Add Nullif case for eval_const_expressions_mutator"
},
{
"msg_contents": "> > I think this patch should be about a tenth the size. Try modeling it\n> > on the T_SubscriptingRef-etc case, ie, use ece_generic_processing and\n> > then ece_evaluate_expr to cover the generic cases. OpExpr is common\n> > enough to deserve specially optimized code, but NullIf isn't, so shorter\n> is better.\n> \n> Thanks for the review.\n> \n> Attaching v2 patch , which followed the suggestion to use\n> ece_generic_processing and ece_evaluate_expr to simplify the code.\n> \n> Please have a check.\n\nSorry, I found the code still be simplified better.\nAttaching the new patch.\n\nBest regards,\nhouzj",
"msg_date": "Tue, 12 Jan 2021 06:43:09 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Add Nullif case for eval_const_expressions_mutator"
},
{
"msg_contents": "On 2021-01-12 07:43, Hou, Zhijie wrote:\n>>> I think this patch should be about a tenth the size. Try modeling it\n>>> on the T_SubscriptingRef-etc case, ie, use ece_generic_processing and\n>>> then ece_evaluate_expr to cover the generic cases. OpExpr is common\n>>> enough to deserve specially optimized code, but NullIf isn't, so shorter\n>> is better.\n>>\n>> Thanks for the review.\n>>\n>> Attaching v2 patch , which followed the suggestion to use\n>> ece_generic_processing and ece_evaluate_expr to simplify the code.\n>>\n>> Please have a check.\n> \n> Sorry, I found the code still be simplified better.\n> Attaching the new patch.\n\nIt's a bit unfortunate now that between OpExpr, DistinctExpr, \nNullIfExpr, and to a lesser extent ScalarArrayOpExpr we will now have \nseveral different implementations of nearly the same thing, without any \nexplanation why one approach was chosen here and another there. We \nshould at least document this.\n\nSome inconsistencies I found: The code for DistinctExpr calls \nexpression_tree_mutator() directly, but your code for NullIfExpr calls \nece_generic_processing(), even though the explanation in the comment for \nDistinctExpr would apply there as well.\n\nYour code for NullIfExpr doesn't appear to call set_opfuncid() anywhere.\n\nI would move your new block for NullIfExpr after the block for \nDistinctExpr. That's the order in which these blocks appear elsewhere \nfor generic node processing.\n\nCheck your whitespace usage:\n\n if(!has_nonconst_input)\n\nshould have a space after the \"if\". (It's easy to fix of course, but I \nfigure I'd point it out here since you have submitted several patches \nwith this style, so it's perhaps a habit to break.)\n\nPerhaps add a comment to the tests like this so it's clear what they are \nfor:\n\ndiff --git a/src/test/regress/sql/case.sql b/src/test/regress/sql/case.sql\nindex 4742e1d0e0..98e3fb8de5 100644\n--- a/src/test/regress/sql/case.sql\n+++ b/src/test/regress/sql/case.sql\n@@ -137,6 +137,7 @@ CREATE TABLE CASE2_TBL (\n FROM CASE_TBL a, CASE2_TBL b\n WHERE COALESCE(f,b.i) = 2;\n\n+-- Tests for constant subexpression simplification\n explain (costs off)\n SELECT * FROM CASE_TBL WHERE NULLIF(1, 2) = 2;\n\n\n",
"msg_date": "Fri, 15 Jan 2021 07:59:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Nullif case for eval_const_expressions_mutator"
},
{
"msg_contents": "Hi\n\nThanks for the review.\n\n> It's a bit unfortunate now that between OpExpr, DistinctExpr, NullIfExpr,\n> and to a lesser extent ScalarArrayOpExpr we will now have several different\n> implementations of nearly the same thing, without any explanation why one\n> approach was chosen here and another there. We should at least document\n> this.\n\nI am not quiet sure where to document the difference.\nTemporarily, I tried to add some comments for the Nullif to explain why this one is different.\n\n+\t\t\t\t/*\n+\t\t\t\t * Since NullIf is not common enough to deserve specially\n+\t\t\t\t * optimized code, use ece_generic_processing and\n+\t\t\t\t * ece_evaluate_expr to simplify the code as much as possible.\n+\t\t\t\t */\n\nAny suggestions ?\n\n> Some inconsistencies I found: The code for DistinctExpr calls\n> expression_tree_mutator() directly, but your code for NullIfExpr calls\n> ece_generic_processing(), even though the explanation in the comment for\n> DistinctExpr would apply there as well.\n> \n> Your code for NullIfExpr doesn't appear to call set_opfuncid() anywhere.\n\nIMO, we will call set_opfuncid in function ece_evaluate_expr.\n\nLike the following flow:\n\tece_evaluate_expr-->evaluate_expr--> fix_opfuncids--> fix_opfuncids_walker--> set_opfuncid\n\nAnd we do not need the opfuncid till we call ece_evaluate_expr.\nSo, to simplify the code as much as possible, I did not call set_opfuncid in eval_const_expressions_mutator again.\n\n\n> I would move your new block for NullIfExpr after the block for DistinctExpr.\n> That's the order in which these blocks appear elsewhere for generic node\n> processing.\n> \n\nChanged.\n\n\n> Check your whitespace usage:\n> \n> if(!has_nonconst_input)\n> \n> should have a space after the \"if\". (It's easy to fix of course, but I\n> figure I'd point it out here since you have submitted several patches with\n> this style, so it's perhaps a habit to break.)\n\nChanged.\n\n\n> Perhaps add a comment to the tests like this so it's clear what they are\n> for:\n> \n> diff --git a/src/test/regress/sql/case.sql\n> b/src/test/regress/sql/case.sql index 4742e1d0e0..98e3fb8de5 100644\n> --- a/src/test/regress/sql/case.sql\n> +++ b/src/test/regress/sql/case.sql\n> @@ -137,6 +137,7 @@ CREATE TABLE CASE2_TBL (\n> FROM CASE_TBL a, CASE2_TBL b\n> WHERE COALESCE(f,b.i) = 2;\n> \n> +-- Tests for constant subexpression simplification\n> explain (costs off)\n> SELECT * FROM CASE_TBL WHERE NULLIF(1, 2) = 2;\n\nAdded.\n\nAttatching v3 patch, please consider it for further review.\n\nBest regards,\nhouzj",
"msg_date": "Wed, 20 Jan 2021 01:16:38 +0000",
"msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Add Nullif case for eval_const_expressions_mutator"
},
{
"msg_contents": "On 1/19/21 8:16 PM, Hou, Zhijie wrote:\n> \n> Attatching v3 patch, please consider it for further review.\n\nPeter, thoughts on the new patch in [1]?\n\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/ab53b3dbdbd6436f970f33b51ccd7dd3%40G08CNEXMBPEKD05.g08.fujitsu.local\n\n\n",
"msg_date": "Tue, 23 Mar 2021 10:31:46 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Add Nullif case for eval_const_expressions_mutator"
},
{
"msg_contents": "David Steele <david@pgmasters.net> writes:\n> Peter, thoughts on the new patch in [1]?\n\nI'm not Peter, but I have a complaint about this bit:\n\n+\t\t\t\tif (!has_nonconst_input)\n+\t\t\t\t\treturn ece_evaluate_expr(expr);\n\nThat's not okay without a further check to see if the comparison function\nused by the node is immutable. Compare ScalarArrayOpExpr, for instance.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Mar 2021 11:45:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add Nullif case for eval_const_expressions_mutator"
},
{
"msg_contents": "> +\t\t\t\tif (!has_nonconst_input)\n> +\t\t\t\t\treturn ece_evaluate_expr(expr);\n> \n> That's not okay without a further check to see if the comparison function used\n> by the node is immutable. Compare ScalarArrayOpExpr, for instance.\n\nThanks for pointing it out.\nAttaching new patch with this change.\n\nBest regards,\nhouzj",
"msg_date": "Wed, 24 Mar 2021 10:52:04 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Add Nullif case for eval_const_expressions_mutator"
},
{
"msg_contents": "On 24.03.21 11:52, houzj.fnst@fujitsu.com wrote:\n>> +\t\t\t\tif (!has_nonconst_input)\n>> +\t\t\t\t\treturn ece_evaluate_expr(expr);\n>>\n>> That's not okay without a further check to see if the comparison function used\n>> by the node is immutable. Compare ScalarArrayOpExpr, for instance.\n> \n> Thanks for pointing it out.\n> Attaching new patch with this change.\n\nThis patch looks okay to me and addresses all the feedback that was \ngiven. If there are no more comments, I'll commit it in a few days.\n\n\n",
"msg_date": "Tue, 30 Mar 2021 11:20:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Nullif case for eval_const_expressions_mutator"
},
{
"msg_contents": "On 30.03.21 11:20, Peter Eisentraut wrote:\n> On 24.03.21 11:52, houzj.fnst@fujitsu.com wrote:\n>>> + if (!has_nonconst_input)\n>>> + return ece_evaluate_expr(expr);\n>>>\n>>> That's not okay without a further check to see if the comparison \n>>> function used\n>>> by the node is immutable. Compare ScalarArrayOpExpr, for instance.\n>>\n>> Thanks for pointing it out.\n>> Attaching new patch with this change.\n> \n> This patch looks okay to me and addresses all the feedback that was \n> given. If there are no more comments, I'll commit it in a few days.\n\ndone\n\n\n",
"msg_date": "Fri, 2 Apr 2021 11:08:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add Nullif case for eval_const_expressions_mutator"
}
] |
[
{
"msg_contents": "Hi all,\n\nFollowing the report of Coverity that led to 3636efa, I have reviewed\nthe existing callers of parsePGArray() in pg_dump and some of its\nerror handling is a bit sloppy.\n\nIt could theoretically be possible to reach an OOM in parsePGArray()\nwith a dump able to finish. This is very unlikely going to matter in\npractice as an OOM when parsing an array is most likely going to\ntrigger a fatal failure in one of the follow-up allocations, but if\nthe dump is able to go through we could finish with a valid dump that\nlacks some information:\n- Statistics for indexes.\n- Run-time configuration of functions.\n- Configuration of extensions.\n- Publication list for a subscription.\n\nI would like to propose the attached to tighten the error handling in\nthe area, generating a fatal error if an array cannot be parsed. I\ndid not see the point of changing the assumptions we use for the\nparsing of function args or such when it comes to pre-8.4 dumps. This\nissue is unlikely going to matter in practice, so I don't propose a\nbackpatch.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 11 Nov 2020 15:13:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Sloppiness around failure handling of parsePGArray in pg_dump"
},
{
"msg_contents": "> On 11 Nov 2020, at 07:13, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I would like to propose the attached to tighten the error handling in\n> the area, generating a fatal error if an array cannot be parsed.\n\nI agree that we should fix this even if it will have quite limited impact in\nproduction settings. Patch LGTM, +1.\n\n> I did not see the point of changing the assumptions we use for the parsing of\n> function args or such when it comes to pre-8.4 dumps.\n\n\nAnother thing caught my eye here (while not the fault of this patch), we ensure\nto clean up array leftover in case of parsePGArray failure, but we don't clean\nup the potential allocations from the previous calls. Something like the below\nseems more consistent.\n\n@@ -12105,6 +12099,8 @@ dumpFunc(Archive *fout, FuncInfo *finfo)\n nitems != nallargs)\n {\n pg_log_warning(\"could not parse proargmodes array\");\n+ if (allargtypes)\n+ free(allargtypes);\n if (argmodes)\n free(argmodes);\n argmodes = NULL;\n@@ -12119,6 +12115,10 @@ dumpFunc(Archive *fout, FuncInfo *finfo)\n nitems != nallargs)\n {\n pg_log_warning(\"could not parse proargnames array\");\n+ if (allargtypes)\n+ free(allargtypes);\n+ if (argmodes)\n+ free(argmodes);\n if (argnames)\n free(argnames);\n argnames = NULL;\n\n> This issue is unlikely going to matter in practice, so I don't propose a\n> backpatch.\n\nAgreed, unless it's easier for dealing with backpatching other things, that\nwould be the only reason I reckon.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 18 Nov 2020 10:19:40 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Sloppiness around failure handling of parsePGArray in pg_dump"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 10:19:40AM +0100, Daniel Gustafsson wrote:\n> I agree that we should fix this even if it will have quite limited impact in\n> production settings. Patch LGTM, +1.\n\nThanks. I have reviewed that again this morning and applied it.\n\n> Another thing caught my eye here (while not the fault of this patch), we ensure\n> to clean up array leftover in case of parsePGArray failure, but we don't clean\n> up the potential allocations from the previous calls. Something like the below\n> seems more consistent.\n> \n> @@ -12105,6 +12099,8 @@ dumpFunc(Archive *fout, FuncInfo *finfo)\n> nitems != nallargs)\n> {\n> pg_log_warning(\"could not parse proargmodes array\");\n> + if (allargtypes)\n> + free(allargtypes);\n> if (argmodes)\n> free(argmodes);\n> argmodes = NULL;\n> @@ -12119,6 +12115,10 @@ dumpFunc(Archive *fout, FuncInfo *finfo)\n> nitems != nallargs)\n> {\n> pg_log_warning(\"could not parse proargnames array\");\n> + if (allargtypes)\n> + free(allargtypes);\n> + if (argmodes)\n> + free(argmodes);\n> if (argnames)\n> free(argnames);\n> argnames = NULL;\n\nIf you do that, I think that's not completely correct either.\nformat_function_arguments_old() has some logic to allow the process to\ngo through for pre-8.4 dumps as long as allargtypes includes correct\ndata even if argmodes and/or argnames parsing has failed, so you would\ncause a crash as long as nallargs is set if you free allargtypes like\nthat. You could allow those free calls if resetting nallargs to 0 if\nthe parsing of argmodes or argnames has failed, but that would make\nthe logic less flexible.\n--\nMichael",
"msg_date": "Thu, 19 Nov 2020 10:37:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Sloppiness around failure handling of parsePGArray in pg_dump"
},
{
"msg_contents": "> On 19 Nov 2020, at 02:37, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Nov 18, 2020 at 10:19:40AM +0100, Daniel Gustafsson wrote:\n>> I agree that we should fix this even if it will have quite limited impact in\n>> production settings. Patch LGTM, +1.\n> \n> Thanks. I have reviewed that again this morning and applied it.\n> \n>> Another thing caught my eye here (while not the fault of this patch), we ensure\n>> to clean up array leftover in case of parsePGArray failure, but we don't clean\n>> up the potential allocations from the previous calls. Something like the below\n>> seems more consistent.\n>> \n>> @@ -12105,6 +12099,8 @@ dumpFunc(Archive *fout, FuncInfo *finfo)\n>> nitems != nallargs)\n>> {\n>> pg_log_warning(\"could not parse proargmodes array\");\n>> + if (allargtypes)\n>> + free(allargtypes);\n>> if (argmodes)\n>> free(argmodes);\n>> argmodes = NULL;\n>> @@ -12119,6 +12115,10 @@ dumpFunc(Archive *fout, FuncInfo *finfo)\n>> nitems != nallargs)\n>> {\n>> pg_log_warning(\"could not parse proargnames array\");\n>> + if (allargtypes)\n>> + free(allargtypes);\n>> + if (argmodes)\n>> + free(argmodes);\n>> if (argnames)\n>> free(argnames);\n>> argnames = NULL;\n> \n> If you do that, I think that's not completely correct either.\n> format_function_arguments_old() has some logic to allow the process to\n> go through for pre-8.4 dumps as long as allargtypes includes correct\n\nAh, yes, I read that wrong. It's correct as it is.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 19 Nov 2020 10:36:26 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Sloppiness around failure handling of parsePGArray in pg_dump"
}
] |
[
{
"msg_contents": "Hello. This is a thread for an alternative solution to wal_level=none\n[*1] for bulk data loading.\n\n*1: https://www.postgresql.org/message-id/TYAPR01MB29901EBE5A3ACCE55BA99186FE320%40TYAPR01MB2990.jpnprd01.prod.outlook.com\n\nAt Tue, 10 Nov 2020 09:33:12 -0500, Stephen Frost <sfrost@snowman.net> wrote in \n> Greetings,\n> \n> * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > For fuel(?) of the discussion, I tried a very-quick PoC for in-place\n> > ALTER TABLE SET LOGGED/UNLOGGED and resulted as attached. After some\n> > trials of several ways, I drifted to the following way after poking\n> > several ways.\n> > \n> > 1. Flip BM_PERMANENT of active buffers\n> > 2. adding/removing init fork\n> > 3. sync files,\n> > 4. Flip pg_class.relpersistence.\n> > \n> > It always skips table copy in the SET UNLOGGED case, and only when\n> > wal_level=minimal in the SET LOGGED case. Crash recovery seems\n> > working by some brief testing by hand.\n> \n> Somehow missed that this patch more-or-less does what I was referring to\n> down-thread, but I did want to mention that it looks like it's missing a\n> necessary FlushRelationBuffers() call before the sync, otherwise there\n> could be dirty buffers for the relation that's being set to LOGGED (with\n> wal_level=minimal), which wouldn't be good. See the comments above\n> smgrimmedsync().\n\nRight. Thanks. However, since SetRelFileNodeBuffersPersistence()\ncalled just above scans shared buffers so I don't want to just call\nFlushRelationBuffers() separately. Instead, I added buffer-flush to\nSetRelFileNodeBuffersPersistence().\n\nFWIW this is a revised version of the PoC, which has some known\nproblems.\n\n- Flipping of Buffer persistence is not WAL-logged nor even be able to\n be safely roll-backed. (It might be better to drop buffers).\n\n- This version handles indexes but not yet handle toast relatins.\n\n- tableAMs are supposed to support this feature. (but I'm not sure\n it's worth allowing them not to do so).\n\n> > Of course, I haven't performed intensive test on it.\n> \n> Reading through the thread, it didn't seem very clear, but we should\n> definitely make sure that it does the right thing on replicas when going\n> between unlogged and logged (and between logged and unlogged too), of\n> course.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 11 Nov 2020 17:33:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "In-placre persistance change of a relation"
},
{
"msg_contents": "Greetings,\n\n* Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> At Tue, 10 Nov 2020 09:33:12 -0500, Stephen Frost <sfrost@snowman.net> wrote in \n> > * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > > For fuel(?) of the discussion, I tried a very-quick PoC for in-place\n> > > ALTER TABLE SET LOGGED/UNLOGGED and resulted as attached. After some\n> > > trials of several ways, I drifted to the following way after poking\n> > > several ways.\n> > > \n> > > 1. Flip BM_PERMANENT of active buffers\n> > > 2. adding/removing init fork\n> > > 3. sync files,\n> > > 4. Flip pg_class.relpersistence.\n> > > \n> > > It always skips table copy in the SET UNLOGGED case, and only when\n> > > wal_level=minimal in the SET LOGGED case. Crash recovery seems\n> > > working by some brief testing by hand.\n> > \n> > Somehow missed that this patch more-or-less does what I was referring to\n> > down-thread, but I did want to mention that it looks like it's missing a\n> > necessary FlushRelationBuffers() call before the sync, otherwise there\n> > could be dirty buffers for the relation that's being set to LOGGED (with\n> > wal_level=minimal), which wouldn't be good. See the comments above\n> > smgrimmedsync().\n> \n> Right. Thanks. However, since SetRelFileNodeBuffersPersistence()\n> called just above scans shared buffers so I don't want to just call\n> FlushRelationBuffers() separately. Instead, I added buffer-flush to\n> SetRelFileNodeBuffersPersistence().\n\nMaybe I'm missing something, but it sure looks like in the patch that\nSetRelFileNodeBuffersPersistence() is being called after the\nsmgrimmedsync() call, and I don't think you get to just switch the order\nof those- the sync is telling the kernel to make sure it's written to\ndisk, while the FlushBuffer() is just writing it into the kernel but\ndoesn't provide any guarantee that the data has actually made it to\ndisk. We have to FlushBuffer() first, and then call smgrimmedsync().\nPerhaps there's a way to avoid having to go through shared buffers\ntwice, and I generally agreed it'd be good if we could avoid doing so,\nbut this approach doesn't look like it actually works.\n\n> FWIW this is a revised version of the PoC, which has some known\n> problems.\n> \n> - Flipping of Buffer persistence is not WAL-logged nor even be able to\n> be safely roll-backed. (It might be better to drop buffers).\n\nNot sure if it'd be better to drop buffers or not, but figuring out how\nto deal with rollback seems pretty important. How is the persistence\nchange in the catalog not WAL-logged though..?\n\n> - This version handles indexes but not yet handle toast relatins.\n\nWould need to be fixed, of course.\n\n> - tableAMs are supposed to support this feature. (but I'm not sure\n> it's worth allowing them not to do so).\n\nSeems like they should.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 11 Nov 2020 09:56:44 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: In-place persistance change of a relation"
},
{
"msg_contents": "Hi,\n\nI suggest outlining what you are trying to achieve here. Starting a new\nthread and expecting people to dig through another thread to infer what\nyou are actually trying to achive isn't great.\n\nFWIW, I'm *extremely* doubtful it's worth adding features that depend on\na PGC_POSTMASTER wal_level=minimal being used. Which this does, a far as\nI understand. If somebody added support for dynamically adapting\nwal_level (e.g. wal_level=auto, that increases wal_level to\nreplica/logical depending on the presence of replication slots), it'd\nperhaps be different.\n\n\nOn 2020-11-11 17:33:17 +0900, Kyotaro Horiguchi wrote:\n> FWIW this is a revised version of the PoC, which has some known\n> problems.\n> \n> - Flipping of Buffer persistence is not WAL-logged nor even be able to\n> be safely roll-backed. (It might be better to drop buffers).\n\nThat's obviously a no-go. I think you might be able to address this if\nyou accept that the command cannot be run in a transaction (like\nCONCURRENTLY). Then you can first do the catalog changes, change the\npersistence level, and commit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Nov 2020 14:18:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Wed, 11 Nov 2020 14:18:04 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> I suggest outlining what you are trying to achieve here. Starting a new\n> thread and expecting people to dig through another thread to infer what\n> you are actually trying to achive isn't great.\n\nAgreed. I'll post that. Thanks.\n\n> FWIW, I'm *extremely* doubtful it's worth adding features that depend on\n> a PGC_POSTMASTER wal_level=minimal being used. Which this does, a far as\n> I understand. If somebody added support for dynamically adapting\n> wal_level (e.g. wal_level=auto, that increases wal_level to\n> replica/logical depending on the presence of replication slots), it'd\n> perhaps be different.\n\nYes, this depends on wal_level=minimal for switching from UNLOGGED to\nLOGGED, that's similar to COPY/INSERT-to-intransaction-created-tables\noptimization for wal_level=minimal. And it expands that optimization\nto COPY/INSERT-to-existent-tables, which seems worth doing.\n\nSwitching to LOGGED needs to emit the initial state to WAL... Hmm.. I\ncame to think that even in that case skipping table copy reduces I/O\nsignificantly, even though FPI-WAL is emitted.\n\n\n> On 2020-11-11 17:33:17 +0900, Kyotaro Horiguchi wrote:\n> > FWIW this is a revised version of the PoC, which has some known\n> > problems.\n> > \n> > - Flipping of Buffer persistence is not WAL-logged nor even be able to\n> > be safely roll-backed. (It might be better to drop buffers).\n> \n> That's obviously a no-go. I think you might be able to address this if\n> you accept that the command cannot be run in a transaction (like\n> CONCURRENTLY). Then you can first do the catalog changes, change the\n> persistence level, and commit.\n\nOf course. The next version reverts persistence change at abort.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 Nov 2020 15:55:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Wed, 11 Nov 2020 09:56:44 -0500, Stephen Frost <sfrost@snowman.net> wrote in \n> Greetings,\n> \n> * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > At Tue, 10 Nov 2020 09:33:12 -0500, Stephen Frost <sfrost@snowman.net> wrote in \n> > > * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > > > For fuel(?) of the discussion, I tried a very-quick PoC for in-place\n> > > > ALTER TABLE SET LOGGED/UNLOGGED and resulted as attached. After some\n> > > > trials of several ways, I drifted to the following way after poking\n> > > > several ways.\n> > > > \n> > > > 1. Flip BM_PERMANENT of active buffers\n> > > > 2. adding/removing init fork\n> > > > 3. sync files,\n> > > > 4. Flip pg_class.relpersistence.\n> > > > \n> > > > It always skips table copy in the SET UNLOGGED case, and only when\n> > > > wal_level=minimal in the SET LOGGED case. Crash recovery seems\n> > > > working by some brief testing by hand.\n> > > \n> > > Somehow missed that this patch more-or-less does what I was referring to\n> > > down-thread, but I did want to mention that it looks like it's missing a\n> > > necessary FlushRelationBuffers() call before the sync, otherwise there\n> > > could be dirty buffers for the relation that's being set to LOGGED (with\n> > > wal_level=minimal), which wouldn't be good. See the comments above\n> > > smgrimmedsync().\n> > \n> > Right. Thanks. However, since SetRelFileNodeBuffersPersistence()\n> > called just above scans shared buffers so I don't want to just call\n> > FlushRelationBuffers() separately. Instead, I added buffer-flush to\n> > SetRelFileNodeBuffersPersistence().\n> \n> Maybe I'm missing something, but it sure looks like in the patch that\n> SetRelFileNodeBuffersPersistence() is being called after the\n> smgrimmedsync() call, and I don't think you get to just switch the order\n> of those- the sync is telling the kernel to make sure it's written to\n> disk, while the FlushBuffer() is just writing it into the kernel but\n> doesn't provide any guarantee that the data has actually made it to\n> disk. We have to FlushBuffer() first, and then call smgrimmedsync().\n> Perhaps there's a way to avoid having to go through shared buffers\n> twice, and I generally agreed it'd be good if we could avoid doing so,\n> but this approach doesn't look like it actually works.\n\nYeah, sorry for the rare-baked version.. I was confused about the\norder at the time. The next version works like this:\n\nLOGGED->UNLOGGED\n<collect reloids to process>\n\nfor each relations:\n <set buffer persistence to !BM_PERMANENT (wal-logged if walleve > minimal>\n <create init fork>\n if it is index call ambuildempty() (which syncs the init fork)\n else WAL-log smgr_create then sync the init file.\n <update catalog>\n...\ncommit time:\n <do nogthing>\nabort time:\n <unlink init fork>\n <revert buffer persistence>\n\nUNLOGGED->LOGGED\n<collect reloids to process>\n\nfor each relations:\n <set buffer persistence to !BM_PERMANENT (wal-logged if walleve > minimal>\n <record drop-init-fork to pending-deletes>\n <sync storage files>\n <update catalog>\n...\ncommit time:\n <log smgrunlink>\n <smgrunlink init fork>\nabort time:\n <revert buffer persistence>\n\n\n> > FWIW this is a revised version of the PoC, which has some known\n> > problems.\n> > \n> > - Flipping of Buffer persistence is not WAL-logged nor even be able to\n> > be safely roll-backed. (It might be better to drop buffers).\n> \n> Not sure if it'd be better to drop buffers or not, but figuring out how\n> to deal with rollback seems pretty important. How is the persistence\n> change in the catalog not WAL-logged though..?\n\nRollback works as the above. Buffer persistence change is registered\nin pending-deletes. Persistence change in catalog is rolled back in\nthe ordinary way (or automatically).\n\nIf wal_level > minimal, persistence change of buffers is propagated to\nstandbys by WAL. However I'm not sure we need wal-logging otherwise,\nthe next version emits WAL since SMGR_CREATE is always logged by\nexisting code.\n\n> > - This version handles indexes but not yet handle toast relatins.\n> \n> Would need to be fixed, of course.\n\nFixed.\n\n> > - tableAMs are supposed to support this feature. (but I'm not sure\n> > it's worth allowing them not to do so).\n> \n> Seems like they should.\n\nInit fork of index relations needs a call to ambuildempty() instead of\n\"log_smgrcreate-smgrimmedsync\" after smgrcreate. Instead of adding\nsimilar interface in indexAm, I reverted changes of tableam and make\nRelationCreate/DropInitFork() directly do that. That introduces new\ninclude of amapi.h to storage.c, which is a bit uneasy.\n\nThe previous version give up the in-place persistence change in the\ncase where wal_level > minimal and SET LOGGED since that needs WAL to\nbe emitted. However, in-place change still has the advantage of not\nrunning a table copy. So the next verson always runs persistence\nchange in-place.\n\nAs suggested by Andres, I'll send a summary of this patch. The patch\nwill be attached to the coming mail.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 Nov 2020 15:55:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-place persistance change of a relation"
},
{
"msg_contents": "Hello. Before posting the next version, I'd like to explain what this\npatch is.\n\n1. The Issue\n\nBulk data loading is a long-time taking, I/O consuming task. Many\nDBAs want that task is faster, even at the cost of increasing risk of\ndata-loss. wal_level=minimal is an answer to such a\nrequest. Data-loading onto a table that is created in the current\ntransaction omits WAL-logging and synced at commit.\n\nHowever, the optimization doesn't benefit the case where the\ndata-loading is performed onto existing tables. There are quite a few\ncases where data is loaded into tables that already contains a lot of\ndata. Those cases don't take benefit of the optimization.\n\nAnother possible solution for bulk data-loading is UNLOGGED\ntables. But when we switch LOGGED/UNLOGGED of a table, all the table\ncontent is copied to a newly created heap, which is costly.\n\n\n2. Proposed Solutions.\n\nThere are two proposed solutions are discussed on this mailing\nlist. One is wal_level = none (*1), which omits WAL-logging almost at\nall. Another is extending the existing optimization to the ALTER TABLE\nSET LOGGED/UNLOGGED cases, which is to be discussed in this new\nthread.\n\n\n3. In-place Persistence Change\n\nSo the attached is a PoC patch of the \"another\" solution. When we\nwant to change table persistence in-place, basically we need to do the\nfollowing steps.\n\n(the talbe is exclusively locked)\n\n(1) Flip BM_PERMANENT flag of all shared buffer blocks for the heap.\n\n(2) Create or delete the init fork for existing heap.\n\n(3) Flush all buffers of the relation to file system.\n\n(4) Sync heap files.\n\n(5) Make catalog changes.\n\n\n4. Transactionality\n\nThe 1, 2 and 5 above need to be abort-able. 5 is rolled back by\nexisting infrastructure, and rolling-back of 1 and 2 are achieved by\npiggybacking on the pendingDeletes mechanism.\n\n\n5. Replication\n\nFurthermore, that changes ought to be replicable to standbys. 
Catalog\nchanges are replicated as usual.\n\nOn-the-fly creation of the init fork leads to a recovery mess. Even\nthough it is removed at abort, if the server crashes before\ntransaction end, the file is left alone and corrupts the database in the\nnext recovery. I sought a way to create the init fork in\nsmgrPendingDelete but that needs the relcache, and the relcache is not\navailable that late in commit. Finally, I introduced a fifth fork\nkind, \"INITTMP\" (_itmp), only to signal that the init file is not\ncommitted. I don't like that approach but it seems to work fine...\n\n\n6. SQL Command\n\nThe second file in the patchset adds a syntax that changes the persistence\nof all tables in a tablespace.\n\nALTER TABLE ALL IN TABLESPACE <tsp> SET LOGGED/UNLOGGED [ NOWAIT ];\n\n\n7. Testing\n\nI tried to write a TAP test for this, but IPC::Run::harness (or\ninteractive_psql) doesn't seem to work for me. I'm not sure what\nexactly is happening, but pty redirection doesn't work.\n\n $in = \"ls\\n\"; $out = \"\"; run [\"/usr/bin/bash\"], \\$in, \\$out; print $out;\n\nworks but\n\n $in = \"ls\\n\"; $out = \"\"; run [\"/usr/bin/bash\"], '<pty<', \\$in, '>pty>', \\$out; print $out;\n\ndoesn't respond.\n\n\nThe patch is attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 13 Nov 2020 13:22:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hi Horiguchi-san,\n\n\nThank you for making a patch so quickly. I've started looking at it.\n\nWhat makes you think this is a PoC? Documentation and test cases? If there's something you think that doesn't work or are concerned about, can you share it?\n\nDo you know the reason why data copy was done before? And, it may be odd for me to ask this, but I think I saw someone referred to the past discussion that eliminating data copy is difficult due to some processing at commit. I can't find it.\n\n\n\n(1)\n@@ -168,6 +168,8 @@ extern PGDLLIMPORT int32 *LocalRefCount;\n */\n #define BufferGetPage(buffer) ((Page)BufferGetBlock(buffer))\n \n+struct SmgrRelationData;\n\nThis declaration is already in the file:\n\n/* forward declared, to avoid having to expose buf_internals.h here */\nstruct WritebackContext;\n\n/* forward declared, to avoid including smgr.h here */\nstruct SMgrRelationData;\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Fri, 13 Nov 2020 06:43:13 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: In-placre persistance change of a relation"
},
{
"msg_contents": "Hello, Tsunakawa-San\n\n\n> Do you know the reason why data copy was done before? And, it may be\n> odd for me to ask this, but I think I saw someone referred to the past\n> discussion that eliminating data copy is difficult due to some processing at\n> commit. I can't find it.\nI can share 2 sources why to eliminate the data copy is difficult in hackers thread.\n\nTom's remark and the context to copy relation's data.\nhttps://www.postgresql.org/message-id/flat/31724.1394163360%40sss.pgh.pa.us#31724.1394163360@sss.pgh.pa.us\n\nAmit-San quoted this thread and mentioned that point in another thread.\nhttps://www.postgresql.org/message-id/CAA4eK1%2BHDqS%2B1fhs5Jf9o4ZujQT%3DXBZ6sU0kOuEh2hqQAC%2Bt%3Dw%40mail.gmail.com\n\nBest,\n\tTakamichi Osumi\n\n\n",
"msg_date": "Fri, 13 Nov 2020 07:15:41 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: In-placre persistance change of a relation"
},
{
"msg_contents": "At Fri, 13 Nov 2020 06:43:13 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> Hi Horiguchi-san,\n> \n> \n> Thank you for making a patch so quickly. I've started looking at it.\n> \n> What makes you think this is a PoC? Documentation and test cases? If there's something you think that doesn't work or are concerned about, can you share it?\n\nThe latest version is heavily revised and is given much comment so it\nmight have exited from PoC state. The necessity of documentation is\ndoubtful since this patch doesn't user-facing behavior other than\nspeed. Some tests are required especialy about recovery and\nreplication perspective but I haven't been able to make it. (One of\nthe tests needs to cause crash while a transaction is running.)\n\n> Do you know the reason why data copy was done before? And, it may be odd for me to ask this, but I think I saw someone referred to the past discussion that eliminating data copy is difficult due to some processing at commit. I can't find it.\n\nTo imagine that, just because it is simpler considering rollback and\ncode sharing, and maybe no one have been complained that SET\nLOGGED/UNLOGGED looks taking a long time than required/expected.\n\nThe current implement is simple. It's enough to just discard old or\nnew relfilenode according to the current transaction ends with commit\nor abort. Tweaking of relfilenode under use leads-in some skews in\nsome places. I used pendingDelete mechanism a bit complexified way\nand a violated an abstraction (I think, calling AM-routines from\nstorage.c is not good.) and even introduce a new fork kind only to\nmark a init fork as \"not committed yet\". 
There might be better way,\nbut I haven't find it.\n\n(The patch scans all shared buffer blocks for each relation).\n\n> (1)\n> @@ -168,6 +168,8 @@ extern PGDLLIMPORT int32 *LocalRefCount;\n> */\n> #define BufferGetPage(buffer) ((Page)BufferGetBlock(buffer))\n> \n> +struct SmgrRelationData;\n> \n> This declaration is already in the file:\n> \n> /* forward declared, to avoid having to expose buf_internals.h here */\n> struct WritebackContext;\n> \n> /* forward declared, to avoid including smgr.h here */\n> struct SMgrRelationData;\n\nHmmm. Nice chatch. And will fix in the next version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 13 Nov 2020 16:47:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Fri, 13 Nov 2020 07:15:41 +0000, \"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> wrote in \n> Hello, Tsunakawa-San\n> \n\nThanks for sharing it!\n\n> > Do you know the reason why data copy was done before? And, it may be\n> > odd for me to ask this, but I think I saw someone referred to the past\n> > discussion that eliminating data copy is difficult due to some processing at\n> > commit. I can't find it.\n> I can share 2 sources why to eliminate the data copy is difficult in hackers thread.\n> \n> Tom's remark and the context to copy relation's data.\n> https://www.postgresql.org/message-id/flat/31724.1394163360%40sss.pgh.pa.us#31724.1394163360@sss.pgh.pa.us\n\nhttps://www.postgresql.org/message-id/CA+Tgmob44LNwwU73N1aJsGQyzQ61SdhKJRC_89wCm0+aLg=x2Q@mail.gmail.com\n\n> No, not really. The issue is more around what happens if we crash\n> part way through. At crash recovery time, the system catalogs are not\n> available, because the database isn't consistent yet and, anyway, the\n> startup process can't be bound to a database, let alone every database\n> that might contain unlogged tables. So the sentinel that's used to\n> decide whether to flush the contents of a table or index is the\n> presence or absence of an _init fork, which the startup process\n> obviously can see just fine. The _init fork also tells us what to\n> stick in the relation when we reset it; for a table, we can just reset\n> to an empty file, but that's not legal for indexes, so the _init fork\n> contains a pre-initialized empty index that we can just copy over.\n> \n> Now, to make an unlogged table logged, you've got to at some stage\n> remove those _init forks. But this is not a transactional operation.\n> If you remove the _init forks and then the transaction rolls back,\n> you've left the system an inconsistent state. If you postpone the\n> removal until commit time, then you have a problem if it fails,\n\nIt's true. 
That is the cause of the headache.\n\n> particularly if it works for the first file but fails for the second.\n> And if you crash at any point before you've fsync'd the containing\n> directory, you have no idea which files will still be on disk after a\n> hard reboot.\n\nThis is not an issue in this patch *except* in the case where the init fork\nfails to be removed but the following removal of the inittmp fork\nsucceeds. Another idea is adding a \"not-yet-committed\" property to a\nfork. I added a new fork type for ease of implementation, but I could\ngo that way if this is an issue.\n\n> Amit-San quoted this thread and mentioned that point in another thread.\n> https://www.postgresql.org/message-id/CAA4eK1%2BHDqS%2B1fhs5Jf9o4ZujQT%3DXBZ6sU0kOuEh2hqQAC%2Bt%3Dw%40mail.gmail.com\n\nThis sounds like a somewhat different discussion. Making part of a table\nUNLOGGED looks far more difficult to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 13 Nov 2020 17:23:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > No, not really. The issue is more around what happens if we crash\n> > part way through. At crash recovery time, the system catalogs are not\n> > available, because the database isn't consistent yet and, anyway, the\n> > startup process can't be bound to a database, let alone every database\n> > that might contain unlogged tables. So the sentinel that's used to\n> > decide whether to flush the contents of a table or index is the\n> > presence or absence of an _init fork, which the startup process\n> > obviously can see just fine. The _init fork also tells us what to\n> > stick in the relation when we reset it; for a table, we can just reset\n> > to an empty file, but that's not legal for indexes, so the _init fork\n> > contains a pre-initialized empty index that we can just copy over.\n> >\n> > Now, to make an unlogged table logged, you've got to at some stage\n> > remove those _init forks. But this is not a transactional operation.\n> > If you remove the _init forks and then the transaction rolls back,\n> > you've left the system an inconsistent state. If you postpone the\n> > removal until commit time, then you have a problem if it fails,\n> \n> It's true. That are the cause of headache.\n...\n> The current implement is simple. It's enough to just discard old or\n> new relfilenode according to the current transaction ends with commit\n> or abort. Tweaking of relfilenode under use leads-in some skews in\n> some places. I used pendingDelete mechanism a bit complexified way\n> and a violated an abstraction (I think, calling AM-routines from\n> storage.c is not good.) and even introduce a new fork kind only to\n> mark a init fork as \"not committed yet\". There might be better way,\n> but I haven't find it.\n\nI have no alternative idea yet, too. I agree that we want to avoid them, especially introducing inittmp fork... Anyway, below are the rest of my review comments for 0001. 
I want to review 0002 when we have decided to go with 0001.\n\n\n(2)\nXLOG_SMGR_UNLINK seems to necessitate modification of the following comments:\n\n[src/include/catalog/storage_xlog.h]\n/*\n * Declarations for smgr-related XLOG records\n *\n * Note: we log file creation and truncation here, but logging of deletion\n * actions is handled by xact.c, because it is part of transaction commit.\n */\n\n[src/backend/access/transam/README]\n3. Deleting a table, which requires an unlink() that could fail.\n\nOur approach here is to WAL-log the operation first, but to treat failure\nof the actual unlink() call as a warning rather than error condition.\nAgain, this can leave an orphan file behind, but that's cheap compared to\nthe alternatives. Since we can't actually do the unlink() until after\nwe've committed the DROP TABLE transaction, throwing an error would be out\nof the question anyway. (It may be worth noting that the WAL entry about\nthe file deletion is actually part of the commit record for the dropping\ntransaction.)\n\n\n(3)\n+/* This is bit-map, not ordianal numbers */\n\nThere seems to be no comments using \"bit-map\". 
\"Flags for ...\" can be seen here and there.\n\n\n(4)\nSome wrong spellings:\n\n+\t\t\t/* we flush this buffer when swithing to PERMANENT */\n\nswithing -> switching\n\n+\t\t * alredy flushed out by RelationCreate(Drop)InitFork called just\n\nalredy -> already\n\n+\t\t * relation content to be WAL-logged to recovery the table.\n\nrecovery -> recover\n\n+\t * The inittmp fork works as the sentinel to identify that situaton.\n\nsituaton -> situation\n\n\n(5)\n+\ttable_close(classRel, NoLock);\n+\n+\n+\n+\n }\n\nThese empty lines can be deleted.\n\n\n(6)\n+/*\n+ * Perform XLogInsert of an XLOG_SMGR_UNLINK record to WAL.\n+ */\n+void\n+log_smgrbufpersistence(const RelFileNode *rnode, bool persistence)\n...\n+\t * Make an XLOG entry reporting the file unlink.\n\nNot unlink but buffer persistence?\n\n\n(7)\n+\t/*\n+\t * index-init fork needs further initialization. ambuildempty shoud do\n+\t * WAL-log and file sync by itself but otherwise we do that by myself.\n+\t */\n+\tif (rel->rd_rel->relkind == RELKIND_INDEX)\n+\t\trel->rd_indam->ambuildempty(rel);\n+\telse\n+\t{\n+\t\tlog_smgrcreate(&rnode, INIT_FORKNUM);\n+\t\tsmgrimmedsync(srel, INIT_FORKNUM);\n+\t}\n+\n+\t/*\n+\t * We have created the init fork. If server crashes before the current\n+\t * transaction ends the init fork left alone corrupts data while recovery.\n+\t * The inittmp fork works as the sentinel to identify that situaton.\n+\t */\n+\tsmgrcreate(srel, INITTMP_FORKNUM, false);\n+\tlog_smgrcreate(&rnode, INITTMP_FORKNUM);\n+\tsmgrimmedsync(srel, INITTMP_FORKNUM);\n\nIf the server crashes between these two processings, only the init fork exists. 
Is it correct to create the inittmp fork first?\n\n\n(8)\n+\tif (inxact_created)\n+\t{\n+\t\tSMgrRelation srel = smgropen(rnode, InvalidBackendId);\n+\t\tsmgrclose(srel);\n+\t\tlog_smgrunlink(&rnode, INIT_FORKNUM);\n+\t\tsmgrunlink(srel, INIT_FORKNUM, false);\n\t+\t\tlog_smgrunlink(&rnode, INITTMP_FORKNUM);\n+\t\tsmgrunlink(srel, INITTMP_FORKNUM, false);\n+\t\treturn;\n+\t}\n\nsmgrclose() should be called just before return.\nIsn't it necessary here to revert buffer persistence state change?\n\n\n(9)\n+void\n+smgrunlink(SMgrRelation reln, ForkNumber forknum, bool isRedo)\n+{\n+\tsmgrsw[reln->smgr_which].smgr_unlink(reln->smgr_rnode, forknum, isRedo);\n+}\n\nMaybe it's better to restore smgrdounlinkfork() that was removed in the older release. That function includes dropping shared buffers, which can clean up the shared buffers that may be cached by this transaction.\n\n\n(10)\n[RelationDropInitFork]\n+\t/* revert buffer-persistence changes at abort */\n+\tpending = (PendingRelDelete *)\n+\t\tMemoryContextAlloc(TopMemoryContext, sizeof(PendingRelDelete));\n+\tpending->relnode = rnode;\n+\tpending->op = PDOP_SET_PERSISTENCE;\n+\tpending->bufpersistence = false;\n+\tpending->backend = InvalidBackendId;\n+\tpending->atCommit = true;\n+\tpending->nestLevel = GetCurrentTransactionNestLevel();\n+\tpending->next = pendingDeletes;\n+\tpendingDeletes = pending;\n+}\n\nbufpersistence should be true.\n\n\n(11)\n+\t\t\t\tBlockNumber block = 0;\n...\n+\t\t\t\tDropRelFileNodeBuffers(rbnode, &pending->unlink_forknum, 1,\n+\t\t\t\t\t\t\t\t\t &block);\n\n\"block\" is unnecessary and 0 can be passed directly.\n\n\n(12)\n-\t\t\t&& pending->backend == InvalidBackendId)\n+\t\t\t&& pending->backend == InvalidBackendId &&\n+\t\t\tpending->op == PDOP_DELETE)\n \t\t\tnrels++;\n\nIt's better to put && at the beginning of the line to follow the existing code here.\n\n\n(13)\n+\ttable_close(rel, lockmode);\n\nlockmode should be NoLock to retain the lock until transaction 
completion.\n\n\n\n(14)\n+\tctl.keysize = sizeof(unlogged_relation_entry);\n+\tctl.entrysize = sizeof(unlogged_relation_entry);\n+\thash = hash_create(\"unlogged hash\", 32, &ctl, HASH_ELEM);\n...\n+\t\t\tmemset(key.oid, 0, sizeof(key.oid));\n+\t\t\tmemcpy(key.oid, de->d_name, oidchars);\n+\t\t\tent = hash_search(hash, &key, HASH_FIND, NULL);\n\nkeysize should be the size of the oid member of the struct.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Fri, 4 Dec 2020 07:49:22 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: In-placre persistance change of a relation"
},
{
"msg_contents": "Thanks for the comment! Sorry for the late reply.\n\nAt Fri, 4 Dec 2020 07:49:22 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > > No, not really. The issue is more around what happens if we crash\n> > > part way through. At crash recovery time, the system catalogs are not\n> > > available, because the database isn't consistent yet and, anyway, the\n> > > startup process can't be bound to a database, let alone every database\n> > > that might contain unlogged tables. So the sentinel that's used to\n> > > decide whether to flush the contents of a table or index is the\n> > > presence or absence of an _init fork, which the startup process\n> > > obviously can see just fine. The _init fork also tells us what to\n> > > stick in the relation when we reset it; for a table, we can just reset\n> > > to an empty file, but that's not legal for indexes, so the _init fork\n> > > contains a pre-initialized empty index that we can just copy over.\n> > >\n> > > Now, to make an unlogged table logged, you've got to at some stage\n> > > remove those _init forks. But this is not a transactional operation.\n> > > If you remove the _init forks and then the transaction rolls back,\n> > > you've left the system an inconsistent state. If you postpone the\n> > > removal until commit time, then you have a problem if it fails,\n> > \n> > It's true. That are the cause of headache.\n> ...\n> > The current implement is simple. It's enough to just discard old or\n> > new relfilenode according to the current transaction ends with commit\n> > or abort. Tweaking of relfilenode under use leads-in some skews in\n> > some places. I used pendingDelete mechanism a bit complexified way\n> > and a violated an abstraction (I think, calling AM-routines from\n> > storage.c is not good.) and even introduce a new fork kind only to\n> > mark a init fork as \"not committed yet\". 
There might be better way,\n> > but I haven't find it.\n> \n> I have no alternative idea yet, too. I agree that we want to avoid them, especially introducing inittmp fork... Anyway, below are the rest of my review comments for 0001. I want to review 0002 when we have decided to go with 0001.\n> \n> \n> (2)\n> XLOG_SMGR_UNLINK seems to necessitate modification of the following comments:\n> \n> [src/include/catalog/storage_xlog.h]\n> /*\n> * Declarations for smgr-related XLOG records\n> *\n> * Note: we log file creation and truncation here, but logging of deletion\n> * actions is handled by xact.c, because it is part of transaction commit.\n> */\n\nSure. Rewrote it.\n\n> [src/backend/access/transam/README]\n> 3. Deleting a table, which requires an unlink() that could fail.\n> \n> Our approach here is to WAL-log the operation first, but to treat failure\n> of the actual unlink() call as a warning rather than error condition.\n> Again, this can leave an orphan file behind, but that's cheap compared to\n> the alternatives. Since we can't actually do the unlink() until after\n> we've committed the DROP TABLE transaction, throwing an error would be out\n> of the question anyway. (It may be worth noting that the WAL entry about\n> the file deletion is actually part of the commit record for the dropping\n> transaction.)\n\nMmm. I didn't touch the DROP TABLE (RelationDropStorage) path, but I\nadded a brief description of the INITTMP fork to the file.\n\n====\nThe INITTMP fork file\n--------------------------------\n\nAn INITTMP fork is created when a new relation file is created, to mark\nthat the relfilenode needs to be cleaned up at recovery time. The file is\nremoved at transaction end but is left behind when the process crashes before\nthe transaction ends. 
In contrast to 4 above, failure to remove an\nINITTMP file will lead to data loss, in which case the server will\nshut down.\n====\n\n> (3)\n> +/* This is bit-map, not ordianal numbers */\n> \n> There seems to be no comments using \"bit-map\". \"Flags for ...\" can be seen here and there.\n\nI removed the comment and used (1 << n) notation to show that\ninstead.\n\n\n> (4)\n> Some wrong spellings:\n> \n> swithing -> switching\n> alredy -> already\n> recovery -> recover\n> situaton -> situation\n\nOops! Fixed them.\n\n> (5)\n> +\ttable_close(classRel, NoLock);\n> +\n> +\n> +\n> +\n> }\n> \n> These empty lines can be deleted.\n\ns/can/should/ :p. Fixed.\n\n> \n> (6)\n> +/*\n> + * Perform XLogInsert of an XLOG_SMGR_UNLINK record to WAL.\n> + */\n> +void\n> +log_smgrbufpersistence(const RelFileNode *rnode, bool persistence)\n> ...\n> +\t * Make an XLOG entry reporting the file unlink.\n> \n> Not unlink but buffer persistence?\n\nSilly copy-pasto. Fixed.\n\n> (7)\n> +\t/*\n> +\t * index-init fork needs further initialization. ambuildempty shoud do\n> +\t * WAL-log and file sync by itself but otherwise we do that by myself.\n> +\t */\n> +\tif (rel->rd_rel->relkind == RELKIND_INDEX)\n> +\t\trel->rd_indam->ambuildempty(rel);\n> +\telse\n> +\t{\n> +\t\tlog_smgrcreate(&rnode, INIT_FORKNUM);\n> +\t\tsmgrimmedsync(srel, INIT_FORKNUM);\n> +\t}\n> +\n> +\t/*\n> +\t * We have created the init fork. If server crashes before the current\n> +\t * transaction ends the init fork left alone corrupts data while recovery.\n> +\t * The inittmp fork works as the sentinel to identify that situaton.\n> +\t */\n> +\tsmgrcreate(srel, INITTMP_FORKNUM, false);\n> +\tlog_smgrcreate(&rnode, INITTMP_FORKNUM);\n> +\tsmgrimmedsync(srel, INITTMP_FORKNUM);\n> \n> If the server crashes between these two processings, only the init fork exists. Is it correct to create the inittmp fork first?\n\nRight. 
I changed it that way, and did the same with the new code added\nto RelationCreateStorage.\n\n> (8)\n> +\tif (inxact_created)\n> +\t{\n> +\t\tSMgrRelation srel = smgropen(rnode, InvalidBackendId);\n> +\t\tsmgrclose(srel);\n> +\t\tlog_smgrunlink(&rnode, INIT_FORKNUM);\n> +\t\tsmgrunlink(srel, INIT_FORKNUM, false);\n> \t+\t\tlog_smgrunlink(&rnode, INITTMP_FORKNUM);\n> +\t\tsmgrunlink(srel, INITTMP_FORKNUM, false);\n> +\t\treturn;\n> +\t}\n> \n> smgrclose() should be called just before return.\n> Isn't it necessary here to revert buffer persistence state change?\n\nMmm, it's a thinko. I was confused with the close/unlink case. Fixed all instances of the same.\n\n> (9)\n> +void\n> +smgrunlink(SMgrRelation reln, ForkNumber forknum, bool isRedo)\n> +{\n> +\tsmgrsw[reln->smgr_which].smgr_unlink(reln->smgr_rnode, forknum, isRedo);\n> +}\n> \n> Maybe it's better to restore smgrdounlinkfork() that was removed in the older release. That function includes dropping shared buffers, which can clean up the shared buffers that may be cached by this transaction.\n\nINIT/INITTMP forks cannot be loaded into shared buffers, so it's no\nuse dropping buffers. 
I added a comment like that.\n\n|\t/*\n|\t * INIT/INITTMP forks never be loaded to shared buffer so no point in\n|\t * dropping buffers for these files.\n|\t */\n|\tlog_smgrunlink(&rnode, INIT_FORKNUM);\n\nI removed DropRelFileNodeBuffers from the PDOP_UNLINK_FORK branch in\nsmgrDoPendingDeletes and added an assertion and a comment instead.\n\n|\t\t/* other forks needs to drop buffers */\n|\t\tAssert(pending->unlink_forknum == INIT_FORKNUM ||\n|\t\t\t pending->unlink_forknum == INITTMP_FORKNUM);\n|\n|\t\tlog_smgrunlink(&pending->relnode, pending->unlink_forknum);\n|\t\tsmgrunlink(srel, pending->unlink_forknum, false);\n\n\n> (10)\n> [RelationDropInitFork]\n> +\t/* revert buffer-persistence changes at abort */\n> +\tpending = (PendingRelDelete *)\n> +\t\tMemoryContextAlloc(TopMemoryContext, sizeof(PendingRelDelete));\n> +\tpending->relnode = rnode;\n> +\tpending->op = PDOP_SET_PERSISTENCE;\n> +\tpending->bufpersistence = false;\n> +\tpending->backend = InvalidBackendId;\n> +\tpending->atCommit = true;\n> +\tpending->nestLevel = GetCurrentTransactionNestLevel();\n> +\tpending->next = pendingDeletes;\n> +\tpendingDeletes = pending;\n> +}\n> \n> bufpersistence should be true.\n\nRelationDropInitFork() changes the relation persistence to\n\"persistent\", so it should be reverted to \"non-persistent (= false)\" at\nabort. 
(I agree that the function name is somewhat confusing...)\n\n\n> (11)\n> +\t\t\t\tBlockNumber block = 0;\n> ...\n> +\t\t\t\tDropRelFileNodeBuffers(rbnode, &pending->unlink_forknum, 1,\n> +\t\t\t\t\t\t\t\t\t &block);\n> \n> \"block\" is unnecessary and 0 can be passed directly.\n\nI removed the entire function call.\n\nBut I don't think you're right here.\n\n| DropRelFileNodeBuffers(RelFileNodeBackend rnode, ForkNumber *forkNum,\n| \t\t\t\t\t int nforks, BlockNumber *firstDelBlock)\n\nDoesn't just passing 0 lead to SEGV?\n\n> (12)\n> -\t\t\t&& pending->backend == InvalidBackendId)\n> +\t\t\t&& pending->backend == InvalidBackendId &&\n> +\t\t\tpending->op == PDOP_DELETE)\n> \t\t\tnrels++;\n> \n> It's better to put && at the beginning of the line to follow the existing code here.\n\nIt's terrible. Fixed.\n\n\n> (13)\n> +\ttable_close(rel, lockmode);\n> \n> lockmode should be NoLock to retain the lock until transaction completion.\n\nI tried to recall the reason for that, but didn't come up with\nanything. Fixed.\n\n> (14)\n> +\tctl.keysize = sizeof(unlogged_relation_entry);\n> +\tctl.entrysize = sizeof(unlogged_relation_entry);\n> +\thash = hash_create(\"unlogged hash\", 32, &ctl, HASH_ELEM);\n> ...\n> +\t\t\tmemset(key.oid, 0, sizeof(key.oid));\n> +\t\t\tmemcpy(key.oid, de->d_name, oidchars);\n> +\t\t\tent = hash_search(hash, &key, HASH_FIND, NULL);\n> \n> keysize should be the oid member of the struct.\n\nIt's not a problem since the first member is the oid, and perhaps I\nthought to do something more with it. Now I don't recall what it was,\nand in the first place the key should be just an Oid in\nthe context above. Fixed.\n\nThe patch is attached to the next message.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 24 Dec 2020 17:02:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hello.\n\nAt Thu, 24 Dec 2020 17:02:20 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> The patch is attached to the next message.\n\nThe reason for separating this message is that I modified this so that\nit could solve another issue.\n\nThere's a complain about orphan files after crash. [1]\n\n1: https://www.postgresql.org/message-id/16771-cbef7d97ba93f4b9@postgresql.org\n\nThat is, the case where a relation file is left alone after a server\ncrash that happened before the end of the transaction that has created\na relation. As I read this, I noticed this feature can solve the\nissue with a small change.\n\nThis version gets changes in RelationCreateStorage and\nsmgrDoPendingDeletes.\n\nPreviously inittmp fork is created only along with an init fork. This\nversion creates one always when a relation storage file is created. As\nthe result ResetUnloggedRelationsInDbspaceDir removes all forks if the\ninttmp fork of a logged relations is found. Now that pendingDeletes\ncan contain multiple entries for the same relation, it has been\nmodified not to close the same smgr multiple times.\n\n- It might be better to split 0001 into two peaces.\n\n- The function name ResetUnloggedRelationsInDbspaceDir is no longer\n represents the function correctly.\n\t \nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 25 Dec 2020 09:12:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Fri, 25 Dec 2020 09:12:52 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Hello.\n> \n> At Thu, 24 Dec 2020 17:02:20 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > The patch is attached to the next message.\n> \n> The reason for separating this message is that I modified this so that\n> it could solve another issue.\n> \n> There's a complain about orphan files after crash. [1]\n> \n> 1: https://www.postgresql.org/message-id/16771-cbef7d97ba93f4b9@postgresql.org\n> \n> That is, the case where a relation file is left alone after a server\n> crash that happened before the end of the transaction that has created\n> a relation. As I read this, I noticed this feature can solve the\n> issue with a small change.\n> \n> This version gets changes in RelationCreateStorage and\n> smgrDoPendingDeletes.\n> \n> Previously inittmp fork is created only along with an init fork. This\n> version creates one always when a relation storage file is created. As\n> the result ResetUnloggedRelationsInDbspaceDir removes all forks if the\n> inttmp fork of a logged relations is found. Now that pendingDeletes\n> can contain multiple entries for the same relation, it has been\n> modified not to close the same smgr multiple times.\n> \n> - It might be better to split 0001 into two peaces.\n> \n> - The function name ResetUnloggedRelationsInDbspaceDir is no longer\n> represents the function correctly.\n\nAs pointed by Robert in another thread [1], persisntence of (at least)\nGiST index cannot be flipped in-place due to incompatibility of fake\nLSNs with real ones.\n\nThis version RelationChangePersistence() is changed not to choose\nin-place method for indexes other than btree. 
It seems to be usable\nwith all kinds of indexes other than GiST, but at the moment it applies\nonly to btrees.\n\n1: https://www.postgresql.org/message-id/CA+TgmoZEZ5RONS49C7mEpjhjndqMQtVrz_LCQUkpRWdmRevDnQ@mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 08 Jan 2021 14:47:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Fri, 08 Jan 2021 14:47:05 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> This version RelationChangePersistence() is changed not to choose\n> in-place method for indexes other than btree. It seems to be usable\n> with all kind of indexes other than Gist, but at the mement it applies\n> only to btrees.\n> \n> 1: https://www.postgresql.org/message-id/CA+TgmoZEZ5RONS49C7mEpjhjndqMQtVrz_LCQUkpRWdmRevDnQ@mail.gmail.com\n\nHmm. This is not wroking correctly. I'll repost after fixint that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 08 Jan 2021 17:52:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Fri, 08 Jan 2021 17:52:21 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 08 Jan 2021 14:47:05 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > This version RelationChangePersistence() is changed not to choose\n> > in-place method for indexes other than btree. It seems to be usable\n> > with all kind of indexes other than Gist, but at the mement it applies\n> > only to btrees.\n> > \n> > 1: https://www.postgresql.org/message-id/CA+TgmoZEZ5RONS49C7mEpjhjndqMQtVrz_LCQUkpRWdmRevDnQ@mail.gmail.com\n> \n> Hmm. This is not wroking correctly. I'll repost after fixint that.\n\nI think I fixed the misbehavior. ResetUnloggedRelationsInDbspaceDir()\nhandles file operations in the wrong order and with the wrong logic.\nIt also needed to drop buffers and forget fsync requests.\n\nI thought that the two cases that this patch is expected to fix\n(orphan relation files and uncommited init files) can share the same\n\"cleanup\" fork but that is wrong. I had to add one more additional\nfork to differentiate the cases of SET UNLOGGED and of creation of\nUNLOGGED tables...\n\nThe attached is a new version, that seems working correctly but looks\nsomewhat messy. I'll continue working.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 12 Jan 2021 18:58:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Tue, 12 Jan 2021 18:58:08 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 08 Jan 2021 17:52:21 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Fri, 08 Jan 2021 14:47:05 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > This version RelationChangePersistence() is changed not to choose\n> > > in-place method for indexes other than btree. It seems to be usable\n> > > with all kind of indexes other than Gist, but at the mement it applies\n> > > only to btrees.\n> > > \n> > > 1: https://www.postgresql.org/message-id/CA+TgmoZEZ5RONS49C7mEpjhjndqMQtVrz_LCQUkpRWdmRevDnQ@mail.gmail.com\n> > \n> > Hmm. This is not wroking correctly. I'll repost after fixint that.\n> \n> I think I fixed the misbehavior. ResetUnloggedRelationsInDbspaceDir()\n> handles file operations in the wrong order and with the wrong logic.\n> It also needed to drop buffers and forget fsync requests.\n> \n> I thought that the two cases that this patch is expected to fix\n> (orphan relation files and uncommited init files) can share the same\n> \"cleanup\" fork but that is wrong. I had to add one more additional\n> fork to differentiate the cases of SET UNLOGGED and of creation of\n> UNLOGGED tables...\n> \n> The attached is a new version, that seems working correctly but looks\n> somewhat messy. I'll continue working.\n\nCommit bea449c635 conflicts with this on the change of the definition\nof DropRelFileNodeBuffers. The change simplified this patch by a bit:p\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 14 Jan 2021 17:32:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "(I'm not sure when the subject was broken..)\n\nAt Thu, 14 Jan 2021 17:32:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Commit bea449c635 conflicts with this on the change of the definition\n> of DropRelFileNodeBuffers. The change simplified this patch by a bit:p\n\nIn this version, I got rid of the \"CLEANUP FORK\"s, and added a new\nsystem \"Smgr marks\". The mark files have the name of the\ncorresponding fork file followed by \".u\" (which means Uncommitted.).\n\"Uncommited\"-marked main fork means the same as the CLEANUP2_FORKNUM\nand uncommitted-marked init fork means the same as the CLEANUP_FORKNUM\nin the previous version.x\n\nI noticed that the previous version of the patch still leaves an\norphan main fork file after \"BEGIN; CREATE TABLE x; ROLLBACK; (crash\nbefore checkpoint)\" since the \"mark\" file (or CLEANUP2_FORKNUM) is\nrevmoed at rollback. In this version the responsibility to remove the\nmark files is moved to SyncPostCheckpoint, where main fork files are\nactually removed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 25 Mar 2021 14:08:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Thu, 25 Mar 2021 14:08:05 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> (I'm not sure when the subject was broken..)\n> \n> At Thu, 14 Jan 2021 17:32:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Commit bea449c635 conflicts with this on the change of the definition\n> > of DropRelFileNodeBuffers. The change simplified this patch by a bit:p\n> \n> In this version, I got rid of the \"CLEANUP FORK\"s, and added a new\n> system \"Smgr marks\". The mark files have the name of the\n> corresponding fork file followed by \".u\" (which means Uncommitted.).\n> \"Uncommited\"-marked main fork means the same as the CLEANUP2_FORKNUM\n> and uncommitted-marked init fork means the same as the CLEANUP_FORKNUM\n> in the previous version.x\n> \n> I noticed that the previous version of the patch still leaves an\n> orphan main fork file after \"BEGIN; CREATE TABLE x; ROLLBACK; (crash\n> before checkpoint)\" since the \"mark\" file (or CLEANUP2_FORKNUM) is\n> revmoed at rollback. In this version the responsibility to remove the\n> mark files is moved to SyncPostCheckpoint, where main fork files are\n> actually removed.\n\nFor the record, I noticed that basebackup could be confused by the\nmark files but I haven't looked that yet.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 25 Mar 2021 14:15:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "> Kyotaro wrote:\n> > In this version, I got rid of the \"CLEANUP FORK\"s, and added a new\n> > system \"Smgr marks\". The mark files have the name of the\n> > corresponding fork file followed by \".u\" (which means Uncommitted.).\n> > \"Uncommited\"-marked main fork means the same as the\n> CLEANUP2_FORKNUM\n> > and uncommitted-marked init fork means the same as the\n> CLEANUP_FORKNUM\n> > in the previous version.x\n> >\n> > I noticed that the previous version of the patch still leaves an\n> > orphan main fork file after \"BEGIN; CREATE TABLE x; ROLLBACK; (crash\n> > before checkpoint)\" since the \"mark\" file (or CLEANUP2_FORKNUM) is\n> > revmoed at rollback. In this version the responsibility to remove the\n> > mark files is moved to SyncPostCheckpoint, where main fork files are\n> > actually removed.\n> \n> For the record, I noticed that basebackup could be confused by the mark files\n> but I haven't looked that yet.\n> \n\nGood morning Kyotaro,\n\nthe patch didn't apply clean (it's from March; some hunks were failing), so I've fixed it and the combined git format-patch is attached. 
It did conflict with the following:\n\tb0483263dda - Add support for SET ACCESS METHOD in ALTER TABLE\n\t7b565843a94 - Add call to object access hook at the end of table rewrite in ALTER TABLE\n\t9ce346eabf3 - Report progress of startup operations that take a long time.\n\tf10f0ae420 - Replace RelationOpenSmgr() with RelationGetSmgr().\n\nI'm especially worried if I didn't screw up something/forgot something related to the last one (rd->rd_smgr changes), but I'm getting \"All 210 tests passed\".\n\nBasic demonstration of this patch (with wal_level=minimal):\n\tcreate unlogged table t6 (id bigint, t text);\n\t-- produces 110GB table, takes ~5mins\n\tinsert into t6 select nextval('s1'), repeat('A', 1000) from generate_series(1, 100000000);\n\talter table t6 set logged;\n\t\ton baseline SET LOGGED takes: ~7min10s\n\t\ton patched SET LOGGED takes: 25s\n\nSo basically one can - thanks to this patch - use his application (performing classic INSERTs/UPDATEs/DELETEs, so without the need to rewrite to use COPY) and perform literally batch upload and then just switch the tables to LOGGED. 
\n\nSome more intensive testing also looks good, assuming table prepared to put pressure on WAL:\n\tcreate unlogged table t_unlogged (id bigint, t text) partition by hash (id);\n\tcreate unlogged table t_unlogged_h0 partition of t_unlogged FOR VALUES WITH (modulus 4, remainder 0);\n\t[..]\n\tcreate unlogged table t_unlogged_h3 partition of t_unlogged FOR VALUES WITH (modulus 4, remainder 3);\n\nWorkload would still be pretty heavy on LWLock/BufferContent,WALInsert and Lock/extend .\n\tt_logged.sql = insert into t_logged select nextval('s1'), repeat('A', 1000) from generate_series(1, 1000); # according to pg_wal_stats.wal_bytes generates ~1MB of WAL\n\tt_unlogged.sql = insert into t_unlogged select nextval('s1'), repeat('A', 1000) from generate_series(1, 1000); # according to pg_wal_stats.wal_bytes generates ~3kB of WAL\n\nso using: pgbench -f <tabletypetest>.sql -T 30 -P 1 -c 32 -j 3 t \nwith synchronous_commit =ON(default):\n\twith t_logged.sql: tps = 229 (lat avg = 138ms)\n\twith t_unlogged.sql tps = 283 (lat avg = 112ms) # almost all on LWLock/WALWrite\nwith synchronous_commit =OFF:\n\twith t_logged.sql: tps = 413 (lat avg = 77ms)\n\twith t_unloged.sql: tps = 782 (lat avg = 40ms)\nAfterwards switching the unlogged ~16GB partitions takes 5s per partition. \n\nAs the thread didn't get a lot of traction, I've registered it into current commitfest https://commitfest.postgresql.org/36/3461/ with You as the author and in 'Ready for review' state. \nI think it behaves as almost finished one and apparently after reading all those discussions that go back over 10years+ time span about this feature, and lot of failed effort towards wal_level=noWAL I think it would be nice to finally start getting some of that of it into the core.\n\n-Jakub Wartak.",
"msg_date": "Fri, 17 Dec 2021 09:10:30 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: In-placre persistance change of a relation"
},
{
"msg_contents": "On Fri, Dec 17, 2021 at 09:10:30AM +0000, Jakub Wartak wrote:\n> I'm especially worried if I didn't screw up something/forgot something related to the last one (rd->rd_smgr changes), but I'm getting \"All 210 tests passed\".\n\n> As the thread didn't get a lot of traction, I've registered it into current commitfest https://commitfest.postgresql.org/36/3461/ with You as the author and in 'Ready for review' state. \n> I think it behaves as almost finished one and apparently after reading all those discussions that go back over 10years+ time span about this feature, and lot of failed effort towards wal_level=noWAL I think it would be nice to finally start getting some of that of it into the core.\n\nThe patch is failing:\nhttp://cfbot.cputube.org/kyotaro-horiguchi.html\nhttps://api.cirrus-ci.com/v1/artifact/task/5564333871595520/regress_diffs/src/bin/pg_upgrade/tmp_check/regress/regression.diffs\n\nI think you ran \"make check\", but should run something like this:\nmake check-world -j8 >check-world.log 2>&1 && echo Success\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 17 Dec 2021 06:46:38 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "> Justin wrote:\n> On Fri, Dec 17, 2021 at 09:10:30AM +0000, Jakub Wartak wrote:\n> > As the thread didn't get a lot of traction, I've registered it into current\n> commitfest\n> https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcommitf\n> est.postgresql.org%2F36%2F3461%2F&data=04%7C01%7CJakub.Wartak%\n> 40tomtom.com%7Cb815e75090d44e20fd0a08d9c15b45cc%7C374f80267b544a\n> 3ab87d328fa26ec10d%7C0%7C0%7C637753420044612362%7CUnknown%7CT\n> WFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXV\n> CI6Mn0%3D%7C3000&sdata=0BTQSVDnVPu4YpECKXXlBJT5q3Gfgv099SaC\n> NuBwiW4%3D&reserved=0 with You as the author and in 'Ready for\n> review' state.\n> \n> The patch is failing:\n[..] \n> I think you ran \"make check\", but should run something like this:\n> make check-world -j8 >check-world.log 2>&1 && echo Success\n\nHi Justin,\n\nI've repeated the check-world and it says to me all is ok locally (also with --enable-cassert --enable-debug , at least on Amazon Linux 2) and also installcheck on default params seems to be ok \nI don't seem to understand why testfarm reports errors for tests like \"path, polygon, rowsecurity\" e.g. on Linux/graviton2 and FreeBSD while not on MacOS(?) . \nCould someone point to me where to start looking/fixing?\n\n-J.\n\n\n",
"msg_date": "Fri, 17 Dec 2021 14:33:25 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: In-placre persistance change of a relation"
},
{
"msg_contents": "On Fri, Dec 17, 2021 at 02:33:25PM +0000, Jakub Wartak wrote:\n> > Justin wrote:\n> > On Fri, Dec 17, 2021 at 09:10:30AM +0000, Jakub Wartak wrote:\n> > > As the thread didn't get a lot of traction, I've registered it into current\n> > commitfest\n> > https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcommitf\n> > est.postgresql.org%2F36%2F3461%2F&data=04%7C01%7CJakub.Wartak%\n> > 40tomtom.com%7Cb815e75090d44e20fd0a08d9c15b45cc%7C374f80267b544a\n> > 3ab87d328fa26ec10d%7C0%7C0%7C637753420044612362%7CUnknown%7CT\n> > WFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXV\n> > CI6Mn0%3D%7C3000&sdata=0BTQSVDnVPu4YpECKXXlBJT5q3Gfgv099SaC\n> > NuBwiW4%3D&reserved=0 with You as the author and in 'Ready for\n> > review' state.\n> > \n> > The patch is failing:\n> [..] \n> > I think you ran \"make check\", but should run something like this:\n> > make check-world -j8 >check-world.log 2>&1 && echo Success\n> \n> Hi Justin,\n> \n> I've repeated the check-world and it says to me all is ok locally (also with --enable-cassert --enable-debug , at least on Amazon Linux 2) and also installcheck on default params seems to be ok \n> I don't seem to understand why testfarm reports errors for tests like \"path, polygon, rowsecurity\" e.g. on Linux/graviton2 and FreeBSD while not on MacOS(?) . \n> Could someone point to me where to start looking/fixing?\n\nSince it says this, it looks a lot like a memory error like a use-after-free\n - like in fsync_parent_path():\n\n CREATE TABLE PATH_TBL (f1 path);\n+ERROR: could not open file <....> Pacific\": No such file or directory\n\nI see at least this one is still failing, though:\ntime make -C src/test/recovery check",
"msg_date": "Fri, 17 Dec 2021 13:10:28 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hello, Jakub.\n\nAt Fri, 17 Dec 2021 09:10:30 +0000, Jakub Wartak <Jakub.Wartak@tomtom.com> wrote in \n> the patch didn't apply clean (it's from March; some hunks were failing), so I've fixed it and the combined git format-patch is attached. It did conflict with the following:\n\nThanks for looking this. Also thanks for Justin for finding the silly\nuse-after-free bug. (Now I see the regression test fails and I'm not\nsure how come I didn't find this before.)\n\n> \tb0483263dda - Add support for SET ACCESS METHOD in ALTER TABLE\n> \t7b565843a94 - Add call to object access hook at the end of table rewrite in ALTER TABLE\n> \t9ce346eabf3 - Report progress of startup operations that take a long time.\n> \tf10f0ae420 - Replace RelationOpenSmgr() with RelationGetSmgr().\n> \n> I'm especially worried if I didn't screw up something/forgot something related to the last one (rd->rd_smgr changes), but I'm getting \"All 210 tests passed\".\n\nAbout the last one, all rel->rd_smgr acesses need to be repalced with\nRelationGetSmgr(). On the other hand we can simply remove\nRelationOpenSmgr() calls since the target smgrrelation is guaranteed\nto be loaded by RelationGetSmgr().\n\nThe fix you made for RelationCreate/DropInitFork is correct and\nchanges you made would work, but I prefer that the code not being too\npermissive for unknown (or unexpected) states.\n\n> Basic demonstration of this patch (with wal_level=minimal):\n> \tcreate unlogged table t6 (id bigint, t text);\n> \t-- produces 110GB table, takes ~5mins\n> \tinsert into t6 select nextval('s1'), repeat('A', 1000) from generate_series(1, 100000000);\n> \talter table t6 set logged;\n> \t\ton baseline SET LOGGED takes: ~7min10s\n> \t\ton patched SET LOGGED takes: 25s\n> \n> So basically one can - thanks to this patch - use his application (performing classic INSERTs/UPDATEs/DELETEs, so without the need to rewrite to use COPY) and perform literally batch upload and then just switch the tables to LOGGED. 
\n\nThis result is significant. That operation finally requires WAL writes\nbut I was not sure how much gain FPIs (or bulk WAL logging) gives in\ncomparison to operational WALs.\n\n> Some more intensive testing also looks good, assuming table prepared to put pressure on WAL:\n> \tcreate unlogged table t_unlogged (id bigint, t text) partition by hash (id);\n> \tcreate unlogged table t_unlogged_h0 partition of t_unlogged FOR VALUES WITH (modulus 4, remainder 0);\n> \t[..]\n> \tcreate unlogged table t_unlogged_h3 partition of t_unlogged FOR VALUES WITH (modulus 4, remainder 3);\n> \n> Workload would still be pretty heavy on LWLock/BufferContent,WALInsert and Lock/extend .\n> \tt_logged.sql = insert into t_logged select nextval('s1'), repeat('A', 1000) from generate_series(1, 1000); # according to pg_wal_stats.wal_bytes generates ~1MB of WAL\n> \tt_unlogged.sql = insert into t_unlogged select nextval('s1'), repeat('A', 1000) from generate_series(1, 1000); # according to pg_wal_stats.wal_bytes generates ~3kB of WAL\n> \n> so using: pgbench -f <tabletypetest>.sql -T 30 -P 1 -c 32 -j 3 t \n> with synchronous_commit =ON(default):\n> \twith t_logged.sql: tps = 229 (lat avg = 138ms)\n> \twith t_unlogged.sql tps = 283 (lat avg = 112ms) # almost all on LWLock/WALWrite\n> with synchronous_commit =OFF:\n> \twith t_logged.sql: tps = 413 (lat avg = 77ms)\n> \twith t_unloged.sql: tps = 782 (lat avg = 40ms)\n> Afterwards switching the unlogged ~16GB partitions takes 5s per partition. 
\n> \n> As the thread didn't get a lot of traction, I've registered it into current commitfest https://commitfest.postgresql.org/36/3461/ with You as the author and in 'Ready for review' state.\n> \n> I think it behaves as almost finished one and apparently after reading all those discussions that go back over 10years+ time span about this feature, and lot of failed effort towards wal_level=noWAL I think it would be nice to finally start getting some of that of it into the core.\n\nThanks for taking the performance benchmark.\n\nI didn't register for some reasons.\n\n1. I'm not sure that we want to have the new mark files.\n\n2. Aside of possible bugs, I'm not confident that the crash-safety of\n this patch is actually water-tight. At least we need tests for some\n failure cases.\n\n3. As mentioned in transam/README, failure in removing smgr mark files\n leads to immediate shut down. I'm not sure this behavior is acceptable.\n\n4. Including the reasons above, this is not fully functionally.\n For example, if we execute the following commands on primary,\n replica dones't work correctly. (boom!)\n\n =# CREATE UNLOGGED TABLE t (a int);\n =# ALTER TABLE t SET LOGGED;\n\n\nThe following fixes are done in the attched v8.\n\n- Rebased. Referring to Jakub and Justin's work, I replaced direct\n access to ->rd_smgr with RelationGetSmgr() and removed calls to\n RelationOpenSmgr(). I still separate the \"ALTER TABLE ALL IN\n TABLESPACE SET LOGGED/UNLOGGED\" statement part.\n\n- Fixed RelationCreate/DropInitFork's behavior for non-target\n relations. (From Jakub's work).\n\n- Fixed wording of some comments.\n\n- As revisited, I found a bug around recovery. If the logged-ness of a\n relation gets flipped repeatedly in a transaction, duplicate\n pending-delete entries are accumulated during recovery and work in a\n wrong way. 
sgmr_redo now adds up to one entry for a action.\n\n- The issue 4 above is not fixed (yet).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 20 Dec 2021 15:28:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Mon, 20 Dec 2021 15:28:23 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> \n> 4. Including the reasons above, this is not fully functionally.\n> For example, if we execute the following commands on primary,\n> replica dones't work correctly. (boom!)\n> \n> =# CREATE UNLOGGED TABLE t (a int);\n> =# ALTER TABLE t SET LOGGED;\n\n> - The issue 4 above is not fixed (yet).\n\nNot only for the case, RelationChangePersistence needs to send a\ntruncate record before FPIs. If primary crashes amid of the\noperation, the table content will be vanish with the persistence\nchange. That is the correct behavior.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 20 Dec 2021 16:53:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hi Kyotaro, I'm glad you are still into this\n\n> I didn't register for some reasons.\n\nRight now in v8 there's a typo in ./src/backend/catalog/storage.c :\n\nstorage.c: In function 'RelationDropInitFork':\nstorage.c:385:44: error: expected statement before ')' token\n pending->unlink_forknum != INIT_FORKNUM)) <-- here, one ) too much\n\n> 1. I'm not sure that we want to have the new mark files.\n\nI can't help with such design decision, but if there are doubts maybe then add checking return codes around:\na) pg_fsync() and fsync_parent_path() (??) inside mdcreatemark()\nb) mdunlinkmark() inside mdunlinkmark()\nand PANIC if something goes wrong?\n\n> 2. Aside of possible bugs, I'm not confident that the crash-safety of\n> this patch is actually water-tight. At least we need tests for some\n> failure cases.\n>\n> 3. As mentioned in transam/README, failure in removing smgr mark files\n> leads to immediate shut down. I'm not sure this behavior is acceptable.\n\nDoesn't it happen for most of the stuff already? There's even data_sync_retry GUC.\n\n> 4. Including the reasons above, this is not fully functionally.\n> For example, if we execute the following commands on primary,\n> replica dones't work correctly. (boom!)\n> \n> =# CREATE UNLOGGED TABLE t (a int);\n> =# ALTER TABLE t SET LOGGED;\n> \n> \n> The following fixes are done in the attched v8.\n> \n> - Rebased. Referring to Jakub and Justin's work, I replaced direct\n> access to ->rd_smgr with RelationGetSmgr() and removed calls to\n> RelationOpenSmgr(). I still separate the \"ALTER TABLE ALL IN\n> TABLESPACE SET LOGGED/UNLOGGED\" statement part.\n> \n> - Fixed RelationCreate/DropInitFork's behavior for non-target\n> relations. (From Jakub's work).\n> \n> - Fixed wording of some comments.\n> \n> - As revisited, I found a bug around recovery. 
If the logged-ness of a\n> relation gets flipped repeatedly in a transaction, duplicate\n> pending-delete entries are accumulated during recovery and work in a\n> wrong way. sgmr_redo now adds up to one entry for a action.\n> \n> - The issue 4 above is not fixed (yet).\n\nThanks again, If you have any list of crush tests ideas maybe I'll have some minutes \nto try to figure them out. Is there is any goto list of stuff to be checked to add confidence\nto this patch (as per point #2) ?\n\nBTW fast feedback regarding that ALTER patch (there were 4 unlogged tables):\n# ALTER TABLE ALL IN TABLESPACE tbs1 set logged;\nWARNING: unrecognized node type: 349\n\n-J.\n\n\n",
"msg_date": "Mon, 20 Dec 2021 07:59:29 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: In-placre persistance change of a relation"
},
{
"msg_contents": "At Mon, 20 Dec 2021 07:59:29 +0000, Jakub Wartak <Jakub.Wartak@tomtom.com> wrote in \n> Hi Kyotaro, I'm glad you are still into this\n> \n> > I didn't register for some reasons.\n> \n> Right now in v8 there's a typo in ./src/backend/catalog/storage.c :\n> \n> storage.c: In function 'RelationDropInitFork':\n> storage.c:385:44: error: expected statement before ')' token\n> pending->unlink_forknum != INIT_FORKNUM)) <-- here, one ) too much\n\nYeah, I thought that I had removed it. v9 patch I believe is correct.\n\n> > 1. I'm not sure that we want to have the new mark files.\n> \n> I can't help with such design decision, but if there are doubts maybe then add checking return codes around:\n> a) pg_fsync() and fsync_parent_path() (??) inside mdcreatemark()\n> b) mdunlinkmark() inside mdunlinkmark()\n> and PANIC if something goes wrong?\n\nThe point is it is worth the complexity it adds. Since the mark file\ncan resolve another existing (but I don't recall in detail) issue and\nthis patchset actually fixes it, it can be said to have a certain\nextent of persuasiveness. But that doesn't change the fact that it's\nadditional complexity.\n\n> > 2. Aside of possible bugs, I'm not confident that the crash-safety of\n> > this patch is actually water-tight. At least we need tests for some\n> > failure cases.\n> >\n> > 3. As mentioned in transam/README, failure in removing smgr mark files\n> > leads to immediate shut down. I'm not sure this behavior is acceptable.\n> \n> Doesn't it happen for most of the stuff already? There's even data_sync_retry GUC.\n\nHmm. Yes, actually it is \"as water-tight as possible\". I just want\nothers' eyes on that perspective. CF could be the entry point of\nothers but I'm a bit hesitent to add a new entry..\n\n> > 4. Including the reasons above, this is not fully functionally.\n> > For example, if we execute the following commands on primary,\n> > replica dones't work correctly. 
(boom!)\n> > \n> > =# CREATE UNLOGGED TABLE t (a int);\n> > =# ALTER TABLE t SET LOGGED;\n> > \n> > \n> > The following fixes are done in the attched v8.\n> > \n> > - Rebased. Referring to Jakub and Justin's work, I replaced direct\n> > access to ->rd_smgr with RelationGetSmgr() and removed calls to\n> > RelationOpenSmgr(). I still separate the \"ALTER TABLE ALL IN\n> > TABLESPACE SET LOGGED/UNLOGGED\" statement part.\n> > \n> > - Fixed RelationCreate/DropInitFork's behavior for non-target\n> > relations. (From Jakub's work).\n> > \n> > - Fixed wording of some comments.\n> > \n> > - As revisited, I found a bug around recovery. If the logged-ness of a\n> > relation gets flipped repeatedly in a transaction, duplicate\n> > pending-delete entries are accumulated during recovery and work in a\n> > wrong way. sgmr_redo now adds up to one entry for a action.\n> > \n> > - The issue 4 above is not fixed (yet).\n> \n> Thanks again, If you have any list of crush tests ideas maybe I'll have some minutes \n> to try to figure them out. Is there is any goto list of stuff to be checked to add confidence\n> to this patch (as per point #2) ?\n\nJust causing a crash (kill -9) after executing problem-prone command\nsequence, then seeing recovery works well would sufficient.\n\nFor example:\n\ncreate unlogged table; begin; insert ..; alter table set logged;\n<crash>. Recovery works.\n\n\"create logged; begin; {alter unlogged; alter logged;} * 1000; alter\nlogged; commit/abort\" doesn't pollute pgdata.\n\n\n> BTW fast feedback regarding that ALTER patch (there were 4 unlogged tables):\n> # ALTER TABLE ALL IN TABLESPACE tbs1 set logged;\n> WARNING: unrecognized node type: 349\n\nlol I met a server crash. Will fix. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 20 Dec 2021 17:39:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Mon, 20 Dec 2021 17:39:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 20 Dec 2021 07:59:29 +0000, Jakub Wartak <Jakub.Wartak@tomtom.com> wrote in \n> > BTW fast feedback regarding that ALTER patch (there were 4 unlogged tables):\n> > # ALTER TABLE ALL IN TABLESPACE tbs1 set logged;\n> > WARNING: unrecognized node type: 349\n> \n> lol I met a server crash. Will fix. Thanks!\n\nThat crash vanished after a recompilation for me and I don't see that\nerror. On my dev env node# 349 is T_ALterTableSetLoggedAllStmt, which\n0002 adds. So perhaps make clean/make all would fix that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 20 Dec 2021 17:52:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hi Kyotaro,\r\n\r\n> At Mon, 20 Dec 2021 17:39:27 +0900 (JST), Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote in\r\n> > At Mon, 20 Dec 2021 07:59:29 +0000, Jakub Wartak\r\n> > <Jakub.Wartak@tomtom.com> wrote in\r\n> > > BTW fast feedback regarding that ALTER patch (there were 4 unlogged\r\n> tables):\r\n> > > # ALTER TABLE ALL IN TABLESPACE tbs1 set logged;\r\n> > > WARNING: unrecognized node type: 349\r\n> >\r\n> > lol I met a server crash. Will fix. Thanks!\r\n> \r\n> That crash vanished after a recompilation for me and I don't see that error. On\r\n> my dev env node# 349 is T_ALterTableSetLoggedAllStmt, which\r\n> 0002 adds. So perhaps make clean/make all would fix that.\r\n\r\nThe fastest I could - I've repeated the whole cycle about that one with fresh v9 (make clean, configure, make install, fresh initdb) and I've found two problems:\r\n\r\n1) check-worlds seems OK but make -C src/test/recovery check shows a couple of failing tests here locally and in https://cirrus-ci.com/task/4699985735319552?logs=test#L807 :\r\nt/009_twophase.pl (Wstat: 256 Tests: 24 Failed: 1)\r\n Failed test: 21\r\n Non-zero exit status: 1\r\nt/014_unlogged_reinit.pl (Wstat: 512 Tests: 12 Failed: 2)\r\n Failed tests: 9-10\r\n Non-zero exit status: 2\r\nt/018_wal_optimize.pl (Wstat: 7424 Tests: 0 Failed: 0)\r\n Non-zero exit status: 29\r\n Parse errors: Bad plan. You planned 38 tests but ran 0.\r\nt/022_crash_temp_files.pl (Wstat: 7424 Tests: 6 Failed: 0)\r\n Non-zero exit status: 29\r\n Parse errors: Bad plan. 
You planned 9 tests but ran 6.\r\n\r\n018 made no sense, I've tried to take a quick look with wal_level=minimal why it is failing , it is mystery to me as the sequence seems to be pretty basic but the outcome is not:\r\n~> cat repro.sql\r\ncreate tablespace tbs1 location '/tbs1';\r\nCREATE TABLE moved (id int);\r\nINSERT INTO moved VALUES (1);\r\nBEGIN;\r\nALTER TABLE moved SET TABLESPACE tbs1;\r\nCREATE TABLE originated (id int);\r\nINSERT INTO originated VALUES (1);\r\nCREATE UNIQUE INDEX ON originated(id) TABLESPACE tbs1;\r\nCOMMIT;\r\n\r\n~> psql -f repro.sql z3; sleep 1; /usr/pgsql-15/bin/pg_ctl -D /var/lib/pgsql/15/data -l logfile -m immediate stop\r\nCREATE TABLESPACE\r\nCREATE TABLE\r\nINSERT 0 1\r\nBEGIN\r\nALTER TABLE\r\nCREATE TABLE\r\nINSERT 0 1\r\nCREATE INDEX\r\nCOMMIT\r\nwaiting for server to shut down.... done\r\nserver stopped\r\n~> /usr/pgsql-15/bin/pg_ctl -D /var/lib/pgsql/15/data -l logfile start\r\nwaiting for server to start.... done\r\nserver started\r\nz3# select * from moved;\r\nERROR: could not open file \"pg_tblspc/32834/PG_15_202112131/32833/32838\": No such file or directory\r\nz3=# select * from originated;\r\nERROR: could not open file \"base/32833/32839\": No such file or directory\r\nz3=# \\dt+\r\n List of relations\r\n Schema | Name | Type | Owner | Persistence | Size | Description\r\n--------+------------+-------+----------+-------------+---------+-------------\r\n public | moved | table | postgres | permanent | 0 bytes |\r\n public | originated | table | postgres | permanent | 0 bytes |\r\n\r\nThis happens even without placing on tablespace at all {for originated table , but no for moved on}, some major mishap is there (commit should guarantee correctness) or I'm tired and having sloppy fingers.\r\n\r\n2) minor one testcase, still something is odd.\r\n\r\ndrop tablespace tbs1;\r\ncreate tablespace tbs1 location '/tbs1';\r\nCREATE UNLOGGED TABLE t4 (a int) tablespace tbs1;\r\nCREATE UNLOGGED TABLE t5 (a int) tablespace 
tbs1;\r\nCREATE UNLOGGED TABLE t6 (a int) tablespace tbs1;\r\nCREATE TABLE t7 (a int) tablespace tbs1;\r\ninsert into t7 values (1);\r\ninsert into t5 values (1);\r\ninsert into t6 values (1);\r\n\\dt+\r\n List of relations\r\n Schema | Name | Type | Owner | Persistence | Size | Description\r\n--------+------+-------+----------+-------------+------------+-------------\r\n public | t4 | table | postgres | unlogged | 0 bytes |\r\n public | t5 | table | postgres | unlogged | 8192 bytes |\r\n public | t6 | table | postgres | unlogged | 8192 bytes |\r\n public | t7 | table | postgres | permanent | 8192 bytes |\r\n(4 rows)\r\n\r\nALTER TABLE ALL IN TABLESPACE tbs1 set logged; \r\n==> STILL WARNING: unrecognized node type: 349\r\n\\dt+\r\n List of relations\r\n Schema | Name | Type | Owner | Persistence | Size | Description\r\n--------+------+-------+----------+-------------+------------+-------------\r\n public | t4 | table | postgres | permanent | 0 bytes |\r\n public | t5 | table | postgres | permanent | 8192 bytes |\r\n public | t6 | table | postgres | permanent | 8192 bytes |\r\n public | t7 | table | postgres | permanent | 8192 bytes |\r\n\r\nSo it did rewrite however this warning seems to be unfixed. I've tested on e2c52beecdea152ca680a22ef35c6a7da55aa30f.\r\n\r\n-J.\r\n",
"msg_date": "Mon, 20 Dec 2021 13:38:35 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: In-placre persistance change of a relation"
},
{
"msg_contents": "Ugh! I completely forgot about TAP tests.. Thanks for the testing and\nsorry for the bugs.\n\nThis is a bit big change so I need a bit of time before posting the\nnext version.\n\n\nAt Mon, 20 Dec 2021 13:38:35 +0000, Jakub Wartak <Jakub.Wartak@tomtom.com> wrote in \n> 1) check-worlds seems OK but make -C src/test/recovery check shows a couple of failing tests here locally and in https://cirrus-ci.com/task/4699985735319552?logs=test#L807 :\n> t/009_twophase.pl (Wstat: 256 Tests: 24 Failed: 1)\n> Failed test: 21\n> Non-zero exit status: 1\n\nPREPARE TRANSACTION requires uncommited file creation to be\ncommitted. Concretely we need to remove the \"mark\" files for the\nin-transaction created relation file during PREPARE TRANSACTION.\n\npendingSync is not a parallel mechanism with pendingDeletes so we\ncannot move mark deletion to pendingSync.\n\nAfter all I decided to add a separate list pendingCleanups for pending\nnon-deletion tasks separately from pendingDeletes and execute it\nbefore insering the commit record. Not only the above but also all of\nthe following failures vanished by the change.\n\n> t/014_unlogged_reinit.pl (Wstat: 512 Tests: 12 Failed: 2)\n> Failed tests: 9-10\n> Non-zero exit status: 2\n> t/018_wal_optimize.pl (Wstat: 7424 Tests: 0 Failed: 0)\n> Non-zero exit status: 29\n> Parse errors: Bad plan. You planned 38 tests but ran 0.\n> t/022_crash_temp_files.pl (Wstat: 7424 Tests: 6 Failed: 0)\n> Non-zero exit status: 29\n> Parse errors: Bad plan. 
You planned 9 tests but ran 6.\n\n\n> 018 made no sense, I've tried to take a quick look with wal_level=minimal why it is failing , it is mystery to me as the sequence seems to be pretty basic but the outcome is not:\n\nI think this shares the same cause.\n\n> ~> cat repro.sql\n> create tablespace tbs1 location '/tbs1';\n> CREATE TABLE moved (id int);\n> INSERT INTO moved VALUES (1);\n> BEGIN;\n> ALTER TABLE moved SET TABLESPACE tbs1;\n> CREATE TABLE originated (id int);\n> INSERT INTO originated VALUES (1);\n> CREATE UNIQUE INDEX ON originated(id) TABLESPACE tbs1;\n> COMMIT;\n..\n> ERROR: could not open file \"base/32833/32839\": No such file or directory\n\n\n> z3=# \\dt+\n> List of relations\n> Schema | Name | Type | Owner | Persistence | Size | Description\n> --------+------------+-------+----------+-------------+---------+-------------\n> public | moved | table | postgres | permanent | 0 bytes |\n> public | originated | table | postgres | permanent | 0 bytes |\n> \n> This happens even without placing on tablespace at all {for originated table , but no for moved on}, some major mishap is there (commit should guarantee correctness) or I'm tired and having sloppy fingers.\n> \n> 2) minor one testcase, still something is odd.\n> \n> drop tablespace tbs1;\n> create tablespace tbs1 location '/tbs1';\n> CREATE UNLOGGED TABLE t4 (a int) tablespace tbs1;\n> CREATE UNLOGGED TABLE t5 (a int) tablespace tbs1;\n> CREATE UNLOGGED TABLE t6 (a int) tablespace tbs1;\n> CREATE TABLE t7 (a int) tablespace tbs1;\n> insert into t7 values (1);\n> insert into t5 values (1);\n> insert into t6 values (1);\n> \\dt+\n> List of relations\n> Schema | Name | Type | Owner | Persistence | Size | Description\n> --------+------+-------+----------+-------------+------------+-------------\n> public | t4 | table | postgres | unlogged | 0 bytes |\n> public | t5 | table | postgres | unlogged | 8192 bytes |\n> public | t6 | table | postgres | unlogged | 8192 bytes |\n> public | t7 | table | postgres 
| permanent | 8192 bytes |\n> (4 rows)\n> \n> ALTER TABLE ALL IN TABLESPACE tbs1 set logged; \n> ==> STILL WARNING: unrecognized node type: 349\n> \\dt+\n> List of relations\n> Schema | Name | Type | Owner | Persistence | Size | Description\n> --------+------+-------+----------+-------------+------------+-------------\n> public | t4 | table | postgres | permanent | 0 bytes |\n> public | t5 | table | postgres | permanent | 8192 bytes |\n> public | t6 | table | postgres | permanent | 8192 bytes |\n> public | t7 | table | postgres | permanent | 8192 bytes |\n> \n> So it did rewrite however this warning seems to be unfixed. I've tested on e2c52beecdea152ca680a22ef35c6a7da55aa30f.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 21 Dec 2021 17:13:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Tue, 21 Dec 2021 17:13:21 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Ugh! I completely forgot about TAP tests.. Thanks for the testing and\n> sorry for the bugs.\n> \n> This is a bit big change so I need a bit of time before posting the\n> next version.\n\nI took a bit too long detour but the patch gets to pass make-world for\nme.\n\nIn this version:\n\n- When relation persistence is changed from logged to unlogged, buffer\n persistence is flipped then an init-fork is created along with a mark\n file for the fork (RelationCreateInitFork). The mark file is removed\n at commit but left alone after a crash before commit. At the next\n startup, ResetUnloggedRelationsInDbspaceDir() removes the init fork\n file if it finds the mark file corresponding to the file.\n\n- When relation persistence is changed from unlogged to logged, buffer\n persistence is flipped then the exisging init-fork is marked to be\n dropped at commit (RelationDropInitFork). Finally the whole content\n is WAL-logged in the page-wise manner (RelationChangePersistence),\n\n- The two operations above are repeatable within a transaction and\n commit makes the last operation persist and rollback make the all\n operations abandoned.\n\n- Storage files are created along with a \"mark\" file for the\n relfilenode. It behaves the same way to the above except the mark\n files corresponds to the whole relfilenode.\n\n- The at-commit operations this patch adds require to be WAL-logged so\n they don't fit pendingDeletes list, which is executed after commit. I\n added a new pending-work list pendingCleanups that is executed just\n after pendingSyncs. (new in this version)\n\n \nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 21 Dec 2021 20:04:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hi Kyotaro,\n\n> I took a bit too long detour but the patch gets to pass make-world for me.\n\nGood news, v10 passes all the tests for me (including TAP recover ones). There's major problem I think:\n\ndrop table t6;\ncreate unlogged table t6 (id bigint, t text);\ncreate sequence s1;\ninsert into t6 select nextval('s1'), repeat('A', 1000) from generate_series(1, 100);\nalter table t6 set logged;\nselect pg_sleep(1);\n<--optional checkpoint, more on this later.\n/usr/pgsql-15/bin/pg_ctl -D /var/lib/pgsql/15/data -l logfile -m immediate stop\n/usr/pgsql-15/bin/pg_ctl -D /var/lib/pgsql/15/data -l logfile start\nselect count(*) from t6; -- shows 0 rows\n\nBut If I perform checkpoint before crash, data is there.. apparently the missing steps done by checkpointer \nseem to help. If checkpoint is not done, then some peeking reveals that upon startup there is truncation (?!):\n\n$ /usr/pgsql-15/bin/pg_ctl -D /var/lib/pgsql/15/data -l logfile -m immediate stop\n$ find /var/lib/pgsql/15/data/ -name '73741*' -ls\n112723206 120 -rw------- 1 postgres postgres 122880 Dec 21 12:42 /var/lib/pgsql/15/data/base/73740/73741\n112723202 24 -rw------- 1 postgres postgres 24576 Dec 21 12:42 /var/lib/pgsql/15/data/base/73740/73741_fsm\n$ /usr/pgsql-15/bin/pg_ctl -D /var/lib/pgsql/15/data -l logfile start\nwaiting for server to start.... done\nserver started\n$ find /var/lib/pgsql/15/data/ -name '73741*' -ls\n112723206 0 -rw------- 1 postgres postgres 0 Dec 21 12:42 /var/lib/pgsql/15/data/base/73740/73741\n112723202 16 -rw------- 1 postgres postgres 16384 Dec 21 12:42 /var/lib/pgsql/15/data/base/73740/73741_fsm\n\nSo what's suspicious is that 122880 -> 0 file size truncation. 
I've investigated the WAL and it seems to contain TRUNCATE records\nafter the logged FPI images, so when crash recovery kicks in it probably clears this table (while it shouldn't).\n\nHowever, if I perform a CHECKPOINT just before the crash, the WAL stream contains just RUNNING_XACTS and CHECKPOINT_ONLINE\nredo records, which probably prevents the truncation. I'm a newbie here, so please take this theory with a grain of salt; it could be\nsomething completely different.\n\n-J.\n\n\n\n",
"msg_date": "Tue, 21 Dec 2021 13:07:28 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: In-placre persistance change of a relation"
},
{
"msg_contents": "Hello, Jakub.\n\nAt Tue, 21 Dec 2021 13:07:28 +0000, Jakub Wartak <Jakub.Wartak@tomtom.com> wrote in \n> So what's suspicious is that 122880 -> 0 file size truncation. I've investigated WAL and it seems to contain TRUNCATE records\n> after logged FPI images, so when the crash recovery would kick in it probably clears this table (while it shouldn't).\n\nDarn.. It is too silly that I wrongly issued truncate records for the\ntarget relation of the function (rel) instaed of the relation on which\nwe're currently operating at that time (r).\n\n> However if I perform CHECKPOINT just before crash the WAL stream contains just RUNNING_XACTS and CHECKPOINT_ONLINE \n> redo records, this probably prevents truncating. I'm newbie here so please take this theory with grain of salt, it can be \n> something completely different.\n\nIt is because the WAL records are inconsistent with the on-disk state.\nAfter a crash before a checkpoint after the SET LOGGED, recovery ends with\nrecoverying the broken WAL records, but after that the on-disk state\nis persisted and the broken WAL records are not replayed.\n\nThe following fix works.\n\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -5478,7 +5478,7 @@ RelationChangePersistence(AlteredTableInfo *tab, char persistence,\n xl_smgr_truncate xlrec;\n \n xlrec.blkno = 0;\n- xlrec.rnode = rel->rd_node;\n+ xlrec.rnode = r->rd_node;\n xlrec.flags = SMGR_TRUNCATE_ALL;\n \n\nI made another change in this version. Previously only btree among all\nindex AMs was processed in the in-place manner. In this version we do\nthat all AMs except GiST. Maybe if gistGetFakeLSN behaved the same\nway for permanent and unlogged indexes, we could skip index rebuild in\nexchange of some extra WAL records emitted while it is unlogged.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 22 Dec 2021 15:13:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hi Kyotaro,\n\n> At Tue, 21 Dec 2021 13:07:28 +0000, Jakub Wartak\n> <Jakub.Wartak@tomtom.com> wrote in\n> > So what's suspicious is that 122880 -> 0 file size truncation. I've\n> > investigated WAL and it seems to contain TRUNCATE records after logged\n> FPI images, so when the crash recovery would kick in it probably clears this\n> table (while it shouldn't).\n> \n> Darn.. It is too silly that I wrongly issued truncate records for the target\n> relation of the function (rel) instaed of the relation on which we're currently\n> operating at that time (r).\n> \n> [..]\n> The following fix works.\n\nCool, I have verified basic stuff that was coming to my mind, now even cfbot is happy with v11, You should happy too I hope :)\n\n> I made another change in this version. Previously only btree among all index\n> AMs was processed in the in-place manner. In this version we do that all\n> AMs except GiST. Maybe if gistGetFakeLSN behaved the same way for\n> permanent and unlogged indexes, we could skip index rebuild in exchange of\n> some extra WAL records emitted while it is unlogged.\n\nI think there's slight omission:\n\n-- unlogged table -> logged with GiST:\nDROP TABLE IF EXISTS testcase;\nCREATE UNLOGGED TABLE testcase(geom geometry not null);\nCREATE INDEX idx_testcase_gist ON testcase USING gist(geom);\nINSERT INTO testcase(geom) SELECT ST_Buffer(ST_SetSRID(ST_MakePoint(-1.0, 2.0),4326), 0.0001);\nALTER TABLE testcase SET LOGGED;\n\n-- crashes with:\n(gdb) where\n#0 reindex_index (indexId=indexId@entry=65541, skip_constraint_checks=skip_constraint_checks@entry=true, persistence=persistence@entry=112 'p', params=params@entry=0x0) at index.c:3521\n#1 0x000000000062f494 in RelationChangePersistence (tab=tab@entry=0x1947258, persistence=112 'p', lockmode=lockmode@entry=8) at tablecmds.c:5434\n#2 0x0000000000642819 in ATRewriteTables (context=0x7ffc19c04520, lockmode=<optimized out>, wqueue=0x7ffc19c04388, parsetree=0x1925ec8) at 
tablecmds.c:5644\n[..]\n#10 0x00000000007f078f in exec_simple_query (query_string=0x1925340 \"ALTER TABLE testcase SET LOGGED;\") at postgres.c:1215\n\napparently reindex_index() params cannot be NULL - the same happens with switching persistent \ntable to unlogged one too (with GiST). \n\nI'll also try to give another shot to the patch early next year - as we are starting long Christmas/holiday break here \n- with verifying WAL for GiST and more advanced setup (more crashes, and standby/archiving/barman to see \nhow it's possible to use wal_level=minimal <-> replica transitions). \n\n-J.\n\n\n\n\n\n\n",
"msg_date": "Wed, 22 Dec 2021 08:42:14 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: In-placre persistance change of a relation"
},
{
"msg_contents": "At Wed, 22 Dec 2021 08:42:14 +0000, Jakub Wartak <Jakub.Wartak@tomtom.com> wrote in \n> I think there's slight omission:\n...\n> apparently reindex_index() params cannot be NULL - the same happens with switching persistent\n\nHmm. a3dc926009 has changed the interface. (But the name is also\nchanged after that.)\n\n-reindex_relation(Oid relid, int flags, int options)\n+reindex_relation(Oid relid, int flags, ReindexParams *params)\n\n> I'll also try to give another shot to the patch early next year - as we are starting long Christmas/holiday break here \n> - with verifying WAL for GiST and more advanced setup (more crashes, and standby/archiving/barman to see \n> how it's possible to use wal_level=minimal <-> replica transitions). \n\nThanks. I added TAP test to excecise the in-place persistence change.\n\nhave a nice holiday, Jakub!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 23 Dec 2021 15:01:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Thu, 23 Dec 2021 15:01:41 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I added TAP test to excecise the in-place persistence change.\n\nWe don't need a base table for every index. TAP test revised.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 23 Dec 2021 15:33:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On 2021-12-23 15:33:35 +0900, Kyotaro Horiguchi wrote:\n> At Thu, 23 Dec 2021 15:01:41 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > I added TAP test to excecise the in-place persistence change.\n>\n> We don't need a base table for every index. TAP test revised.\n\nThe tap tests seems to fail on all platforms. See\nhttps://cirrus-ci.com/build/4911549314760704\n\nE.g. the linux failure is\n\n[16:45:15.569]\n[16:45:15.569] # Failed test 'inserted'\n[16:45:15.569] # at t/027_persistence_change.pl line 121.\n[16:45:15.569] # Looks like you failed 1 test of 25.\n[16:45:15.569] [16:45:15] t/027_persistence_change.pl ..........\n[16:45:15.569] Dubious, test returned 1 (wstat 256, 0x100)\n[16:45:15.569] Failed 1/25 subtests\n[16:45:15.569] [16:45:15]\n[16:45:15.569]\n[16:45:15.569] Test Summary Report\n[16:45:15.569] -------------------\n[16:45:15.569] t/027_persistence_change.pl (Wstat: 256 Tests: 25 Failed: 1)\n[16:45:15.569] Failed test: 18\n[16:45:15.569] Non-zero exit status: 1\n[16:45:15.569] Files=27, Tests=315, 220 wallclock secs ( 0.14 usr 0.03 sys + 48.94 cusr 17.13 csys = 66.24 CPU)\n\nhttps://api.cirrus-ci.com/v1/artifact/task/4785083130314752/tap/src/test/recovery/tmp_check/log/regress_log_027_persistence_change\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Jan 2022 16:05:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Tue, 4 Jan 2022 16:05:08 -0800, Andres Freund <andres@anarazel.de> wrote in \n> The tap tests seems to fail on all platforms. See\n> https://cirrus-ci.com/build/4911549314760704\n> \n> E.g. the linux failure is\n> \n> [16:45:15.569]\n> [16:45:15.569] # Failed test 'inserted'\n> [16:45:15.569] # at t/027_persistence_change.pl line 121.\n> [16:45:15.569] # Looks like you failed 1 test of 25.\n> [16:45:15.569] [16:45:15] t/027_persistence_change.pl ..........\n> [16:45:15.569] Dubious, test returned 1 (wstat 256, 0x100)\n> [16:45:15.569] Failed 1/25 subtests\n> [16:45:15.569] [16:45:15]\n> [16:45:15.569]\n> [16:45:15.569] Test Summary Report\n> [16:45:15.569] -------------------\n> [16:45:15.569] t/027_persistence_change.pl (Wstat: 256 Tests: 25 Failed: 1)\n> [16:45:15.569] Failed test: 18\n> [16:45:15.569] Non-zero exit status: 1\n> [16:45:15.569] Files=27, Tests=315, 220 wallclock secs ( 0.14 usr 0.03 sys + 48.94 cusr 17.13 csys = 66.24 CPU)\n> \n> https://api.cirrus-ci.com/v1/artifact/task/4785083130314752/tap/src/test/recovery/tmp_check/log/regress_log_027_persistence_change\n\nThank you very much. It still doesn't fail on my devlopment\nenvironment (CentOS8), but I found a silly bug of the test script.\nI'm still not sure the reason the test item failed but I repost the\nupdated version then watch what the CI says.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 06 Jan 2022 13:30:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hi, \n\nOn January 5, 2022 8:30:17 PM PST, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>At Tue, 4 Jan 2022 16:05:08 -0800, Andres Freund <andres@anarazel.de> wrote in \n>> The tap tests seems to fail on all platforms. See\n>> https://cirrus-ci.com/build/4911549314760704\n>> \n>> E.g. the linux failure is\n>> \n>> [16:45:15.569]\n>> [16:45:15.569] # Failed test 'inserted'\n>> [16:45:15.569] # at t/027_persistence_change.pl line 121.\n>> [16:45:15.569] # Looks like you failed 1 test of 25.\n>> [16:45:15.569] [16:45:15] t/027_persistence_change.pl ..........\n>> [16:45:15.569] Dubious, test returned 1 (wstat 256, 0x100)\n>> [16:45:15.569] Failed 1/25 subtests\n>> [16:45:15.569] [16:45:15]\n>> [16:45:15.569]\n>> [16:45:15.569] Test Summary Report\n>> [16:45:15.569] -------------------\n>> [16:45:15.569] t/027_persistence_change.pl (Wstat: 256 Tests: 25 Failed: 1)\n>> [16:45:15.569] Failed test: 18\n>> [16:45:15.569] Non-zero exit status: 1\n>> [16:45:15.569] Files=27, Tests=315, 220 wallclock secs ( 0.14 usr 0.03 sys + 48.94 cusr 17.13 csys = 66.24 CPU)\n>> \n>> https://api.cirrus-ci.com/v1/artifact/task/4785083130314752/tap/src/test/recovery/tmp_check/log/regress_log_027_persistence_change\n>\n>Thank you very much. It still doesn't fail on my devlopment\n>environment (CentOS8), but I found a silly bug of the test script.\n>I'm still not sure the reason the test item failed but I repost the\n>updated version then watch what the CI says.\n\nFwiw, you can now test the same way as cfbot does with a lower turnaround time, as explained in src/tools/ci/README\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 05 Jan 2022 20:42:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Wed, 05 Jan 2022 20:42:32 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi, \n> \n> On January 5, 2022 8:30:17 PM PST, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >I'm still not sure the reason the test item failed but I repost the\n> >updated version then watch what the CI says.\n> \n> Fwiw, you can now test the same way as cfbot does with a lower turnaround time, as explained in src/tools/ci/README\n\nFantastic! I'll give it a try. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Jan 2022 16:39:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Thu, 06 Jan 2022 16:39:21 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Fantastic! I'll give it a try. Thanks!\n\nI did that and found that the test stumbled on newlines.\nTests succeeded for other than Windows.\n\nWindows version fails for a real known issue.\n\n[7916][postmaster] LOG: received immediate shutdown request\n[7916][postmaster] LOG: database system is shut down\n[6228][postmaster] LOG: starting PostgreSQL 15devel, compiled by Visual C++ build 1929, 64-bit\n[6228][postmaster] LOG: listening on Unix socket \"C:/Users/ContainerAdministrator/AppData/Local/Temp/NcMnt2KTsr/.s.PGSQL.58698\"\n[2948][startup] LOG: database system was interrupted; last known up at 2022-01-07 07:12:14 GMT\n[2948][startup] LOG: database system was not properly shut down; automatic recovery in progress\n[2948][startup] LOG: redo starts at 0/1484280\n[2948][startup] LOG: invalid record length at 0/14A47B8: wanted 24, got 0\n[2948][startup] FATAL: could not remove file \"base/12759/16384.u\": Permission denied\n[6228][postmaster] LOG: startup process (PID 2948) exited with exit code 1\n\n\nMmm.. Someone is still grasping the file after restart?\n\nAnyway, I post the fixed version. This still fails on Windows..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 07 Jan 2022 17:29:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI've retested v15 of the patch with everything that came to my mind. The patch passes all my tests (well, there's this just windows / cfbot issue). Patch looks good to me. I haven't looked in-depth at the code, so patch might still need review.\r\n\r\nFYI, about potential usage of this patch: the most advanced test that I did was continually bouncing wal_level - it works. So chain of :\r\n1. wal_level=replica->minimal\r\n2. alter table set unlogged and load a lot of data, set logged\r\n3. wal_level=minimal->replica\r\n4. barman incremental backup # rsync(1) just backups changed files in steps 2 and 3 (not whole DB)\r\n5. some other (logged) work\r\nThe idea is that when performing mass-alterations to the DB (think nightly ETL/ELT on TB-sized DBs), one could skip backing up most of DB and then just quickly backup only the changed files - during the maintenance window - e.g. thanks to local-rsync barman mode. This is the output of barman show-backups after loading data to unlogged table each such cycle:\r\nmydb 20220110T100236 - Mon Jan 10 10:05:14 2022 - Size: 144.1 GiB - WAL Size: 16.0 KiB\r\nmydb 20220110T094905 - Mon Jan 10 09:50:12 2022 - Size: 128.5 GiB - WAL Size: 80.2 KiB\r\nmydb 20220110T094016 - Mon Jan 10 09:40:17 2022 - Size: 109.1 GiB - WAL Size: 496.3 KiB\r\nAnd dedupe ratio of the last one: Backup size: 144.1 GiB. Actual size on disk: 36.1 GiB (-74.96% deduplication ratio). \r\n\r\nThe only thing I've found out that bouncing wal_level also forces max_wal_senders=X -> 0 -> X which in turn requires dropping replication slot for pg_receievewal (e.g. barman receive-wal --create-slot/--drop-slot/--reset). 
I have tested the restore using barman recover afterwards to backup 20220110T094905 and indeed it worked OK using this patch too.\n\nThe new status of this patch is: Needs review\n",
"msg_date": "Tue, 11 Jan 2022 09:33:55 +0000",
"msg_from": "Jakub Wartak <jakub.wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "I found a bug.\n\nmdmarkexists() didn't close the tentatively opend fd. This is a silent\nleak on Linux and similars and it causes delete failure on Windows.\nIt was the reason of the CI failure.\n\n027_persistence_change.pl uses interactive_psql() that doesn't work on\nthe Windos VM on the CI.\n\nIn this version the following changes have been made in 0001.\n\n- Properly close file descriptor in mdmarkexists.\n\n- Skip some tests when IO::Pty is not available.\n It might be better to separate that part.\n\nLooking again the ALTER TABLE ALL IN TABLESPACE SET LOGGED patch, I\nnoticed that it doesn't implement OWNED BY part and doesn't have test\nand documenttaion (it was PoC). Added all of them to 0002.\n\nAt Tue, 11 Jan 2022 09:33:55 +0000, Jakub Wartak <jakub.wartak@tomtom.com> wrote in \n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n> \n> I've retested v15 of the patch with everything that came to my mind. The patch passes all my tests (well, there's this just windows / cfbot issue). Patch looks good to me. I haven't looked in-depth at the code, so patch might still need review.\n\nThanks for checking.\n\n> FYI, about potential usage of this patch: the most advanced test that I did was continually bouncing wal_level - it works. So chain of :\n> 1. wal_level=replica->minimal\n> 2. alter table set unlogged and load a lot of data, set logged\n> 3. wal_level=minimal->replica\n> 4. barman incremental backup # rsync(1) just backups changed files in steps 2 and 3 (not whole DB)\n> 5. some other (logged) work\n> The idea is that when performing mass-alterations to the DB (think nightly ETL/ELT on TB-sized DBs), one could skip backing up most of DB and then just quickly backup only the changed files - during the maintenance window - e.g. thanks to local-rsync barman mode. 
This is the output of barman show-backups after loading data to unlogged table each such cycle:\n> mydb 20220110T100236 - Mon Jan 10 10:05:14 2022 - Size: 144.1 GiB - WAL Size: 16.0 KiB\n> mydb 20220110T094905 - Mon Jan 10 09:50:12 2022 - Size: 128.5 GiB - WAL Size: 80.2 KiB\n> mydb 20220110T094016 - Mon Jan 10 09:40:17 2022 - Size: 109.1 GiB - WAL Size: 496.3 KiB\n> And dedupe ratio of the last one: Backup size: 144.1 GiB. Actual size on disk: 36.1 GiB (-74.96% deduplication ratio). \n\nAh, The patch skips duping relation files. This is advantageous that\nthat not only eliminates the I/O activities the duping causes but also\nreduce the size of incremental backup. I didn't noticed only the\nlatter advantage.\n\n> The only thing I've found out that bouncing wal_level also forces max_wal_senders=X -> 0 -> X which in turn requires dropping replication slot for pg_receievewal (e.g. barman receive-wal --create-slot/--drop-slot/--reset). I have tested the restore using barman recover afterwards to backup 20220110T094905 and indeed it worked OK using this patch too.\n\nYear, it is irrelevant to this patch but I'm annoyed by the\nrestriction. I think it would be okay that max_wal_senders is\nforcibly set to 0 while wal_level=minimal..\n\n> The new status of this patch is: Needs review\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 14 Jan 2022 11:43:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jan 14, 2022 at 11:43:10AM +0900, Kyotaro Horiguchi wrote:\n> I found a bug.\n> \n> mdmarkexists() didn't close the tentatively opend fd. This is a silent\n> leak on Linux and similars and it causes delete failure on Windows.\n> It was the reason of the CI failure.\n> \n> 027_persistence_change.pl uses interactive_psql() that doesn't work on\n> the Windos VM on the CI.\n> \n> In this version the following changes have been made in 0001.\n> \n> - Properly close file descriptor in mdmarkexists.\n> \n> - Skip some tests when IO::Pty is not available.\n> It might be better to separate that part.\n> \n> Looking again the ALTER TABLE ALL IN TABLESPACE SET LOGGED patch, I\n> noticed that it doesn't implement OWNED BY part and doesn't have test\n> and documenttaion (it was PoC). Added all of them to 0002.\n\nThe cfbot is failing on all OS with this version of the patch. Apparently\nv16-0002 introduces some usage of \"testtablespace\" client-side variable that's\nnever defined, e.g.\nhttps://api.cirrus-ci.com/v1/artifact/task/4670105480069120/regress_diffs/src/bin/pg_upgrade/tmp_check/regress/regression.diffs:\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/tablespace.out /tmp/cirrus-ci-build/src/bin/pg_upgrade/tmp_check/regress/results/tablespace.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/tablespace.out\t2022-01-18 04:26:38.744707547 +0000\n+++ /tmp/cirrus-ci-build/src/bin/pg_upgrade/tmp_check/regress/results/tablespace.out\t2022-01-18 04:30:37.557078083 +0000\n@@ -948,76 +948,71 @@\n CREATE SCHEMA testschema;\n GRANT CREATE ON SCHEMA testschema TO regress_tablespace_user1;\n CREATE TABLESPACE regress_tablespace LOCATION :'testtablespace';\n+ERROR: syntax error at or near \":\"\n+LINE 1: CREATE TABLESPACE regress_tablespace LOCATION :'testtablespa...\n\n\n",
"msg_date": "Tue, 18 Jan 2022 14:26:31 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> The cfbot is failing on all OS with this version of the patch. Apparently\n> v16-0002 introduces some usage of \"testtablespace\" client-side variable that's\n> never defined, e.g.\n\nThat test infrastructure got rearranged very recently, see d6d317dbf.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 10:37:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Tue, 18 Jan 2022 10:37:53 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > The cfbot is failing on all OS with this version of the patch. Apparently\n> > v16-0002 introduces some usage of \"testtablespace\" client-side variable that's\n> > never defined, e.g.\n> \n> That test infrastructure got rearranged very recently, see d6d317dbf.\n\nThanks to both. It seems that even though I know about the change, I\nforgot to make my repo up to date before checking.\n\nThe v17 attached changes only the following point (as well as\ncorresponding \"expected\" file).\n\n-+CREATE TABLESPACE regress_tablespace LOCATION :'testtablespace';\n++CREATE TABLESPACE regress_tablespace LOCATION '';\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 19 Jan 2022 09:39:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Rebased on a recent xlog refactoring.\n\nNo functional changes have been made.\n\n- Removed the default case in smgr_desc since it seems to me we don't\n   assume out-of-definition values in xlog records elsewhere.\n\n- Simplified some code added to storage.c.\n\n- Fixed copy-pasted comments in extractPageInfo().\n\n- The previous version of smgrDoPendingCleanups() assumed that init\n  forks are not loaded into shared buffers, but that is wrong\n  (SetRelationBuffersPersistence assumes the opposite). Thus we need\n  to drop buffers before unlinking an init fork. But that is already\n  guaranteed by the logic, so I rewrote the comment for PCOP_UNLINK_FORK.\n\n > * Unlink the fork file. Currently we use this only for\n > * init forks and we're sure that the init fork is not\n > * loaded on shared buffers. For RelationDropInitFork\n > * case, the function dropped that buffers. For\n > * RelationCreateInitFork case, PCOP_SET_PERSISTENCE(true)\n > * is set and the buffers have been dropped just before.\n \n This logic has the same critical window as\n DropRelFilenodeBuffers. That is, if file deletion fails after\n successful buffer dropping, theoretically the file content of the\n init fork may be stale. However, AFAICS the init fork is a write-once\n fork, so I don't think that actually matters.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 01 Mar 2022 14:14:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Tue, 01 Mar 2022 14:14:13 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> - Removed the default case in smgr_desc since it seems to me we don't\n> assume out-of-definition values in xlog records elsewhere.\n\nStupid. The compiler on the CI environment complains about an\nuninitialized variable even though it (presumably) knows that all\npaths of the switch statement set the variable. Added a default value\nto try to silence the compiler.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 01 Mar 2022 17:50:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On Tue, Mar 01, 2022 at 02:14:13PM +0900, Kyotaro Horiguchi wrote:\n> Rebased on a recent xlog refactoring.\n\nIt'll come as no surprise that this needs to be rebased again.\nAt least a few typos I reported in January aren't fixed.\nSet to \"waiting\".\n\n\n",
"msg_date": "Wed, 30 Mar 2022 08:44:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Thanks! Version 20 is attached.\n\n\nAt Wed, 30 Mar 2022 08:44:02 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Tue, Mar 01, 2022 at 02:14:13PM +0900, Kyotaro Horiguchi wrote:\n> > Rebased on a recent xlog refactoring.\n> \n> It'll come as no surprise that this neds to be rebased again.\n> At least a few typos I reported in January aren't fixed.\n> Set to \"waiting\".\n\nOh, I'm sorry for overlooking it. It somehow didn't show up on my\nmailer.\n\n> I started looking at this and reviewed docs and comments again.\n> \n> > +typedef struct PendingCleanup\n> > +{\n> > +\tRelFileNode relnode;\t\t/* relation that may need to be deleted */\n> > +\tint\t\t\top;\t\t\t\t/* operation mask */\n> > +\tbool\t\tbufpersistence;\t/* buffer persistence to set */\n> > +\tint\t\t\tunlink_forknum;\t/* forknum to unlink */\n> \n> This can be of data type \"ForkNumber\"\n\nRight. Fixed.\n\n> > +\t * We are going to create an init fork. If server crashes before the\n> > +\t * current transaction ends the init fork left alone corrupts data while\n> > +\t * recovery. The mark file works as the sentinel to identify that\n> > +\t * situation.\n> \n> s/while/during/\n\nThis was in v17, but dissapeared in v18.\n\n> > +\t * index-init fork needs further initialization. ambuildempty shoud do\n> \n> should (I reported this before)\n> \n> > +\tif (inxact_created)\n> > +\t{\n> > +\t\tSMgrRelation srel = smgropen(rnode, InvalidBackendId);\n> > +\n> > +\t\t/*\n> > +\t\t * INIT forks never be loaded to shared buffer so no point in dropping\n> \n> \"are never loaded\"\n\nIf was fixed in v18.\n\n> > +\telog(DEBUG1, \"perform in-place persistnce change\");\n> \n> persistence (I reported this before)\n\nSorry. 
Fixed.\n\n> > +\t\t/*\n> > +\t\t * While wal_level >= replica, switching to LOGGED requires the\n> > +\t\t * relation content to be WAL-logged to recover the table.\n> > +\t\t * We don't emit this fhile wal_level = minimal.\n> \n> while (or \"if\")\n\nThere are \"While\" and \"fhile\". I changed the latter to \"if\".\n\n> > +\t\t\t * The relation is persistent and stays remain persistent. Don't\n> > +\t\t\t * drop the buffers for this relation.\n> \n> \"stays remain\" is redundant (I reported this before)\n\nThanks. I changed it to \"stays persistent\".\n\n> > +\t\t\tif (unlink(rm_path) < 0)\n> > +\t\t\t\tereport(ERROR,\n> > +\t\t\t\t\t\t(errcode_for_file_access(),\n> > +\t\t\t\t\t\t errmsg(\"could not remove file \\\"%s\\\": %m\",\n> > +\t\t\t\t\t\t\t\trm_path)));\n> \n> The parens around errcode are unnecessary since last year.\n> I suggest to avoid using them here and elsewhere.\n\nIt is just moved from elsewhere without editing, but of course I can\ndo that. I didn't know about that change of ereport and not found the\ncorresponding commit, but I found that Tom mentioned that change.\n\nhttps://www.postgresql.org/message-id/flat/5063.1584641224%40sss.pgh.pa.us#63e611c30800133bbddb48de857668e8\n> Now that we can rely on having varargs macros, I think we could\n> stop requiring the extra level of parentheses, ie instead of\n...\n> ereport(ERROR,\n> errcode(ERRCODE_DIVISION_BY_ZERO),\n> errmsg(\"division by zero\"));\n> \n> (The old syntax had better still work, of course. I'm not advocating\n> running around and changing existing calls.)\n\nI changed all ereport calls added by this patch to this style.\n\n> > +\t\t\t * And we may have SMGR_MARK_UNCOMMITTED file. Remove it if the\n> > +\t\t\t * fork files has been successfully removed. 
It's ok if the file\n> \n> file\n\nFixed.\n\n> > + <para>\n> > + All tables in the current database in a tablespace can be changed by using\n> \n> given tablespace\n\nI did /database in a tablespace/database in the given tablespace/. Is\nit right?\n\n> > + the <literal>ALL IN TABLESPACE</literal> form, which will lock all tables\n> \n> which will first lock\n>\n> > + to be changed first and then change each one. This form also supports\n> \n> remove \"first\" here\n\nThis is almost a dead copy of the description of SET TABLESPACE. This\nchange makes the two almost the same description vary slightly in that\nwordings. Anyway I did that as suggested only for the part this patch\nadds in this version.\n\n> > + <literal>OWNED BY</literal>, which will only change tables owned by the\n> > + roles specified. If the <literal>NOWAIT</literal> option is specified\n> \n> specified roles.\n> is specified, (comma)\n\nThis is the same as above. I did that but it makes the description\ndiffer from another almost-the-same description.\n\n> > + then the command will fail if it is unable to acquire all of the locks\n> \n> if it is unable to immediately acquire\n>\n> > + required immediately. The <literal>information_schema</literal>\n> \n> remove immediately\n\nDitto.\n\n> > + relations are not considered part of the system catalogs and will be\n> \n> I think you need to first say that \"relations in the pg_catalog schema cannot\n> be changed\".\n\nMmm. I don't agree on this. Aren't such \"exceptions\"-ish descriptions\nusually placed after the descriptions of how the feature works? This\nis also the same structure with SET TABLESPACE.\n\n> in patch 2/2:\n> typo: persistene\n\nHmm. Bad. I checked the spellings of the whole patches and found some\ntypos.\n\n+\t\t\t\t\t * The crashed trasaction did SET UNLOGGED. 
This relation\n+\t\t\t\t\t * is restored to a LOGGED relation.\ns/trasaction/transaction/\n\n+\t\t\t\terrmsg(\"could not crete mark file \\\"%s\\\": %m\", path));\ns/crete/create/\n\nThen rebased on 9c08aea6a3 then pgindent'ed.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 31 Mar 2022 13:58:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 01:58:45PM +0900, Kyotaro Horiguchi wrote:\n> Thanks! Version 20 is attached.\n\nThe patch failed on all CI tasks, and seems to have caused the macos task to\nhang.\n\nhttp://cfbot.cputube.org/kyotaro-horiguchi.html\n\nWould you send a fixed patch, or remove this thread from the CFBOT ? Otherwise\ncirrus will try every day to rerun it but take 1hr to time out, which is twice\nas slow as the slowest OS.\n\nI think this patch should be moved to the next CF and set to v16.\n\nThanks,\n-- \nJustin\n\n\n",
"msg_date": "Thu, 31 Mar 2022 00:37:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Thu, 31 Mar 2022 00:37:07 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Thu, Mar 31, 2022 at 01:58:45PM +0900, Kyotaro Horiguchi wrote:\n> > Thanks! Version 20 is attached.\n> \n> The patch failed on all CI tasks, and seems to have caused the macos task to\n> hang.\n> \n> http://cfbot.cputube.org/kyotaro-horiguchi.html\n> \n> Would you send a fixed patch, or remove this thread from the CFBOT ? Otherwise\n> cirrus will try every day to rerun it but take 1hr to time out, which is twice\n> as slow as the slowest OS.\n\nThat turned out to be a thinko that left mark files behind in new\ndatabases created by the logged version of CREATE DATABASE. It is\neasily fixed.\n\nThat being said, this failure revealed that pg_checksums or\npg_basebackup dislikes the mark files. It happens, if only with quite\nlow probability. This would need further consideration and extra\nrounds of review.\n\n> I think this patch should be moved to the next CF and set to v16.\n\nI don't think this can be committed to 15. So I'll post the fixed\nversion and then move this to the next CF.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 31 Mar 2022 18:33:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Thu, 31 Mar 2022 18:33:18 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I don't think this can be commited to 15. So I post the fixed version\n> then move this to the next CF.\n\nThen done. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 31 Mar 2022 18:36:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 2:36 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 31 Mar 2022 18:33:18 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > I don't think this can be commited to 15. So I post the fixed version\n> > then move this to the next CF.\n>\n> Then done. Thanks!\n\nHello! This patchset will need to be rebased over latest -- looks like\nb74e94dc27f (Rethink PROCSIGNAL_BARRIER_SMGRRELEASE) and 5c279a6d350\n(Custom WAL Resource Managers) are interfering.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Wed, 6 Jul 2022 08:44:18 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Wed, 6 Jul 2022 08:44:18 -0700, Jacob Champion <jchampion@timescale.com> wrote in \n> On Thu, Mar 31, 2022 at 2:36 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 31 Mar 2022 18:33:18 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > > I don't think this can be commited to 15. So I post the fixed version\n> > > then move this to the next CF.\n> >\n> > Then done. Thanks!\n> \n> Hello! This patchset will need to be rebased over latest -- looks like\n> b74e94dc27f (Rethink PROCSIGNAL_BARRIER_SMGRRELEASE) and 5c279a6d350\n> (Custom WAL Resource Managers) are interfering.\n\nThank you for checking that! It got a wider attack by b0a55e4329\n(RelFileNumber). The commit message suggests that \"relfilenode\", when\nreferring to files, should be replaced with \"relation storage/file\",\nso I did that in ResetUnloggedRelationsInDbspaceDir.\n\nThis patch said that:\n\n>\t\t * INIT forks are never loaded to shared buffer so no point in\n>\t\t * dropping buffers for such files.\n\nBut actually some *buildempty() functions use ReadBufferExtended() for\nINIT_FORK. So that's wrong. I fixed that, but... I don't like it. Or\nrather, I don't like that some AMs leave buffers for the INIT fork\nbehind. But I feel I'm misunderstanding here, since I don't understand\nhow the INIT fork can work as expected after a crash that happens\nbefore the next checkpoint flushes the buffers.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 07 Jul 2022 17:24:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "(Mmm. I hadn't noticed an annoying misspelling in the subject X( )\n\n> Thank you for checking that! It got a wider attack by b0a55e4329\n> (RelFileNumber). The commit message suggests \"relfilenode\" as files\n\nThen, now I stepped on my own foot. Also rebased on the nodefuncs\nautogeneration.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 19 Jul 2022 13:33:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Just rebased.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 28 Sep 2022 17:21:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "2022年9月28日(水) 17:21 Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n>\n> Just rebased.\n\nHi\n\ncfbot reports the patch no longer applies. As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time to update the patch.\n\nThanks\n\nIan Barwick\n\n\n",
"msg_date": "Fri, 4 Nov 2022 09:32:52 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Fri, 4 Nov 2022 09:32:52 +0900, Ian Lawrence Barwick <barwick@gmail.com> wrote in \n> cfbot reports the patch no longer applies. As CommitFest 2022-11 is\n> currently underway, this would be an excellent time to update the patch.\n\nIndeed, thanks! I'll do that in a few days.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 08 Nov 2022 11:33:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Tue, 08 Nov 2022 11:33:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Indeed, thanks! I'll do that in a few days.\n\nIt took longer than planned, but rebased. The contents of the two\npatches in the last version were a bit shuffled, but that is now\nfixed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 18 Nov 2022 17:25:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "I want to call out this part of this patch:\n\n> Also this allows for the cleanup of files left behind in the crash of\n> the transaction that created it.\n\nThis is interesting to a lot wider audience than ALTER TABLE SET \nLOGGED/UNLOGGED. It also adds most of the complexity, with the new \nmarker files. Can you please split the first patch into two:\n\n1. Cleanup of newly created relations on crash\n\n2. ALTER TABLE SET LOGGED/UNLOGGED changes\n\nThen we can review the first part independently.\n\nRegarding the first part, I'm not sure the marker files are the best \napproach to implement it. You need to create an extra file for every \nrelation, just to delete it at commit. It feels a bit silly, but maybe \nit's OK in practice. The undo log patch set solved this problem with the \nundo log, but it looks like that patch set isn't going anywhere. Maybe \ninvent a very lightweight version of the undo log for this?\n\n- Heikki\n\n\n\n",
"msg_date": "Fri, 3 Feb 2023 08:42:52 +0100",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Thank you for the comment!\n\nAt Fri, 3 Feb 2023 08:42:52 +0100, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> I want to call out this part of this patch:\n> \n> > Also this allows for the cleanup of files left behind in the crash of\n> > the transaction that created it.\n> \n> This is interesting to a lot wider audience than ALTER TABLE SET\n> LOGGED/UNLOGGED. It also adds most of the complexity, with the new\n> marker files. Can you please split the first patch into two:\n> \n> 1. Cleanup of newly created relations on crash\n> \n> 2. ALTER TABLE SET LOGGED/UNLOGGED changes\n> \n> Then we can review the first part independently.\n\nAh, indeed. I'll do that.\n\n> Regarding the first part, I'm not sure the marker files are the best\n> approach to implement it. You need to create an extra file for every\n> relation, just to delete it at commit. It feels a bit silly, but maybe\n\nAgreed. (But I didn't come up with a better idea.)\n\n> it's OK in practice. The undo log patch set solved this problem with\n> the undo log, but it looks like that patch set isn't going\n> anywhere. Maybe invent a very lightweight version of the undo log for\n> this?\n\nI hadn't thought along that line. Yes, indeed the marker files are a\nkind of undo log.\n\nAnyway, I'll split the current patch into two parts as suggested.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 07 Feb 2023 13:47:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On Mon, 6 Feb 2023 at 23:48, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> Thank you for the comment!\n>\n> At Fri, 3 Feb 2023 08:42:52 +0100, Heikki Linnakangas <hlinnaka@iki.fi> wrote in\n> > I want to call out this part of this patch:\n\nLooks like this patch has received some solid feedback from Heikki and\nyou have a path forward. It's not currently building in the build farm\neither.\n\nI'll set the patch to Waiting on Author for now.\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Wed, 1 Mar 2023 14:56:25 -0500",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Wed, 1 Mar 2023 14:56:25 -0500, \"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> wrote in \n> On Mon, 6 Feb 2023 at 23:48, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > Thank you for the comment!\n> >\n> > At Fri, 3 Feb 2023 08:42:52 +0100, Heikki Linnakangas <hlinnaka@iki.fi> wrote in\n> > > I want to call out this part of this patch:\n> \n> Looks like this patch has received some solid feedback from Heikki and\n> you have a path forward. It's not currently building in the build farm\n> either.\n> \n> I'll set the patch to Waiting on Author for now.\n\nTo be precise, there are three parts. The attached patch is the first\npart - the storage mark files, which are used to identify storage\nfiles that have not been committed and should be removed during the\nnext startup. This feature resolves the issue of orphaned storage\nfiles that may result from a crash occurring during the execution of a\ntransaction that creates a new table.\n\nI'll post all three parts shortly.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 03 Mar 2023 18:03:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Fri, 03 Mar 2023 18:03:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Correctly they are three parts. The attached patch is the first part -\n> the storage mark files, which are used to identify storage files that\n> have not been committed and should be removed during the next\n> startup. This feature resolves the issue of orphaned storage files\n> that may result from a crash occurring during the execution of a\n> transaction involving the creation of a new table.\n> \n> I'll post all of the three parts shortly.\n\nMmm. It took longer than I said, but this is the patch set that\nincludes all three parts.\n\n1. \"Mark files\" to prevent orphan storage files for in-transaction\n   created relations after a crash.\n\n2. In-place persistence change: For ALTER TABLE SET LOGGED/UNLOGGED\n   with wal_level minimal, and ALTER TABLE SET UNLOGGED with other\n   wal_levels, the commands don't require a file copy for the relation\n   storage. ALTER TABLE SET LOGGED with non-minimal wal_level emits\n   bulk FPIs instead of a bunch of individual INSERTs.\n\n3. An extension to ALTER TABLE SET (UN)LOGGED that can handle all\n   tables in a tablespace at once.\n\n\nAs a side note, let me quickly go over the behavior of the mark files\nintroduced by the first patch, particularly what happens when deletion\nfails.\n\n(1) The mark file for MAIN fork (\"<oid>.u\") corresponds to all forks,\n    while the mark file for INIT fork (\"<oid>_init.u\") corresponds to\n    INIT fork alone.\n\n(2) The mark file is created just before the corresponding storage\n    file is made. This is always logged in the WAL.\n\n(3) The mark file is deleted after removing the corresponding storage\n    file during commit and rollback. This action is logged in the\n    WAL, too. 
If the deletion fails, an ERROR is output and the\n    transaction aborts.\n\n(4) If a crash leaves a mark file behind, the server will try to delete\n    it after successfully removing the corresponding storage file during\n    the subsequent startup that runs recovery. If the deletion fails, the\n    server leaves the mark file alone and emits a WARNING. (The same\n    behavior applies to non-mark files.)\n\n(5) If the deletion of the mark file fails, the leftover mark file\n    prevents the creation of the corresponding storage file (causing\n    an ERROR). Due to that behavior, leftover mark files don't result\n    in the removal of the wrong files.\n\n(6) The mark file for an INIT fork is created only when ALTER TABLE\n    SET UNLOGGED is executed (not for CREATE UNLOGGED TABLE) to signal\n    the crash-cleanup code to remove the INIT fork. (Otherwise the\n    cleanup code removes the main fork instead. This is the main\n    objective of introducing the mark files.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 17 Mar 2023 15:16:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Fri, 17 Mar 2023 15:16:34 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Mmm. It took longer than I said, but this is the patch set that\n> includes all three parts.\n> \n> 1. \"Mark files\" to prevent orphan storage files for in-transaction\n> created relations after a crash.\n> \n> 2. In-place persistence change: For ALTER TABLE SET LOGGED/UNLOGGED\n> with wal_level minimal, and ALTER TABLE SET UNLOGGED with other\n> wal_levels, the commands don't require a file copy for the relation\n> storage. ALTER TABLE SET LOGGED with non-minimal wal_level emits\n> bulk FPIs instead of a bunch of individual INSERTs.\n> \n> 3. An extension to ALTER TABLE SET (UN)LOGGED that can handle all\n> tables in a tablespace at once.\n> \n> \n> As a side note, I quickly go over the behavior of the mark files\n> introduced by the first patch, particularly what happens when deletion\n> fails.\n> \n> (1) The mark file for MAIN fork (\"<oid>.u\") corresponds to all forks,\n> while the mark file for INIT fork (\"<oid>_init.u\") corresponds to\n> INIT fork alone.\n> \n> (2) The mark file is created just before the the corresponding storage\n> file is made. This is always logged in the WAL.\n> \n> (3) The mark file is deleted after removing the corresponding storage\n> file during the commit and rollback. This action is logged in the\n> WAL, too. If the deletion fails, an ERROR is output and the\n> transaction aborts.\n> \n> (4) If a crash leaves a mark file behind, server will try to delete it\n> after successfully removing the corresponding storage file during\n> the subsequent startup that runs a recovery. If deletion fails,\n> server leaves the mark file alone with emitting a WARNING. (The\n> same behavior for non-mark files.)\n> \n> (5) If the deletion of the mark file fails, the leftover mark file\n> prevents the creation of the corresponding storage file (causing\n> an ERROR). 
The leftover mark files don't result in the removal of\n> the wrong files due to that behavior.\n> \n> (6) The mark file for an INIT fork is created only when ALTER TABLE\n> SET UNLOGGED is executed (not for CREATE UNLOGGED TABLE) to signal\n> the crash-cleanup code to remove the INIT fork. (Otherwise the\n> cleanup code removes the main fork instead. This is the main\n> objective of introducing the mark files.)\n\nRebased.\n\nI fixed some code comments and commit messages. I fixed the wrong\narrangement of some changes among the patches. Most importantly, I\nfixed a bug based on the wrong assumption that the init fork does not\nreside in shared buffers. Now smgrDoPendingCleanups drops the buffers\nfor an init fork to be removed.\n\nThe new fourth patch is a temporary fix for recently added code, which\nwill soon no longer be needed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 25 Apr 2023 16:54:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On Tue, Apr 25, 2023 at 9:55 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Rebased.\n>\n> I fixed some code comments and commit messages. I fixed the wrong\n> arrangement of some changes among patches. Most importantly, I fixed\n> the a bug based on a wrong assumption that init-fork is not resides on\n> shared buffers. Now smgrDoPendingCleanups drops buffer for a init-fork\n> to be removed.\n>\n> The new fourth patch is a temporary fix for recently added code, which\n> will soon be no longer needed.\n>\n\nHi Kyotaro,\n\nI've retested v28 of the patch with everything that came to my mind\n(basic tests, --enable-tap-tests, restarts/crashes while adding data,\nchecking whether any files were left over, and checking for stuff that\nearlier was causing problems: GiST on geometry [PostGIS]).\nThe only thing I've not tested this time were the performance runs\ndone earlier. The patch passed all my very limited tests along with\nmake check-world. Patch looks good to me on the surface from a\nusability point of view. I haven't looked at the code, so the patch\nmight still need an in-depth review.\n\nRegards,\n-Jakub Wartak.\n\n\n",
"msg_date": "Thu, 27 Apr 2023 14:47:41 +0200",
"msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "(I find the misspelled subject makes it difficult to find the thread..)\r\n\r\nAt Thu, 27 Apr 2023 14:47:41 +0200, Jakub Wartak <jakub.wartak@enterprisedb.com> wrote in \r\n> On Tue, Apr 25, 2023 at 9:55 AM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > Rebased.\r\n> >\r\n> > I fixed some code comments and commit messages. I fixed the wrong\r\n> > arrangement of some changes among patches. Most importantly, I fixed\r\n> > the a bug based on a wrong assumption that init-fork is not resides on\r\n> > shared buffers. Now smgrDoPendingCleanups drops buffer for a init-fork\r\n> > to be removed.\r\n> >\r\n> > The new fourth patch is a temporary fix for recently added code, which\r\n> > will soon be no longer needed.\r\n\r\nThis is no longer needed. Thank you, Thomas!\r\n\r\n> Hi Kyotaro,\r\n> \r\n> I've retested v28 of the patch with everything that came to my mind\r\n> (basic tests, --enable-tap-tests, restarts/crashes along adding the\r\n> data, checking if there were any files left over and I've checked for\r\n> stuff that earlier was causing problems: GiST on geometry[PostGIS]).\r\n\r\nMaybe it's fixed by dropping buffers.\r\n\r\n> The only thing I've not tested this time were the performance runs\r\n> done earlier. The patch passed all my very limited tests along with\r\n> make check-world. Patch looks good to me on the surface from a\r\n> usability point of view. I haven't looked at the code, so the patch\r\n> might still need an in-depth review.\r\n\r\nThank you for conducting a thorough test. In this patchset, the first\r\none might be useful on its own and it is the most complex part. I'll\r\nrecheck it.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Fri, 28 Apr 2023 10:58:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "I think there are some good ideas here. I started to take a look at the\npatches, and I've attached a rebased version of the patch set. Apologies\nif I am repeating any discussions from upthread.\n\nFirst, I tested the time difference in ALTER TABLE SET UNLOGGED/LOGGED with\nthe patch applied, and the results looked pretty impressive.\n\n\tbefore:\n\tpostgres=# alter table test set unlogged;\n\tALTER TABLE\n\tTime: 5108.071 ms (00:05.108)\n\tpostgres=# alter table test set logged;\n\tALTER TABLE\n\tTime: 6747.648 ms (00:06.748)\n\n\tafter:\n\tpostgres=# alter table test set unlogged;\n\tALTER TABLE\n\tTime: 25.609 ms\n\tpostgres=# alter table test set logged;\n\tALTER TABLE\n\tTime: 1241.800 ms (00:01.242)\n\nMy first question is whether 0001 is a prerequisite to 0002. I'm assuming\nit is, but the reason wasn't immediately obvious to me. If it's just\nnice-to-have, perhaps we could simplify the patch set a bit. I see that\nHeikki had some general concerns with the marker file approach [0], so\nperhaps it is at least worth brainstorming some alternatives if we _do_\nneed it.\n\n[0] https://postgr.es/m/9827ebd3-de2e-fd52-4091-a568387b1fc2%40iki.fi\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 14 Aug 2023 12:38:48 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Thank you for looking this!\n\nAt Mon, 14 Aug 2023 12:38:48 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> I think there are some good ideas here. I started to take a look at the\n> patches, and I've attached a rebased version of the patch set. Apologies\n> if I am repeating any discussions from upthread.\n> \n> First, I tested the time difference in ALTER TABLE SET UNLOGGED/LOGGED with\n> the patch applied, and the results looked pretty impressive.\n> \n> \tbefore:\n> \tpostgres=# alter table test set unlogged;\n> \tALTER TABLE\n> \tTime: 5108.071 ms (00:05.108)\n> \tpostgres=# alter table test set logged;\n> \tALTER TABLE\n> \tTime: 6747.648 ms (00:06.748)\n> \n> \tafter:\n> \tpostgres=# alter table test set unlogged;\n> \tALTER TABLE\n> \tTime: 25.609 ms\n> \tpostgres=# alter table test set logged;\n> \tALTER TABLE\n> \tTime: 1241.800 ms (00:01.242)\n\nThanks for confirmation. The difference between the both directions is\nthat making a table logged requires to emit WAL records for the entire\ncontent.\n\n> My first question is whether 0001 is a prerequisite to 0002. I'm assuming\n> it is, but the reason wasn't immediately obvious to me. If it's just\n\nIn 0002, if a backend crashes after creating an init fork file but\nbefore the associated commit, a lingering fork file could result in\ndata loss on the next startup. Thus, an utterly reliable file cleanup\nmechanism is essential. 0001 also addresses the orphan storage files\nissue arising from ALTER TABLE and similar commands.\n\n> nice-to-have, perhaps we could simplify the patch set a bit. 
I see that\n> Heikki had some general concerns with the marker file approach [0], so\n> perhaps it is at least worth brainstorming some alternatives if we _do_\n> need it.\n\nThe rationale behind the file-based implementation is that any\nleftover init fork file from a crash needs to be deleted before the\nreinit(INIT) process kicks in, which happens independently of WAL,\nbefore the start of crash recovery. I could implement it separately\nfrom the reinit module, but I didn't since that would result in near\nduplication.\n\nAs commented in xlog.c, the purpose of the pre-recovery reinit CLEANUP\nphase is to ensure hot standbys don't encounter erroneous unlogged\nrelations. Based on that requirement, we need a mechanism to\nguarantee that additional crucial operations are executed reliably at\nthe next startup post-crash, right before recovery kicks in (or reinit\nCLEANUP). 0001 persists this data on a per-operation basis, tightly\nbound to the target objects.\n\nI could turn this into something like undo logs in a simple form, but\nI'd rather not craft a general-purpose undo log system for this unless\nit's absolutely necessary.\n\n\n> [0] https://postgr.es/m/9827ebd3-de2e-fd52-4091-a568387b1fc2%40iki.fi\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 24 Aug 2023 11:22:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Thu, 24 Aug 2023 11:22:32 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I could turn this into something like undo longs in a simple form, but\n> I'd rather not craft a general-purpose undo log system for this unelss\n> it's absolutely necessary.\n\nThis is a patch for a basic undo log implementation. It looks like it\nworks well for some orphan-files-after-a-crash and data-loss-on-reinit\ncases. However, it is far from complete and likely has issues with\ncrash-safety and the durability of undo log files (and memory leaks\nand performance and..).\n\nI'm posting this to move the discussion forward.\n\n(This doesn't contain the third file \"ALTER TABLE ..ALL IN TABLESPACE\" part.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 04 Sep 2023 17:37:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On Mon, 4 Sept 2023 at 16:59, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 24 Aug 2023 11:22:32 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > I could turn this into something like undo longs in a simple form, but\n> > I'd rather not craft a general-purpose undo log system for this unelss\n> > it's absolutely necessary.\n>\n> This is a patch for a basic undo log implementation. It looks like it\n> works well for some orphan-files-after-a-crash and data-loss-on-reinit\n> cases. However, it is far from complete and likely has issues with\n> crash-safety and the durability of undo log files (and memory leaks\n> and performance and..).\n>\n> I'm posting this to move the discussion forward.\n>\n> (This doesn't contain the third file \"ALTER TABLE ..ALL IN TABLESPACE\" part.)\n\nCFBot shows compilation issues at [1] with:\n09:34:44.987] /usr/bin/ld:\nsrc/backend/postgres_lib.a.p/access_transam_twophase.c.o: in function\n`FinishPreparedTransaction':\n[09:34:44.987] /tmp/cirrus-ci-build/build/../src/backend/access/transam/twophase.c:1569:\nundefined reference to `AtEOXact_SimpleUndoLog'\n[09:34:44.987] /usr/bin/ld:\nsrc/backend/postgres_lib.a.p/access_transam_xact.c.o: in function\n`CommitTransaction':\n[09:34:44.987] /tmp/cirrus-ci-build/build/../src/backend/access/transam/xact.c:2372:\nundefined reference to `AtEOXact_SimpleUndoLog'\n[09:34:44.987] /usr/bin/ld:\nsrc/backend/postgres_lib.a.p/access_transam_xact.c.o: in function\n`AbortTransaction':\n[09:34:44.987] /tmp/cirrus-ci-build/build/../src/backend/access/transam/xact.c:2878:\nundefined reference to `AtEOXact_SimpleUndoLog'\n[09:34:44.987] /usr/bin/ld:\nsrc/backend/postgres_lib.a.p/access_transam_xact.c.o: in function\n`CommitSubTransaction':\n[09:34:44.987] /tmp/cirrus-ci-build/build/../src/backend/access/transam/xact.c:5016:\nundefined reference to `AtEOXact_SimpleUndoLog'\n[09:34:44.987] 
/usr/bin/ld:\nsrc/backend/postgres_lib.a.p/access_transam_xact.c.o: in function\n`AbortSubTransaction':\n[09:34:44.987] /tmp/cirrus-ci-build/build/../src/backend/access/transam/xact.c:5197:\nundefined reference to `AtEOXact_SimpleUndoLog'\n[09:34:44.987] /usr/bin/ld:\nsrc/backend/postgres_lib.a.p/access_transam_xact.c.o:/tmp/cirrus-ci-build/build/../src/backend/access/transam/xact.c:6080:\nmore undefined references to `AtEOXact_SimpleUndoLog' follow\n\n[1] - https://cirrus-ci.com/task/5916232528953344\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 9 Jan 2024 15:07:20 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Tue, 9 Jan 2024 15:07:20 +0530, vignesh C <vignesh21@gmail.com> wrote in \n> CFBot shows compilation issues at [1] with:\n\nThanks!\n\nThe reason for those errors was that I didn't consider Meson at the\ntime. Additionally, the signature change of reindex_index() caused the\nbuild failure. I fixed both issues. While addressing these issues, I\nmodified the simpleundolog module to honor\nwal_sync_method. Previously, the sync method for undo logs was\ndetermined independently, separate from xlog.c. However, I'm still not\nsatisfied with the method for handling PG_O_DIRECT.\n\nIn this version, I have added the changes to enable the use of\nwal_sync_method outside of xlog.c as the first part of the patchset.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 15 Jan 2024 16:50:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere was a CFbot test failure last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/3461/\n[2] https://cirrus-ci.com/task/6050020441456640\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 15:36:31 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Mon, 22 Jan 2024 15:36:31 +1100, Peter Smith <smithpb2250@gmail.com> wrote in \n> 2024-01 Commitfest.\n> \n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there was a CFbot test failure last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n\nThanks! I have added the necessary includes to the header file this\npatch adds. With this change, \"make headerscheck\" now passes. However,\nwhen I run \"make cpluspluscheck\" in my environment, it generates a\nlarge number of errors in other areas, but I didn't find one related\nto this patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 23 Jan 2024 13:15:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Rebased.\n\nAlong with rebasing, I changed the interface of XLogFsyncFile() to\nreturn a boolean instead of an error message.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 24 May 2024 16:09:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On Fri, 24 May 2024 at 00:09, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> Along with rebasing, I changed the interface of XLogFsyncFile() to\n> return a boolean instead of an error message.\n\nTwo notes after looking at this quickly during the advanced patch\nfeedback session:\n\n1. I would maybe split 0003 into two separate patches. One to make SET\nUNLOGGED fast, which seems quite easy to do because no WAL is needed.\nAnd then a follow up to make SET LOGGED fast, which does all the\nXLOG_FPI stuff.\n2. When wal_level = minital, still some WAL logging is needed. The\npages that were changed since the last still need to be made available\nfor crash recovery.\n\n\n",
"msg_date": "Tue, 28 May 2024 16:49:45 -0700",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On Tue, May 28, 2024 at 04:49:45PM -0700, Jelte Fennema-Nio wrote:\n> Two notes after looking at this quickly during the advanced patch\n> feedback session:\n> \n> 1. I would maybe split 0003 into two separate patches. One to make SET\n> UNLOGGED fast, which seems quite easy to do because no WAL is needed.\n> And then a follow up to make SET LOGGED fast, which does all the\n> XLOG_FPI stuff.\n\nYeah, that would make sense. The LOGGED->UNLOGGED part is\nstraight-forward because we only care about the init fork. The\nUNLOGGED->LOGGED case bugs me, though, a lot.\n\n> 2. When wal_level = minitam, still some WAL logging is needed. The\n> pages that were changed since the last still need to be made available\n> for crash recovery.\n\nMore notes from me, as I was part of this session.\n\n+ * XXXX: Some access methods don't support in-place persistence\n+ * changes. GiST uses page LSNs to figure out whether a block has been\n[...]\n+ if (r->rd_rel->relkind == RELKIND_INDEX &&\n+ /* GiST is excluded */\n+ r->rd_rel->relam != BTREE_AM_OID &&\n+ r->rd_rel->relam != HASH_AM_OID &&\n+ r->rd_rel->relam != GIN_AM_OID &&\n+ r->rd_rel->relam != SPGIST_AM_OID &&\n+ r->rd_rel->relam != BRIN_AM_OID)\n\nThis knowledge should not be encapsulated in the backend code. The\nindex AMs should be able to tell, instead, if they are able to support\nthis code path so as any out-of-core index AM can decide things on its\nown. This ought to be split in its own patch, simple enough as of a\nboolean or a routine telling how this backend path should behave.\n\n+ for (fork = 0; fork < INIT_FORKNUM; fork++)\n+ {\n+ if (smgrexists(RelationGetSmgr(r), fork))\n+ log_newpage_range(r, fork, 0,\n+ smgrnblocks(RelationGetSmgr(r), fork),\n+ false);\n+ }\n\nA simple copy of the blocks means that we keep anything bloated in\nthem, while a rewrite in ALTER TABLE means that we would start afresh\nby deforming the tuples from the origin before giving them to the\ntarget, without any bloat. 
The compression of the FPWs and the\nremoval of the holes in the pages would surely limit the impact, but\nthis has not been discussed on this thread, and this is a nice\nproperty of the existing implementation that would get silently\nremoved by this patch set.\n\nAnother point that Nathan has made is that it may be more appealing\nto study how this is better than an integration with the multi-INSERT\nAPIs into AMs, so that it is possible to group the inserts in batches\nrather than process them one-at-a-time, see [1]. I am ready to accept\nthat what this patch does is more efficient as long as everything is\nblock-based in some cases. Still there is a risk-vs-gain argument\nhere, and I am not sure whether what we have here is a good tradeoff\ncompared to the potential risk of breaking things. The amount of new\ninfrastructure is large for this code path. Grouping the inserts in\nlarge batches may finish by being more efficient than a WAL stream\nfull of FPWs, as well, even if toast values are deformed? So perhaps\nthere is an argument for making that optional at query level, instead.\nAs a whole, I can say that grouping the INSERTs will always be more\nefficient, while what we have here can be less efficient in some\ncases. I'm OK to be outvoted, but the level of complications created\nby this block-based copy and WAL-logging concerns me when it comes to\ntweaking the relpersistence like that.\n\n[1]: https://commitfest.postgresql.org/48/4777/\n--\nMichael",
"msg_date": "Tue, 4 Jun 2024 09:09:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Thank you for the comments.\n\n# The most significant feedback I received was that this approach is\n# not misdirected..\n\nAt Tue, 4 Jun 2024 09:09:12 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, May 28, 2024 at 04:49:45PM -0700, Jelte Fennema-Nio wrote:\n> > Two notes after looking at this quickly during the advanced patch\n> > feedback session:\n> > \n> > 1. I would maybe split 0003 into two separate patches. One to make SET\n> > UNLOGGED fast, which seems quite easy to do because no WAL is needed.\n> > And then a follow up to make SET LOGGED fast, which does all the\n> > XLOG_FPI stuff.\n> \n> Yeah, that would make sense. The LOGGED->UNLOGGED part is\n> straight-forward because we only care about the init fork. The\n> UNLOGGED->LOGGED case bugs me, though, a lot.\n\nI indeed agree with that. Will do that in the next version.\n\n> > 2. When wal_level = minitam, still some WAL logging is needed. The\n> > pages that were changed since the last still need to be made available\n> > for crash recovery.\n\nI don't quite understand this. It seems that you are reffering to the\nLOGGED to UNLOGGED case. UNLOGGED tables are emptied after a crash,\nand the newly created INIT fork does that trick. Maybe I'm\nmisunderstanding something, though.\n\n> More notes from me, as I was part of this session.\n> \n> + * XXXX: Some access methods don't support in-place persistence\n> + * changes. GiST uses page LSNs to figure out whether a block has been\n> [...]\n> + if (r->rd_rel->relkind == RELKIND_INDEX &&\n> + /* GiST is excluded */\n> + r->rd_rel->relam != BTREE_AM_OID &&\n> + r->rd_rel->relam != HASH_AM_OID &&\n> + r->rd_rel->relam != GIN_AM_OID &&\n> + r->rd_rel->relam != SPGIST_AM_OID &&\n> + r->rd_rel->relam != BRIN_AM_OID)\n> \n> This knowledge should not be encapsulated in the backend code. 
The\n> index AMs should be able to tell, instead, if they are able to support\n> this code path so that any out-of-core index AM can decide things on its\n> own. This ought to be split in its own patch, simple enough as of a\n> boolean or a routine telling how this backend path should behave.\n\nRight. I was hesitant to expand the scope before being certain that I\ncan proceed in this direction without significant objections. Now I\ncan include that in the next version.\n\n> + for (fork = 0; fork < INIT_FORKNUM; fork++)\n> + {\n> + if (smgrexists(RelationGetSmgr(r), fork))\n> + log_newpage_range(r, fork, 0,\n> + smgrnblocks(RelationGetSmgr(r), fork),\n> + false);\n> + }\n> \n> A simple copy of the blocks means that we keep anything bloated in\n> them, while a rewrite in ALTER TABLE means that we would start afresh\n> by deforming the tuples from the origin before giving them to the\n> target, without any bloat. The compression of the FPWs and the\n> removal of the holes in the pages would surely limit the impact, but\n> this has not been discussed on this thread, and this is a nice\n> property of the existing implementation that would get silently\n> removed by this patch set.\n\nSure. That bloat can be removed beforehand by explicitly running\nVACUUM on the table if needed, but it would be ideal if the same\ncompression occurred automatically. Alternatively, it might be an\noption to fall back to the existing path when the target table is\nfound to have excessive bloat (though I'm not sure how much should be\nconsidered excessive). We could also allow users to decide by adding a\ncommand option.\n\n> Another point that Nathan has made is that it may be more appealing\n> to study how this is better than an integration with the multi-INSERT\n> APIs into AMs, so that it is possible to group the inserts in batches\n> rather than process them one-at-a-time, see [1]. 
I am ready to accept\n> that what this patch does is more efficient as long as everything is\n> block-based in some cases. Still there is a risk-vs-gain argument\n> here, and I am not sure whether what we have here is a good tradeoff\n> compared to the potential risk of breaking things. The amount of new\n> infrastructure is large for this code path. Grouping the inserts in\n> large batches may finish by being more efficient than a WAL stream\n> full of FPWs, as well, even if toast values are deformed? So perhaps\n> there is an argument for making that optional at query level, instead.\n\nI agree about the uncertainties. With the switching feature mentioned\nabove, it might be sufficient to use the multi-insert stuff in the\nexisting path. However, the uncertainties regarding performance would\nstill remain.\n\n> As a whole, I can say that grouping the INSERTs will always be more\n> efficient, while what we have here can be less efficient in some\n> cases. I'm OK to be outvoted, but the level of complications created\n> by this block-based copy and WAL-logging concerns me when it comes to\n> tweaking the relpersistence like that.\n\nOf course, it is a promising option to move away from the\nblock-logging and fall back to the existing path using the\nmulti-insert stuff in the UNLOGGED to LOGGED case. Let me consider\nthat point.\n\n\nBesides the above, even though this discussion might become\nunnecessary, there was a concern that the blockwise logging might\nresult in unexpected outcomes due to unflushed buffer data\n(although I could be mistaken). I believe that is not the case because\nall buffer blocks are flushed out beforehand.\n\n\n> [1]: https://commitfest.postgresql.org/48/4777/\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 04 Jun 2024 16:00:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "+Bharath\n\nOn Tue, Jun 04, 2024 at 04:00:32PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 4 Jun 2024 09:09:12 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>> Another point that Nathan has made is that it may be more appealling\n>> to study how this is better than an integration with the multi-INSERT\n>> APIs into AMs, so as it is possible to group the inserts in batches\n>> rather than process them one-at-a-time, see [1]. I am ready to accept\n>> that what this patch does is more efficient as long as everything is\n>> block-based in some cases. Still there is a risk-vs-gain argument\n>> here, and I am not sure whether what we have here is a good tradeoff\n>> compared to the potential risk of breaking things. The amount of new\n>> infrastructure is large for this code path. Grouping the inserts in\n>> large batches may finish by being more efficient than a WAL stream\n>> full of FPWs, as well, even if toast values are deformed? So perhaps\n>> there is an argument for making that optional at query level, instead.\n> \n> I agree about the uncertainties. With the switching feature mentioned\n> above, it might be sufficient to use the multi-insert stuff in the\n> existing path. However, the uncertainties regarding performance would\n> still remain.\n\nBharath, does the multi-INSERT stuff apply when changing a table to be\nLOGGED? If so, I think it would be interesting to compare it with the FPI\napproach being discussed here.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 4 Jun 2024 15:50:51 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On Tue, Jun 04, 2024 at 03:50:51PM -0500, Nathan Bossart wrote:\n> Bharath, does the multi-INSERT stuff apply when changing a table to be\n> LOGGED? If so, I think it would be interesting to compare it with the FPI\n> approach being discussed here.\n\nThe answer to this question is yes AFAIK. Look at patch 0002 in the\nlatest series posted here, that touches ATRewriteTable() in\ntablecmds.c where the rewrite happens should a relation's\nrelpersistence, AM, column or default requires a switch (particularly\nif more than one property is changed in a single command, grep for\nAT_REWRITE_*):\nhttps://www.postgresql.org/message-id/CALj2ACUz5+_YNEa4ZY-XG960_oXefM50MjD71VgSCAVDkF3bzQ@mail.gmail.com\n\nI've just read through the patch set, and they are rather pleasant to\nthe eye. I have comments about them, actually, but that's a topic for\nthe other thread.\n--\nMichael",
"msg_date": "Wed, 5 Jun 2024 13:52:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hello.\n\nIt's been a while. Based on our previous face-to-face discussions, I\nhave been restructuring the patch set. During this process, I found\nseveral missing parts and issues, which led to almost everything being\nrewritten. However, I believe the updates are now better organized and\nmore understandable.\n\nThe current patch set broadly consists of the following elements:\n\n- Core feature: Switching buffer persistence (0007) remains mostly the\n same as before, but the creation and deletion of INIT fork files\n have undergone significant modifications. Part of this functionality\n has been moved to commit records.\n\n- UNDO log(0002): This handles file deletion during transaction aborts,\n which was previously managed, in part, by the commit XLOG record at\n the end of a transaction.\n\n- Prevent orphan files after a crash (0005): This is another use-case\n of the UNDO log system.\n\n- Extension of smgr (0012), pendingDeletes (0014), and commit XLOG\n records (0013): These have been extended to handle file deletion at\n the fork level instead of the relfilenumber level. While this\n extension applies to both commit and abort operations, only the file\n deletion process for aborts has been moved to the UNDO log. As a\n result, file deletions during commits continue to be managed by\n commit records.\n\nHere are some issues. Depending on how these points are addressed,\nthis patch set might be dropped. (Or, this patch might already be too\nlarge for its intended effect.)\n\n- Consecutive changes to the persistence of the same table within a\n single transaction are prohibited (0007). Allowing this would\n complicate pendingDeletes and a similar mechanism added to\n bufmgr. Also, due to the append-only nature of the UNDO log, the\n entire process, including subtransaction handling, could not be made\n consistent easily.\n\n- PREPARE is prohibited for transactions that have altered table\n persistence(0009). 
This is because I haven't found a simple way to\n ensure consistent switching of buffer persistence if the server\n crashes after PREPARE and then commits the transaction after\n recovery.\n\n- Data updates within a single transaction after changing the table's\n persistence are also prohibited(0008). This restriction is necessary\n because if an index update triggers page splits after changing the\n persistence to UNLOGGED, WAL might become inapplicable.\n\nThe last point, in particular, has a significant impact on usability,\nbut it seems to be fundamentally unavoidable. Since heap updates\nappear to be fine, one possible approach could be to give up on\nin-place persistence changes for indexes.\n\nRegards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Sun, 01 Sep 2024 01:09:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On 31/08/2024 19:09, Kyotaro Horiguchi wrote:\n> - UNDO log(0002): This handles file deletion during transaction aborts,\n> which was previously managed, in part, by the commit XLOG record at\n> the end of a transaction.\n> \n> - Prevent orphan files after a crash (0005): This is another use-case\n> of the UNDO log system.\n\nNice, I'm very excited if we can fix that long-standing issue! I'll try \nto review this properly later, but at a quick 5 minute glance, one thing \ncaught my eye:\n\nThis requires fsync()ing the per-xid undo log file every time a relation \nis created. I fear that can be a pretty big performance hit for \nworkloads that repeatedly create and drop small tables. Especially if \nthey're otherwise running with synchronous_commit=off. Instead of \nflushing the undo log file after every write, I'd suggest WAL-logging \nthe undo log like regular relations and SLRUs. So before writing the \nentry to the undo log, WAL-log it. And with a little more effort, you \ncould postpone creating the files altogether until a checkpoint happens, \nsimilar to how twophase state files are checkpointed nowadays.\n\nI wonder if the twophase state files and undo log files should be merged \ninto one file. They're similar in many ways: there's one file per \ntransaction, named using the XID. I haven't thought this fully through, \njust a thought..\n\n> +static void\n> +undolog_set_filename(char *buf, TransactionId xid)\n> +{\n> +\tsnprintf(buf, MAXPGPATH, \"%s/%08x\", SIMPLE_UNDOLOG_DIR, xid);\n> +}\n\nI'd suggest using FullTransactionId. Doesn't matter much, but seems like \na good future-proofing.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sun, 1 Sep 2024 22:15:00 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "On Sun, Sep 01, 2024 at 10:15:00PM +0300, Heikki Linnakangas wrote:\n> I wonder if the twophase state files and undo log files should be merged\n> into one file. They're similar in many ways: there's one file per\n> transaction, named using the XID. I haven't thought this fully through, just\n> a thought..\n\nHmm. It could be possible to extract some of this knowledge out of\ntwophase.c and design some APIs that could be used for both, but would\nthat be really necessary? The 2PC data and the LSNs used by the files\nto check if things are replayed or on disk rely on\nGlobalTransactionData that has its own idea of things and timings at\nrecovery.\n\nOr perhaps your point is actually to do that and add one layer for the\nfile handlings and their flush timings? I am not sure, TBH, what this\nthread is trying to fix is complicated enough that it may be better to\nlive with two different code paths. But perhaps my gut feeling is\njust wrong reading your paragraph.\n--\nMichael",
"msg_date": "Mon, 2 Sep 2024 09:30:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "Hello.\n\nThank you for the response.\n\nAt Sun, 1 Sep 2024 22:15:00 +0300, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> On 31/08/2024 19:09, Kyotaro Horiguchi wrote:\n> > - UNDO log(0002): This handles file deletion during transaction aborts,\n> > which was previously managed, in part, by the commit XLOG record at\n> > the end of a transaction.\n> > - Prevent orphan files after a crash (0005): This is another use-case\n> > of the UNDO log system.\n> \n> Nice, I'm very excited if we can fix that long-standing issue! I'll\n> try to review this properly later, but at a quick 5 minute glance, one\n> thing caught my eye:\n> \n> This requires fsync()ing the per-xid undo log file every time a\n> relation is created. I fear that can be a pretty big performance hit\n> for workloads that repeatedly create and drop small tables. Especially\n\nI initially thought that one additional file manipulation during file\ncreation wouldn't be an issue. However, the created storage file isn't\nbeing synced, so your concern seems valid.\n\n> if they're otherwise running with synchronous_commit=off. Instead of\n> flushing the undo log file after every write, I'd suggest WAL-logging\n> the undo log like regular relations and SLRUs. So before writing the\n> entry to the undo log, WAL-log it. And with a little more effort, you\n> could postpone creating the files altogether until a checkpoint\n> happens, similar to how twophase state files are checkpointed\n> nowadays.\n\nI thought that an UNDO log file not flushed before the last checkpoint\nmight not survive a system crash. However, including UNDO files in the\ncheckpointing process resolves that concern. Thansk you for the\nsuggestion.\n\n> I wonder if the twophase state files and undo log files should be\n> merged into one file. They're similar in many ways: there's one file\n> per transaction, named using the XID. 
I haven't thought this fully\n> through, just a thought..\n\nPrecisely, UNDO log files are created per subtransaction, unlike\ntwophase files. It might be possible if we allow the twophase files\n(as they are currently named) to be overwritten or modified at every\nsubcommit. If ULOG contents are WAL-logged, these two things will\nbecome even more similar. However, I'm not planning to include that in\nthe next version for now.\n\n> > +static void\n> > +undolog_set_filename(char *buf, TransactionId xid)\n> > +{\n> > +\tsnprintf(buf, MAXPGPATH, \"%s/%08x\", SIMPLE_UNDOLOG_DIR, xid);\n> > +}\n> \n> I'd suggest using FullTransactionId. Doesn't matter much, but seems\n> like a good future-proofing.\n\nAgreed. Will fix it in the next version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 05 Sep 2024 15:34:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
},
{
"msg_contents": "At Mon, 2 Sep 2024 09:30:20 +0900, Michael Paquier <michael@paquier.xyz> wrote in \r\n> On Sun, Sep 01, 2024 at 10:15:00PM +0300, Heikki Linnakangas wrote:\r\n> > I wonder if the twophase state files and undo log files should be merged\r\n> > into one file. They're similar in many ways: there's one file per\r\n> > transaction, named using the XID. I haven't thought this fully through, just\r\n> > a thought..\r\n> \r\n> Hmm. It could be possible to extract some of this knowledge out of\r\n> twophase.c and design some APIs that could be used for both, but would\r\n> that be really necessary? The 2PC data and the LSNs used by the files\r\n> to check if things are replayed or on disk rely on\r\n> GlobalTransactionData that has its own idea of things and timings at\r\n> recovery.\r\n\r\nI'm not sure, but I feel that Heikki mentioned only about using the\r\nfile format and in/out functions if the file formats of the two are\r\nsufficiently overlapping.\r\n\r\n> Or perhaps your point is actually to do that and add one layer for the\r\n> file handlings and their flush timings? I am not sure, TBH, what this\r\n> thread is trying to fix is complicated enough that it may be better to\r\n> live with two different code paths. But perhaps my gut feeling is\r\n> just wrong reading your paragraph.\r\n\r\nI believe this statement is valid, so I’m not in a hurry to do this.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Thu, 05 Sep 2024 15:46:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: In-placre persistance change of a relation"
}
] |
[
{
"msg_contents": "There are a number of rules like this in the grammar:\n\nopt_foo: FOO\n | /*EMPTY*/\n;\n\nAnd there are some like this:\n\nopt_foo: FOO {}\n | /*EMPTY*/ {}\n;\n\nand some even like this:\n\n%type <node> opt_foo\n\nopt_foo: FOO { $$ = NULL; }\n | /*EMPTY*/ { $$ = NULL; }\n;\n\n(I mean here specifically those rules where FOO is a noise word and the \nactions are the same in each branch.)\n\nIt's obviously confusing to have multiple different styles to do the \nsame thing. And these extra rules (including the empty ones) also end \nup in the output, so they create more work down the line.\n\nThe attached patch cleans this up to make them all look like the first \nstyle.",
"msg_date": "Wed, 11 Nov 2020 10:12:50 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Clean up optional rules in grammar"
},
{
"msg_contents": "On 11/11/2020 11:12, Peter Eisentraut wrote:\n> The attached patch cleans this up to make them all look like the first\n> style.\n\n+1\n\n- Heikki\n\n\n",
"msg_date": "Wed, 11 Nov 2020 12:54:18 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Clean up optional rules in grammar"
},
{
"msg_contents": "On 11/11/20 10:12 AM, Peter Eisentraut wrote:\n> There are a number of rules like this in the grammar:\n> \n> opt_foo: FOO\n> | /*EMPTY*/\n> ;\n> \n> And there are some like this:\n> \n> opt_foo: FOO {}\n> | /*EMPTY*/ {}\n> ;\n> \n> and some even like this:\n> \n> %type <node> opt_foo\n> \n> opt_foo: FOO { $$ = NULL; }\n> | /*EMPTY*/ { $$ = NULL; }\n> ;\n> \n> (I mean here specifically those rules where FOO is a noise word and the\n> actions are the same in each branch.)\n> \n> It's obviously confusing to have multiple different styles to do the\n> same thing. And these extra rules (including the empty ones) also end\n> up in the output, so they create more work down the line.\n> \n> The attached patch cleans this up to make them all look like the first\n> style.\n\nNo objections, but could we also take this opportunity to standardize\nthe comment itself? Even in your patch there is a mix of spacing and\ncasing.\n\nMy preference is /* EMPTY */. That is, uppercase with spaces, but\nwhatever gets chosen is fine with me.\n-- \nVik Fearing\n\n\n",
"msg_date": "Wed, 11 Nov 2020 12:43:40 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Clean up optional rules in grammar"
},
{
"msg_contents": "On 2020-11-11 12:43, Vik Fearing wrote:\n> No objections, but could we also take this opportunity to standardize\n> the comment itself? Even in your patch there is a mix of spacing and\n> casing.\n> \n> My preference is /* EMPTY */. That is, uppercase with spaces, but\n> whatever gets chosen is fine with me.\n\nLooks like /*EMPTY*/ is the most common right now. I agree this should \nbe straightened out.\n\n\n",
"msg_date": "Wed, 11 Nov 2020 13:40:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Clean up optional rules in grammar"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 5:13 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> It's obviously confusing to have multiple different styles to do the\n> same thing. And these extra rules (including the empty ones) also end\n> up in the output, so they create more work down the line.\n>\n> The attached patch cleans this up to make them all look like the first\n> style.\n>\n\n+1 for standardizing in this area. It's worth noting that Bison 3.0\nintroduced %empty for this situation, which is self-documenting. Until we\nget there, this is a good step.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 11 Nov 2020 09:26:39 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Clean up optional rules in grammar"
},
{
"msg_contents": "On 2020-11-11 11:54, Heikki Linnakangas wrote:\n> On 11/11/2020 11:12, Peter Eisentraut wrote:\n>> The attached patch cleans this up to make them all look like the first\n>> style.\n> \n> +1\n\ndone\n\n\n",
"msg_date": "Thu, 12 Nov 2020 08:08:50 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Clean up optional rules in grammar"
}
] |
[
{
"msg_contents": "(From a report by user \"ftzdomino\" on IRC of a segfault while loading\ndata with a pg_trgm gist index)\n\nIf gtrgm_picksplit is invoked on a vector of exactly 2 items (which I\nthink is rare, but it can happen if gistSplit recurses or I think in\ncases of secondary splits), then it tries to access cache[2] without\never having initialized it, causing hilarity to ensue.\n\nWhat I don't entirely understand is why the code is insisting on\ntreating the last item as special: given N items, it tries to find seeds\nfrom the first N-1 items only. This would make a vague sort of sense if\nthe N'th item was always the just-inserted one, but this doesn't appear\nto be always the case (e.g. recursive calls in gistSplit, or secondary\nsplits), and even if it were it's not clear that it would be correct\nlogic.\n\nWhat gtrgm_picksplit currently does, as I read it, is:\n\n take first N-1 items\n (note that entryvec->n is N+1, it sets maxoff = entryvec->n - 2)\n populate a cache of their signatures\n find the two furthest apart as seeds\n if we didn't choose two, then default to items 1 and 2 as seeds\n (note here that if N=2 then item 2 is not cached)\n make datums from the cache entries of the two seeds\n (explodes here when N=2)\n Increase maxoff and construct the cache entry for item N\n Split all N items using the two seeds\n\nNow the obvious simple fix is just to reorder those last two operations,\nand the original reporter verified that doing so fixed their problem\n(patch attached). But I'd really like to understand the logic here and\nwhether there is any reason to have this special treatment at all - why\nwould it not be better to just cache all N items upfront and consider\nthem all as potential seeds?\n\nAnother issue I don't understand yet is that even though this code is\nlargely unchanged since 8.x, the original reporter could not reproduce\nthe crash on any version before 13.0.\n\nAnyone have any ideas? (If not, I'll commit and backpatch something like\nthe attached patch at some suitable time.)\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Wed, 11 Nov 2020 09:53:40 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Strange GiST logic leading to uninitialized memory access in pg_trgm\n gist code"
},
{
"msg_contents": "Hi!\n\nOn Wed, Nov 11, 2020 at 12:53 PM Andrew Gierth\n<andrew@tao11.riddles.org.uk> wrote:\n> Now the obvious simple fix is just to reorder those last two operations,\n> and the original reporter verified that doing so fixed their problem\n> (patch attached). But I'd really like to understand the logic here and\n> whether there is any reason to have this special treatment at all - why\n> would it not be better to just cache all N items upfront and consider\n> them all as potential seeds?\n\nI think this comes from the idea that when N items are passed to the\npicksplit method, then the first N-1 are existing items on the page,\nwhile the last Nth is the new item to be inserted. So, we are trying\nto split first N-1 items and then insert the last Nth item there. But\nthis is wrong for two reasons.\n\n1) As you've pointed out, GiST code doesn't necessarily pass items to\nthe picksplit method in that way.\n2) Even if items are passed as assumed, there is no point in having\nspecial handling of the item to be inserted. It's better to consider\nthe whole set of items to produce a better split.\n\n> Another issue I don't understand yet is that even though this code is\n> largely unchanged since 8.x, the original reporter could not reproduce\n> the crash on any version before 13.0.\n\nI think this is related to my commit 911e702077. It has changed the\nmemory allocation for the signatures to support the signatures of\nvariable length. So, it seems that despite the error existing since\n8.x, it started causing segfaults only since 911e702077.\n\n> Anyone have any ideas? (If not, I'll commit and backpatch something like\n> the attached patch at some suitable time.)\n\nI would rather propose to rip off special handling of the last item\ncompletely (see the attached patch).\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 12 Nov 2020 06:48:17 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Strange GiST logic leading to uninitialized memory access in\n pg_trgm gist code"
},
{
"msg_contents": ">>>>> \"Alexander\" == Alexander Korotkov <aekorotkov@gmail.com> writes:\n\n >> Another issue I don't understand yet is that even though this code\n >> is largely unchanged since 8.x, the original reporter could not\n >> reproduce the crash on any version before 13.0.\n\n Alexander> I think this is related to my commit 911e702077. It has\n Alexander> changed the memory allocation for the signatures to support\n Alexander> the signatures of variable length. So, it seems that despite\n Alexander> the error existing since 8.x, it started causing segfaults\n Alexander> only since 911e702077.\n\nAha. Prior to that change, cache[i].sign was an array rather than a\npointer, so it would not crash even when accessed without\ninitialization. What would happen instead is that an incorrect signature\nwould be used, which might lead to problems later in index lookups\n(though I haven't tested that).\n\n Alexander> I would rather propose to rip off special handling of the\n Alexander> last item completely (see the attached patch).\n\nYeah. I'll go with that, once I finish testing it.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 12 Nov 2020 13:35:31 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: Strange GiST logic leading to uninitialized memory access in\n pg_trgm gist code"
}
] |
[
{
"msg_contents": "Hello Pgsql-hackers,\n\nSeems I fall into corner case:\n\ntest=> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL '00:00:00' );\nERROR: step size cannot equal zero\n\nBut:\ntest=> SELECT * FROM generate_series( '2020-11-09', '2020-11-10', INTERVAL '1 day' );\n generate_series \n------------------------\n 2020-11-09 00:00:00+02\n 2020-11-10 00:00:00+02\n(2 rows)\n\nHere we start at 2020-11-09, add interval of one day and finish at 2020-11-10\nDone! series is generated.\n\nIn first case I expect that I start at 2020-11-09, add interval of zero and finish at 2020-11-09\nEverything is consistent.\n\ntest=> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL '00:00:00' );\n generate_series\n------------------------\n 2020-11-09 00:00:00+02\n(1 row)\n\n\nSo I feature request to allow zero size step for cases when start point is equest to finish\n\nWhat do you think?\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n",
"msg_date": "Wed, 11 Nov 2020 20:50:59 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Feature request: Improve allowed values for generate series"
},
{
"msg_contents": "Hello Eugen,\n\nWednesday, November 11, 2020, 8:50:59 PM, you wrote:\n\n> Hello Pgsql-hackers,\n\n> Seems I fall into corner case:\n\ntest=>> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL '00:00:00' );\n> ERROR: step size cannot equal zero\n\n> But:\ntest=>> SELECT * FROM generate_series( '2020-11-09', '2020-11-10', INTERVAL '1 day' );\n> generate_series \n> ------------------------\n> 2020-11-09 00:00:00+02\n> 2020-11-10 00:00:00+02\n> (2 rows)\n\n> Here we start at 2020-11-09, add interval of one day and finish at 2020-11-10\n> Done! series is generated.\n\n> In first case I expect that I start at 2020-11-09, add interval of zero and finish at 2020-11-09\n> Everything is consistent.\n\ntest=>> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL '00:00:00' );\n> generate_series\n> ------------------------\n> 2020-11-09 00:00:00+02\n> (1 row)\n\n\n> So I feature request to allow zero size step for cases when start point is equest to finish\n\n> What do you think?\n\n\n\nhm.... probably with step 0 we always should generate series of one\nvalue and exit, despite on finish value.\nBecause with step 0 we always stay at current position, so there is\nalways should be just one value.\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n",
"msg_date": "Wed, 11 Nov 2020 21:07:09 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: Feature request: Improve allowed values for generate series"
},
{
"msg_contents": "st 11. 11. 2020 v 19:59 odesílatel Eugen Konkov <kes-kes@yandex.ru> napsal:\n\n> Hello Pgsql-hackers,\n>\n> Seems I fall into corner case:\n>\n> test=> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL\n> '00:00:00' );\n> ERROR: step size cannot equal zero\n>\n> But:\n> test=> SELECT * FROM generate_series( '2020-11-09', '2020-11-10', INTERVAL\n> '1 day' );\n> generate_series\n> ------------------------\n> 2020-11-09 00:00:00+02\n> 2020-11-10 00:00:00+02\n> (2 rows)\n>\n> Here we start at 2020-11-09, add interval of one day and finish at\n> 2020-11-10\n> Done! series is generated.\n>\n> In first case I expect that I start at 2020-11-09, add interval of\n> zero and finish at 2020-11-09\n> Everything is consistent.\n>\n> test=> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL\n> '00:00:00' );\n> generate_series\n> ------------------------\n> 2020-11-09 00:00:00+02\n> (1 row)\n>\n>\n> So I feature request to allow zero size step for cases when start\n> point is equest to finish\n>\n> What do you think?\n>\n\nWhat is the real use case? Current implementation is very simple -\nincrement should not be zero, and then we know so there is no infinity\ncycle.\n\nRegards\n\nPavel\n\n\n\n\n>\n> --\n> Best regards,\n> Eugen Konkov\n>\n>\n>\n>\n\nst 11. 11. 2020 v 19:59 odesílatel Eugen Konkov <kes-kes@yandex.ru> napsal:Hello Pgsql-hackers,\n\nSeems I fall into corner case:\n\ntest=> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL '00:00:00' );\nERROR: step size cannot equal zero\n\nBut:\ntest=> SELECT * FROM generate_series( '2020-11-09', '2020-11-10', INTERVAL '1 day' );\n generate_series \n------------------------\n 2020-11-09 00:00:00+02\n 2020-11-10 00:00:00+02\n(2 rows)\n\nHere we start at 2020-11-09, add interval of one day and finish at 2020-11-10\nDone! 
series is generated.\n\nIn first case I expect that I start at 2020-11-09, add interval of zero and finish at 2020-11-09\nEverything is consistent.\n\ntest=> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL '00:00:00' );\n generate_series\n------------------------\n 2020-11-09 00:00:00+02\n(1 row)\n\n\nSo I feature request to allow zero size step for cases when start point is equest to finish\n\nWhat do you think?What is the real use case? Current implementation is very simple - increment should not be zero, and then we know so there is no infinity cycle.RegardsPavel\n\n\n-- \nBest regards,\nEugen Konkov",
"msg_date": "Wed, 11 Nov 2020 20:07:28 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Improve allowed values for generate series"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 11:59 AM Eugen Konkov <kes-kes@yandex.ru> wrote:\n\n> So I feature request to allow zero size step for cases when start\n> point is equest to finish\n>\n> What do you think?\n>\n\nI don't see how this is useful. If they are equal and you use a non-zero\nstep you get back the one record you are looking for anyway, plus the\nnon-zero step allows them to be unequal. If zero step is allowed it is\nonly useful for when they are equal, being undefined when they are unequal.\n\nDavid J.",
"msg_date": "Wed, 11 Nov 2020 12:10:58 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Improve allowed values for generate series"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 12:12 PM Eugen Konkov <kes-kes@yandex.ru> wrote:\n\n>\n> > So I feature request to allow zero size step for cases when\n> start point is equest to finish\n>\n> > What do you think?\n>\n>\n>\n> hm.... probably with step 0 we always should generate series of one\n> value and exit, despite on finish value.\n> Because with step 0 we always stay at current position, so there is\n> always should be just one value.\n>\n>\nHow is this better than writing \"VALUES (start date)\"?\n\nDavid J.",
"msg_date": "Wed, 11 Nov 2020 12:17:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Improve allowed values for generate series"
},
{
"msg_contents": "Re: Feature request: Improve allowed values for generate series\n\n\nHello David,\n\nI have a table with services, each service have a period. After which service is auto renewal\n\nServices also could be one-time. At this case its interval is '00:00:00'\n\nThe renewal is calculated via generate_series, when interval '00:00:00' pass to that function\nquery died =(\n\nGenerate dates for one time service:\ntest=> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL '00:00:00' );\n generate_series\n------------------------\n 2020-11-09 00:00:00+02\n(1 row)\n\nGenerate dates for auto-renew service:\ntest=> SELECT * FROM generate_series( '2020-11-09', '2020-11-10', INTERVAL '1 day' );\n generate_series \n------------------------\n 2020-11-09 00:00:00+02\n 2020-11-10 00:00:00+02\n(2 rows)\n\nSo it is useful in my case. Also behavior is not surprising.\n\n--\nBest regards,\nEugen Konkov\n",
"msg_date": "Thu, 12 Nov 2020 04:54:08 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: Feature request: Improve allowed values for generate series"
},
{
"msg_contents": "On Wed, Nov 11, 2020 at 7:54 PM Eugen Konkov <kes-kes@yandex.ru> wrote:\n\n> Hello David,\n>\n> I have a table with services, each service have a period. After which\n> service is auto renewal\n>\n> Services also could be one-time. At this case its interval is '00:00:00'\n>\n\nIn which case the concept of interval is undefined - there is no meaningful\n\"second date\" here, just the one expiration date - yet you are choosing to\nkeep it in order to introduce an artificial similarity between one-time\nservice and auto-renewal service. This use case isn't convincing for me.\nWriting the one-time service query without generate_series leaves out\nextraneous stuff that isn't important, which I would recommend even if\ngenerate_series were to work as described.\n\nIf you are going to introduce code-specific stuff to make this work just\nwrite: SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL\n'1000 years' ); It is just as much a construction of code as the other.\n\nDavid J.",
"msg_date": "Wed, 11 Nov 2020 23:08:03 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Improve allowed values for generate series"
}
] |
[
{
"msg_contents": "Currently we only match the Common Name (CN) of a client certificate\nwhen authenticating a user. The attached patch allows matching the\nentire Distinguished Name (DN) of the certificate. This is enabled by\nthe HBA line option \"clientname\", which can take the values \"CN\" or\n\"DN\". \"CN\" is the default.\n\nThe idea is that you might have a role with a CN of, say, \"dbauser\" in\ntwo different parts of the organization, say one with \"OU=marketing\" and\nthe other with \"OU=engineering\", and you only want to allow access to\none of them.\n\nThis feature is best used in conjunction with a map. e.g. in testing I\nhave this pg_hba.conf line:\n\n hostssl all all 127.0.0.1/32 cert clientname=DN map=dn\n\nand this pg_ident.conf line:\n\n dn /^C=US,ST=North.Carolina,O=test,OU=eng,CN=andrew$ andrew\n\nIf people like this idea I'll add tests and docco and add it to the next CF.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 11 Nov 2020 15:44:29 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Allow matching whole DN from a client certificate"
},
{
"msg_contents": "Greetings,\n\n* Andrew Dunstan (andrew@dunslane.net) wrote:\n> Currently we only match the Common Name (CN) of a client certificate\n> when authenticating a user. The attached patch allows matching the\n> entire Distinguished Name (DN) of the certificate. This is enabled by\n> the HBA line option \"clientname\", which can take the values \"CN\" or\n> \"DN\". \"CN\" is the default.\n> \n> The idea is that you might have a role with a CN of, say, \"dbauser\" in\n> two different parts of the organization, say one with \"OU=marketing\" and\n> the other with \"OU=engineering\", and you only want to allow access to\n> one of them.\n> \n> This feature is best used in conjunction with a map. e.g. in testing I\n> have this pg_hba.conf line:\n> \n> hostssl all all 127.0.0.1/32 cert clientname=DN map=dn\n> \n> and this pg_ident.conf line:\n> \n> dn /^C=US,ST=North.Carolina,O=test,OU=eng,CN=andrew$ andrew\n> \n> If people like this idea I'll add tests and docco and add it to the next CF.\n\nYeah, this is definitely a worthwhile feature.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 11 Nov 2020 16:33:23 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "> On 11 Nov 2020, at 21:44, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> If people like this idea I'll add tests and docco and add it to the next CF.\n\nSounds like a good idea, please do.\n\nCan this case really happen in non-ancient OpenSSL version?\n+\t\tif (!x509name)\n\nDoesn't this returnpath need a pfree(peer_cn)?\n+\t\tbio = BIO_new(BIO_s_mem());\n+\t\tif (!bio)\n+\t\t{\n+\t\t\treturn -1;\n+\t\t}\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 12 Nov 2020 14:37:39 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "\nOn 11/12/20 8:37 AM, Daniel Gustafsson wrote:\n>> On 11 Nov 2020, at 21:44, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> If people like this idea I'll add tests and docco and add it to the next CF.\n> Sounds like a good idea, please do.\n>\n> Can this case really happen in non-ancient OpenSSL version?\n> +\t\tif (!x509name)\n\n\nProbably not. I'll get rid of that.\n\n\n> Doesn't this returnpath need a pfree(peer_cn)?\n> +\t\tbio = BIO_new(BIO_s_mem());\n> +\t\tif (!bio)\n> +\t\t{\n> +\t\t\treturn -1;\n> +\t\t}\n>\n\nYeah, I'll make another pass over the cleanups.\n\n\nThanks for reviewing.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 16:21:18 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On 11/12/20 4:21 PM, Andrew Dunstan wrote:\n> On 11/12/20 8:37 AM, Daniel Gustafsson wrote:\n>>> On 11 Nov 2020, at 21:44, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> If people like this idea I'll add tests and docco and add it to the next CF.\n>> Sounds like a good idea, please do.\n>>\n>> Can this case really happen in non-ancient OpenSSL version?\n>> +\t\tif (!x509name)\n> Probably not. I'll get rid of that.\n>\n>\n>> Doesn't this returnpath need a pfree(peer_cn)?\n>> +\t\tbio = BIO_new(BIO_s_mem());\n>> +\t\tif (!bio)\n>> +\t\t{\n>> +\t\t\treturn -1;\n>> +\t\t}\n>>\n> Yeah, I'll make another pass over the cleanups.\n>\n\n\nOK, here's a new patch, including docco and tests.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 18 Nov 2020 13:01:09 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "> On 18 Nov 2020, at 19:01, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> OK, here's a new patch, including docco and tests.\n\nLooks good on a quick skim, the only thing that stood out was:\n\n+ This option is probably best used in comjunction with a username map.\ns/comjunction/conjunction/\n\nWill do more testing and review, and give a think on how it affects the libnss\npatch.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 18 Nov 2020 23:28:53 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "I've circled back to this and tested/read it more, and I'm still of the opinion\nthat it's a good feature addition. A few small comments on the patch:\n\n+ is the default, the username is matched against the certificate's\nIn the docs we use \"user name\" instead of \"username\" in descriptive text.\nThere are a couple of \"username\" in this added paragraph.\n\n+ This option is probably best used in comjunction with a username map.\nTypo: s/comjunction/conjunction/\n\n+ /* use commas instead of slashes */\n+ X509_NAME_print_ex(bio, x509name, 0, XN_FLAG_SEP_COMMA_PLUS);\nI don't have strong opinions on whether we shold use slashes or commas, but I\nthink it needs to be documented that commas are required since slashes is the\nmore common way to print a DN. pg_stat_ssl and sslinfo are also displaying the\nDN with slashes.\n\n /* Make sure we have received a username in the certificate */\n- if (port->peer_cn == NULL ||\n- strlen(port->peer_cn) <= 0)\n+ peer_username = port->hba->clientcertname == clientCertCN ? port->peer_cn : port->peer_dn;\nNitpickering nitpickery perhaps, but when we can inspect the DN and not just\nthe CN it seems a bit odd to talk about \"username\", which is again echoed in\nthe errormessage just below here. Not sure what would be better though, since\n\"user information\" doesn't really convey enough detail/meaning.\n\n+ /* and extract the Common Name / Distinguished Name from it. */\nNot introduced in this patch, but single line comments should not be punctuated\nIIRC.\n\nThe patch still applies and tests pass, I'm moving this to ready for committer.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 18 Jan 2021 11:23:40 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Mon, 2021-01-18 at 11:23 +0100, Daniel Gustafsson wrote:\r\n> + /* use commas instead of slashes */\r\n> + X509_NAME_print_ex(bio, x509name, 0, XN_FLAG_SEP_COMMA_PLUS);\r\n> I don't have strong opinions on whether we shold use slashes or commas, but I\r\n> think it needs to be documented that commas are required since slashes is the\r\n> more common way to print a DN.\r\n\r\nThere's also a XN_FLAG_RFC2253 flag, which claims to print in an RFC-\r\ncompatible escape system. (It includes XN_FLAG_SEP_COMMA_PLUS.) I think\r\nNSS uses a similar RFC-based escape scheme, but I haven't checked to\r\nsee whether it's fully compatible.\r\n\r\nI think you'll want to be careful to specify the format as much as\r\npossible, both to make sure that other backend TLS implementations can\r\nactually use the same escaping system and to ensure that user regexes\r\ndon't suddenly start matching different things at some point in the\r\nfuture. As a cautionary tale, nginx is stuck with two versions of their\r\nsimilar feature (ssl_client_s_dn_legacy vs ssl_client_s_dn) after their\r\nswitch to an RFC-compatible system [1].\r\n\r\nEven when using RFC 2253 (now 4514) escaping, there are things left to the implementation, such as the order of AVAs inside multivalued RDNs. The RFC warns not to consider these representations to be canonical forms, so it's possible that switching (or upgrading) the TLS library in use could force users to change their regexes at some point in the future.\r\nI'm going to test this patch with some UTF-8 DNs later today; I'll\r\nshare my findings. I'm also going to read up on [2] a bit more.\r\n\r\n--Jacob\r\n\r\n[1] \r\nhttps://serverfault.com/questions/829754/why-did-the-format-of-nginx-ssl-client-i-dn-suddenly-change\r\n\r\n[2] \r\nhttps://frasertweedale.github.io/blog-redhat/posts/2019-05-28-a-dn-is-not-a-string.html\r\n",
"msg_date": "Wed, 20 Jan 2021 19:07:00 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Wed, 2021-01-20 at 19:07 +0000, Jacob Champion wrote:\r\n> I think you'll want to be careful to specify the format as much as\r\n> possible, both to make sure that other backend TLS implementations can\r\n> actually use the same escaping system and to ensure that user regexes\r\n> don't suddenly start matching different things at some point in the\r\n> future.\r\n\r\nAlong those lines: the current implementation doesn't escape commas in\r\nfields, which means you can inject them to force a bad regex match. For\r\ninstance, when using the usermap that's in the patch:\r\n\r\n dn \"/^.*OU=Testing,.*$\" username\r\n\r\nif I create a certificate with the Organizational Unit name \"Testing,\r\nor something\", then that will also match. Switching to RFC 2253/4514\r\nquoting fixes comma injection (and reverses the order of the RDNs,\r\nwhich requires a change to the regex patterns). But I think that the\r\nregex as supplied still isn't strong enough to prevent problems.\r\n\r\nFor example, the equals sign isn't a delimiter and therefore isn't\r\nquoted. So if I get my CA to sign a certificate with some arbitrary\r\nfield value of \"HEY YOU=Testing\", then that will also match the above\r\nusermap. You'd need to write the regex with extreme attention to detail\r\nand a full understanding of the escaping scheme to get around that --\r\nassuming that the scheme is generally parsable with regexes to begin\r\nwith.\r\n\r\n> I'm going to test this patch with some UTF-8 DNs later today; I'll share my findings.\r\n\r\nUTF-8 has the opposite issue; it's escaped in a way that makes it\r\nunusable in a regex match. For example, say I have a (simple for the\r\nsake of example, but broken as noted above) usermap of\r\n\r\n dn \"/^CN=([^,]*).*$\" \\1\r\n\r\nwhich is supposed to emulate the functionality of the \"clientname=CN\"\r\nmode, and two users named \"postgres\" and \"οδυσσέας\". 
The \"CN=postgres\"\r\nuser will work just fine, but the UTF-8 CN of \"οδυσσέας\" will be\r\nescaped into \"\\U03BF\\U03B4\\U03C5\\U03C3\\U03C3\\U03AD\\U03B1\\U03C2\" and\r\nfail to match the internal user. (I'm not seeing an RFC describe the\r\n\"\\U\" escaping scheme; maybe it's OpenSSL-specific?)\r\n\r\n--Jacob\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Wed, 20 Jan 2021 23:33:02 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
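The two injection cases Jacob describes above (an embedded comma, and an unescaped equals sign inside an attribute value) can be sketched in a few lines of Python; the DNs and patterns here are hypothetical illustrations for the discussion, not taken from the patch's test suite:

```python
import re

# Usermap-style pattern from the discussion: match any DN whose OU is
# "Testing". It implicitly assumes commas only ever separate RDNs.
pattern = re.compile(r"^.*OU=Testing,.*$")

# Legitimate certificate subject.
legit = "CN=andrew,OU=Testing,O=test,C=US"

# OU value contains a literal comma ("Testing, or something"): without
# RFC 4514 escaping, the regex cannot tell it from an RDN separator.
comma_injected = "CN=mallory,OU=Testing, or something,O=test,C=US"

# Some attribute's *value* is "HEY YOU=Testing": since '=' is never
# escaped, "YOU=Testing" contains the substring "OU=Testing".
equals_injected = "CN=mallory,O=HEY YOU=Testing,C=US"

assert pattern.match(legit)
assert pattern.match(comma_injected)   # unintended match
assert pattern.match(equals_injected)  # unintended match
```

Both forged subjects satisfy the pattern, which is exactly the "matches too much" failure mode discussed in the thread.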
{
"msg_contents": "> On 20 Jan 2021, at 20:07, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Mon, 2021-01-18 at 11:23 +0100, Daniel Gustafsson wrote:\n>> + /* use commas instead of slashes */\n>> + X509_NAME_print_ex(bio, x509name, 0, XN_FLAG_SEP_COMMA_PLUS);\n>> I don't have strong opinions on whether we shold use slashes or commas, but I\n>> think it needs to be documented that commas are required since slashes is the\n>> more common way to print a DN.\n> \n> There's also a XN_FLAG_RFC2253 flag, which claims to print in an RFC-\n> compatible escape system. (It includes XN_FLAG_SEP_COMMA_PLUS.) I think\n> NSS uses a similar RFC-based escape scheme, but I haven't checked to\n> see whether it's fully compatible.\n> \n> I think you'll want to be careful to specify the format as much as\n> possible, both to make sure that other backend TLS implementations can\n> actually use the same escaping system and to ensure that user regexes\n> don't suddenly start matching different things at some point in the\n> future.\n\nReading more on this it seems we would essentially have to go with RFC 4514, as\nit's the closest to a standard which seems to exist. Having done a lot\nresearch on this topic, do you have a gut feeling on direction?\n\nThe OpenSSL X509_NAME_cmp function use RFC 5280 section 7.1 and RFC 4517\nsection 4.2.15 (which in turn reference RFC 4514 for the DN string format).\nlibnss has CERT_AsciiToName which is referencing RFCs 1485, 1779 and 2253 in\nlib/certdb/alg1485.c. Comparing the two would be interesting.\n\n> Even when using RFC 2253 (now 4514) escaping, there are things left to the implementation, such as the order of AVAs inside multivalued RDNs. 
The RFC warns not to consider these representations to be canonical forms, so it's possible that switching (or upgrading) the TLS library in use could force users to change their regexes at some point in the future.\n\nRight, multi-value RDN's are defined as sets so C=US,A=B+C=D is equivalent to\nC=US,C=D+A=B according to RFC 5280.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n",
"msg_date": "Tue, 26 Jan 2021 13:49:32 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Tue, 2021-01-26 at 13:49 +0100, Daniel Gustafsson wrote:\r\n> Reading more on this it seems we would essentially have to go with RFC 4514, as\r\n> it's the closest to a standard which seems to exist. Having done a lot\r\n> research on this topic, do you have a gut feeling on direction?\r\n\r\nYeah, if we're going to use string matching then I agree with the use\r\nof RFC 4514. I think a string-based approach is going to work for only\r\nthe most simple cases, though. Anything more complicated than plain\r\nASCII and a fixed base DN is going to become pretty unmanageable with a\r\nregex; you'd need a filter system of some sort that understands DN\r\nstructure. The documentation should be clear on the limitations.\r\n\r\n> The OpenSSL X509_NAME_cmp function use RFC 5280 section 7.1 and RFC 4517\r\n> section 4.2.15 (which in turn reference RFC 4514 for the DN string format).\r\n> libnss has CERT_AsciiToName which is referencing RFCs 1485, 1779 and 2253 in\r\n> lib/certdb/alg1485.c. Comparing the two would be interesting.\r\n\r\nYeah. I'll poke around a bit.\r\n\r\n> Right, multi-value RDN's are defined as sets so C=US,A=B+C=D is equivalent to\r\n> C=US,C=D+A=B according to RFC 5280.\r\n\r\nOther potential differences, by my understanding of that RFC, include\r\nthe quoting of the \"escaped\" character class (e.g. a comma can be\r\nescaped as either \"\\,\" or \"\\2C\"), the case of hex characters (\"\\FF\" vs\r\n\"\\ff\"), and the option to not quote unprintable control characters\r\n(e.g. ASCII DEL, 0x7F, can be included verbatim or quoted as \"\\7F\").\r\n\r\n--Jacob\r\n",
"msg_date": "Tue, 26 Jan 2021 18:43:03 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
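The escaping latitude Jacob lists above means one attribute value has several legal RFC 4514 spellings, so comparing *encoded* DN strings is fragile. A toy Python decoder (hypothetical, not part of the patch or of any library discussed here) makes the point concrete:

```python
import re

# Three RFC 4514-valid spellings of the same commonName value "a,b":
# the comma can be escaped as "\," or as the hex pair "\2C" / "\2c".
encodings = [r"CN=a\,b", r"CN=a\2Cb", r"CN=a\2cb"]

def unescape_rfc4514(s: str) -> str:
    """Toy decoder for the backslash escapes of RFC 4514 section 3."""
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and re.fullmatch(r"[0-9A-Fa-f]{2}", s[i + 1:i + 3]):
            out.append(chr(int(s[i + 1:i + 3], 16)))  # hex-pair escape
            i += 3
        elif s[i] == "\\" and i + 1 < len(s):
            out.append(s[i + 1])                      # single-char escape
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

# All three decode to the same attribute value, so string comparison of
# the encoded forms would wrongly treat equal DNs as different.
assert {unescape_rfc4514(e) for e in encodings} == {"CN=a,b"}
```

A regex written against one spelling silently fails to match the others, which is why the thread keeps steering toward structural DN matching rather than string equality.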
{
"msg_contents": "On Tue, 2021-01-26 at 18:43 +0000, Jacob Champion wrote:\r\n> On Tue, 2021-01-26 at 13:49 +0100, Daniel Gustafsson wrote:\r\n> > The OpenSSL X509_NAME_cmp function use RFC 5280 section 7.1 and RFC 4517\r\n> > section 4.2.15 (which in turn reference RFC 4514 for the DN string format).\r\n> > libnss has CERT_AsciiToName which is referencing RFCs 1485, 1779 and 2253 in\r\n> > lib/certdb/alg1485.c. Comparing the two would be interesting.\r\n> \r\n> Yeah. I'll poke around a bit.\r\n\r\nHere's some output from a test program I threw together, which parses\r\nidentical DER using OpenSSL and NSS and writes the corresponding string\r\nrepresentation.\r\n\r\n> input/basic.conf:\r\n> ssl: CN=pchampion,OU=VMware\r\n> nss: CN=pchampion,OU=VMware\r\n> \r\n> input/escape.conf:\r\n> ssl: CN=\\,\\+\\\\\\;\\\"\\<\\>\r\n> nss: CN=\",+\\\\;\\\"<>\"\r\n> \r\n> input/multivalue.conf:\r\n> ssl: CN=pchampion+SN=Champion+GN=Jacob,OU=VMware\r\n> nss: givenName=Jacob+SN=Champion+CN=pchampion,OU=VMware\r\n> \r\n> input/unicode.conf:\r\n> ssl: CN=οδυσσέας,OU=VMware\r\n> nss: CN=οδυσσέας,OU=VMware\r\n> \r\n> input/unprintable.conf:\r\n> ssl: CN=\\01\\,\\02\\,\\03,OU=\\01\\02\\03\r\n> nss: CN=\"\\01,\\02,\\03\",OU=\\01\\02\\03\r\n\r\nbasic.conf is exactly what it looks like: CN=pchampion,OU=VMware. Both\r\nimplementations agree.\r\n\r\nescape.conf contains a CN with the literal value\r\n\r\n ,+\\;\"<>\r\n\r\nand you can see that NSS doesn't follow RFC 4514 here; it uses the\r\nolder double-quoting form of escaping. There's a 15-year-old bug on\r\nthis in NSS [1].\r\n\r\nmultivalue.conf contains a multivalued AVA with commonName \"pchampion\",\r\ngivenName \"Jacob\", and surname \"Champion\". They aren't sorted in the\r\nsame order, and the implementations even disagree on how to represent\r\nthe givenName attribute. 
(I'm not convinced that either choice is RFC-\r\n4514-compliant; it doesn't look like GN is registered with IANA as a\r\nshort name for givenName.)\r\n\r\nunicode.conf contains a commonName of \"οδυσσέας\". Both implementations\r\nagree, but the only way I was able to get OpenSSL to produce this\r\n(rather than a string of escaped hex) was by using the flags\r\n\r\n XN_FLAG_RFC2253 & ~ASN1_STRFLGS_ESC_MSB\r\n\r\nin the call to X509_NAME_print_ex(). This should work fine for a\r\ndatabase encoding of UTF-8, but would we need to convert for other\r\nencodings? Also, I'm not sure how this would handle certificates that\r\naren't UTF-8 encoded. It looks like some UCS variants are legal?\r\n\r\nunprintable.conf contains the bytes 0x01, 0x02, and 0x03 in the\r\ncommonName and organizationalUnit. They're backslash-escaped by both\r\nimplementations, but if you add any other printable escaped characters\r\n(such as comma, in the CN example here) NSS will still double-quote the\r\nwhole thing.\r\n\r\n--Jacob\r\n\r\n[1] https://bugzilla.mozilla.org/show_bug.cgi?id=355096\r\n",
"msg_date": "Wed, 27 Jan 2021 20:14:53 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "\nOn 1/27/21 3:14 PM, Jacob Champion wrote:\n> On Tue, 2021-01-26 at 18:43 +0000, Jacob Champion wrote:\n>> On Tue, 2021-01-26 at 13:49 +0100, Daniel Gustafsson wrote:\n>>> The OpenSSL X509_NAME_cmp function use RFC 5280 section 7.1 and RFC 4517\n>>> section 4.2.15 (which in turn reference RFC 4514 for the DN string format).\n>>> libnss has CERT_AsciiToName which is referencing RFCs 1485, 1779 and 2253 in\n>>> lib/certdb/alg1485.c. Comparing the two would be interesting.\n>> Yeah. I'll poke around a bit.\n> Here's some output from a test program I threw together, which parses\n> identical DER using OpenSSL and NSS and writes the corresponding string\n> representation.\n>\n>> input/basic.conf:\n>> ssl: CN=pchampion,OU=VMware\n>> nss: CN=pchampion,OU=VMware\n>>\n>> input/escape.conf:\n>> ssl: CN=\\,\\+\\\\\\;\\\"\\<\\>\n>> nss: CN=\",+\\\\;\\\"<>\"\n>>\n>> input/multivalue.conf:\n>> ssl: CN=pchampion+SN=Champion+GN=Jacob,OU=VMware\n>> nss: givenName=Jacob+SN=Champion+CN=pchampion,OU=VMware\n>>\n>> input/unicode.conf:\n>> ssl: CN=οδυσσέας,OU=VMware\n>> nss: CN=οδυσσέας,OU=VMware\n>>\n>> input/unprintable.conf:\n>> ssl: CN=\\01\\,\\02\\,\\03,OU=\\01\\02\\03\n>> nss: CN=\"\\01,\\02,\\03\",OU=\\01\\02\\03\n> basic.conf is exactly what it looks like: CN=pchampion,OU=VMware. Both\n> implementations agree.\n>\n> escape.conf contains a CN with the literal value\n>\n> ,+\\;\"<>\n>\n> and you can see that NSS doesn't follow RFC 4514 here; it uses the\n> older double-quoting form of escaping. There's a 15-year-old bug on\n> this in NSS [1].\n>\n> multivalue.conf contains a multivalued AVA with commonName \"pchampion\",\n> givenName \"Jacob\", and surname \"Champion\". They aren't sorted in the\n> same order, and the implementations even disagree on how to represent\n> the givenName attribute. 
(I'm not convinced that either choice is RFC-\n> 4514-compliant; it doesn't look like GN is registered with IANA as a\n> short name for givenName.)\n>\n> unicode.conf contains a commonName of \"οδυσσέας\". Both implementations\n> agree, but the only way I was able to get OpenSSL to produce this\n> (rather than a string of escaped hex) was by using the flags\n>\n> XN_FLAG_RFC2253 & ~ASN1_STRFLGS_ESC_MSB\n>\n> in the call to X509_NAME_print_ex(). This should work fine for a\n> database encoding of UTF-8, but would we need to convert for other\n> encodings? Also, I'm not sure how this would handle certificates that\n> aren't UTF-8 encoded. It looks like some UCS variants are legal?\n>\n> unprintable.conf contains the bytes 0x01, 0x02, and 0x03 in the\n> commonName and organizationalUnit. They're backslash-escaped by both\n> implementations, but if you add any other printable escaped characters\n> (such as comma, in the CN example here) NSS will still double-quote the\n> whole thing.\n>\n> --Jacob\n>\n> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=355096\n\n\n\nOK, that bug is a bit ugly.\n\n\nI'm not sure where we want to go with the present patch. Do we want to\ngo with what we have and document the limitations more, or try to do\nsomething more elaborate? If so, exactly what?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 27 Jan 2021 15:42:10 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Wed, 2021-01-27 at 15:42 -0500, Andrew Dunstan wrote:\r\n> I'm not sure where we want to go with the present patch. Do we want to\r\n> go with what we have and document the limitations more, or try to do\r\n> something more elaborate? If so, exactly what?\r\n\r\nAfter sleeping on it:\r\n\r\nI think that if you expect the 99% use case to be in the vein of what\r\nyou originally proposed:\r\n\r\n dn /^C=US,ST=North.Carolina,O=test,OU=eng,CN=andrew$ andrew\r\n\r\nwhere the *entire* DN is pinned, there are no characters outside the\r\nASCII subset, and no subgroup matching is required to find the mapped\r\nusername (i.e. there's one line for every user of the system), then\r\nthis approach would work. (I'd still recommend switching to use the RFC\r\nflag to OpenSSL, to ease future improvements.) There should be a bunch\r\nof warning documentation saying not to do anything more complex unless\r\nyou're an expert, and that the exact regular expression needed may\r\nchange depending on the TLS backend.\r\n\r\nIf you want to add UTF-8 support to the above, so that things outside\r\nASCII can be matched nicely, then the ASN1_STRFLGS_ESC_MSB flag should\r\nbe removed from the _print_ex() call per OpenSSL documentation, and we\r\nshould investigate how it plays with other non-UTF-8 database\r\nencodings. That seems like work but not a huge amount of it.\r\n\r\nBut if you think users are going to try to match more complex regular\r\nexpressions, for example to be able to peel out a portion of the DN to\r\nuse as a mapped username, or match just a few specific RDNs for\r\nauthorization, then I think a more elaborate approach is needed to\r\navoid handing people a dangerous tool. 
Most of the DN-matching regex\r\nexamples I've seen on self-help sites have been wrong, in that they\r\nmatch too much.\r\n\r\nUnfortunately I don't really know what that solution should look like.\r\nA DSL for filtering on RDNs would be a lot of work, but it could\r\npotentially allow LDAP to be mapped through pg_ident as well?\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 28 Jan 2021 16:39:29 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "\nOn 1/28/21 11:39 AM, Jacob Champion wrote:\n> On Wed, 2021-01-27 at 15:42 -0500, Andrew Dunstan wrote:\n>> I'm not sure where we want to go with the present patch. Do we want to\n>> go with what we have and document the limitations more, or try to do\n>> something more elaborate? If so, exactly what?\n> After sleeping on it:\n>\n> I think that if you expect the 99% use case to be in the vein of what\n> you originally proposed:\n>\n> dn /^C=US,ST=North.Carolina,O=test,OU=eng,CN=andrew$ andrew\n>\n> where the *entire* DN is pinned, there are no characters outside the\n> ASCII subset, and no subgroup matching is required to find the mapped\n> username (i.e. there's one line for every user of the system), then\n> this approach would work. \n\n\nI think this is the 99% use case, TBH. It's certainly what I was\noriginally asked for.\n\n\n> (I'd still recommend switching to use the RFC\n> flag to OpenSSL, to ease future improvements.) There should be a bunch\n> of warning documentation saying not to do anything more complex unless\n> you're an expert, and that the exact regular expression needed may\n> change depending on the TLS backend.\n\n\nI'll play around with the RFC flag.\n\n\n>\n> If you want to add UTF-8 support to the above, so that things outside\n> ASCII can be matched nicely, then the ASN1_STRFLGS_ESC_MSB flag should\n> be removed from the _print_ex() call per OpenSSL documentation, and we\n> should investigate how it plays with other non-UTF-8 database\n> encodings. That seems like work but not a huge amount of it.\n\n\nHow about if we remove ASN1_STRFLGS_ESC_MSB when the server encoding is\nUTF8? 
That should be an extremely simple change.\n\n\n> But if you think users are going to try to match more complex regular\n> expressions, for example to be able to peel out a portion of the DN to\n> use as a mapped username, or match just a few specific RDNs for\n> authorization, then I think a more elaborate approach is needed to\n> avoid handing people a dangerous tool. Most of the DN-matching regex\n> examples I've seen on self-help sites have been wrong, in that they\n> match too much.\n>\n> Unfortunately I don't really know what that solution should look like.\n> A DSL for filtering on RDNs would be a lot of work, but it could\n> potentially allow LDAP to be mapped through pg_ident as well?\n>\n\n\nIn the end it will be up to users to come up with expressions that meet\ntheir usage. Yes they could get it wrong, but then they can get so many\nthings wrong ;-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 28 Jan 2021 17:10:35 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "> On 28 Jan 2021, at 23:10, Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 1/28/21 11:39 AM, Jacob Champion wrote:\n>> \n>> Unfortunately I don't really know what that solution should look like.\n>> A DSL for filtering on RDNs would be a lot of work, but it could\n>> potentially allow LDAP to be mapped through pg_ident as well\n> \n> In the end it will be up to users to come up with expressions that meet\n> their usage. Yes they could get it wrong, but then they can get so many\n> things wrong ;-)\n\nMy main concern with this isn't that it's easy to get it wrong, but that it may\nend up being hard to get it right (with false positives in the auth path as a\nresult). Right now I'm not sure where it leans.\n\nMaybe it will be easier to judge the proposal when the documentation has been\nupdated warnings for the potential pitfalls?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 29 Jan 2021 14:18:09 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "\nOn 1/29/21 8:18 AM, Daniel Gustafsson wrote:\n>> On 28 Jan 2021, at 23:10, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> On 1/28/21 11:39 AM, Jacob Champion wrote:\n>>> Unfortunately I don't really know what that solution should look like.\n>>> A DSL for filtering on RDNs would be a lot of work, but it could\n>>> potentially allow LDAP to be mapped through pg_ident as well\n>> In the end it will be up to users to come up with expressions that meet\n>> their usage. Yes they could get it wrong, but then they can get so many\n>> things wrong ;-)\n> My main concern with this isn't that it's easy to get it wrong, but that it may\n> end up being hard to get it right (with false positives in the auth path as a\n> result). Right now I'm not sure where it leans.\n>\n> Maybe it will be easier to judge the proposal when the documentation has been\n> updated warnings for the potential pitfalls?\n>\n\nFeel free to make suggestions for wording :-)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 29 Jan 2021 08:59:01 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "\nOn 1/28/21 5:10 PM, Andrew Dunstan wrote:\n>\n>> (I'd still recommend switching to use the RFC\n>> flag to OpenSSL, to ease future improvements.) There should be a bunch\n>> of warning documentation saying not to do anything more complex unless\n>> you're an expert, and that the exact regular expression needed may\n>> change depending on the TLS backend.\n>\n> I'll play around with the RFC flag.\n>\n>\n>> If you want to add UTF-8 support to the above, so that things outside\n>> ASCII can be matched nicely, then the ASN1_STRFLGS_ESC_MSB flag should\n>> be removed from the _print_ex() call per OpenSSL documentation, and we\n>> should investigate how it plays with other non-UTF-8 database\n>> encodings. That seems like work but not a huge amount of it.\n>\n> How about if we remove ASN1_STRFLGS_ESC_MSB when the server encoding is\n> UTF8? That should be an extremely simple change.\n>\n>\n>\n\nOf course, we don't have any idea what the database is at this stage, so\nthat wasn't well thought out.\n\n\nI'm inclined at this stage just to turn on RFC2253. If someone wants to\ndeal with UTF8 (or other encodings) at a later stage they can.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 29 Jan 2021 10:10:32 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On 1/29/21 10:10 AM, Andrew Dunstan wrote:\n> On 1/28/21 5:10 PM, Andrew Dunstan wrote:\n>>> (I'd still recommend switching to use the RFC\n>>> flag to OpenSSL, to ease future improvements.) There should be a bunch\n>>> of warning documentation saying not to do anything more complex unless\n>>> you're an expert, and that the exact regular expression needed may\n>>> change depending on the TLS backend.\n>> I'll play around with the RFC flag.\n>>\n>>\n>>> If you want to add UTF-8 support to the above, so that things outside\n>>> ASCII can be matched nicely, then the ASN1_STRFLGS_ESC_MSB flag should\n>>> be removed from the _print_ex() call per OpenSSL documentation, and we\n>>> should investigate how it plays with other non-UTF-8 database\n>>> encodings. That seems like work but not a huge amount of it.\n>> How about if we remove ASN1_STRFLGS_ESC_MSB when the server encoding is\n>> UTF8? That should be an extremely simple change.\n>>\n>>\n>>\n> Of course, we don't have any idea what the database is at this stage, so\n> that wasn't well thought out.\n>\n>\n> I'm inclined at this stage just to turn on RFC2253. If someone wants to\n> deal with UTF8 (or other encodings) at a later stage they can.\n>\n>\n\nHere's a version of the patch that does it that way. 
For fun I have\nmodified the certificate so it has two OU fields in the DN.\n\nFinding out how to generate the new cert without regenerating everything\nelse was also fun :-) Here's what I did in a pristine source with patch\napplied, but where configure hasn't been run:\n\n cd src/test/ssl\n touch ../../Makefile.global\n rm -f ssl/client-dn.crt ssl/client-dn.key\n touch ssl/root_ca-certindex\n echo 01> ssl/root_ca.srl\n make ssl/client-dn.crt\n rm -rf ssl/*certindex* ssl/root_ca.srl ssl/new_certs_dir\n rm ../../Makefile.global\n\nMaking incremental additions to the certificate set easier wouldn't be a\nbad thing.\n\nI wonder if we should really be setting 1 as the serial number, though.\nMight it not be better to use, say, `date +%Y%m%d01` rather like we do\nwith catalog version numbers?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 30 Jan 2021 16:18:12 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
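Andrew's `date +%Y%m%d01` suggestion above yields serials that sort by issue date, the same way catalog version numbers do. A minimal sketch of the scheme (hypothetical helper, not from the patch):

```python
from datetime import date

def date_serial(d: date, counter: int = 1) -> int:
    """Serial in the style of `date +%Y%m%d01`: YYYYMMDDNN."""
    return int(d.strftime("%Y%m%d")) * 100 + counter

# First certificate issued on 2021-01-30:
assert date_serial(date(2021, 1, 30)) == 2021013001

# Serials from a later date always compare greater, even against the
# highest counter value from the previous day:
assert date_serial(date(2021, 1, 31)) > date_serial(date(2021, 1, 30), 99)
```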
{
"msg_contents": "On Sat, 2021-01-30 at 16:18 -0500, Andrew Dunstan wrote:\r\n> Here's a version of the patch that does it that way. For fun I have\r\n> modified the certificate so it has two OU fields in the DN.\r\n\r\n> diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml\r\n> [...]\r\n> + <literal>Common Name (CN)</literal>. If instead you specify\r\n> + <literal>clientname=DN</literal> the username is matched against the\r\n> + entire <literal>Distinguished Name (DN)</literal> of the certificate.\r\n> + This option is probably best used in comjunction with a username map.\r\n\r\nsp: comjunction -> conjunction\r\n\r\n> diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c\r\n> [...]\r\n> +\r\n> +\t\tbio = BIO_new(BIO_s_mem());\r\n> +\t\tif (!bio)\r\n> +\t\t{\r\n> +\t\t\tpfree(port->peer_cn);\r\n> +\t\t\tport->peer_cn = NULL;\r\n> +\t\t\treturn -1;\r\n> +\t\t}\r\n\r\nShould this case have a log entry, DEBUG or otherwise?\r\n\r\n> +\t\t/* use commas instead of slashes */\r\n> +\t\tX509_NAME_print_ex(bio, x509name, 0, XN_FLAG_RFC2253);\r\n> +\t\tBIO_get_mem_ptr(bio, &bio_buf);\r\n> +\t\tpeer_dn = MemoryContextAlloc(TopMemoryContext, bio_buf->length + 1);\r\n> +\t\tmemcpy(peer_dn, bio_buf->data, bio_buf->length);\r\n> +\t\tpeer_dn[bio_buf->length] = '\\0';\r\n> +\t\tif (bio_buf->length != strlen(peer_dn))\r\n> +\t\t{\r\n> +\t\t\tereport(COMMERROR,\r\n> +\t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> +\t\t\t\t\t errmsg(\"SSL certificate's distinguished name contains embedded null\")));\r\n> +\t\t\tBIO_free(bio);\r\n> +\t\t\tpfree(peer_dn);\r\n> +\t\t\tpfree(port->peer_cn);\r\n> +\t\t\tport->peer_cn = NULL;\r\n> +\t\t\treturn -1;\r\n> +\t\t}\r\n\r\nSince the definition of XN_FLAG_RFC2253 includes ASN1_STRFLGS_ESC_CTRL,\r\nthis case shouldn't be possible. I think it's still worth it to double-\r\ncheck, but maybe it should assert as well? 
Or at least have a comment\r\nexplaining that this is an internal error and not a client error.\r\n\r\n> diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl\r\n> +# correct client cert using whole DN\r\n> +my $dn_connstr = $common_connstr;\r\n> +$dn_connstr =~ s/certdb/certdb_dn/;\r\n> +\r\n> +test_connect_ok(\r\n> +\t$dn_connstr,\r\n> +\t\"user=ssltestuser sslcert=ssl/client-dn.crt sslkey=ssl/client-dn_tmp.key\",\r\n> +\t\"certificate authorization succeeds with DN mapping\"\r\n> +);\r\n\r\nA negative case for the new DN code paths would be good to add.\r\n\r\n--Jacob\r\n",
"msg_date": "Mon, 8 Feb 2021 18:29:10 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
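The `bio_buf->length != strlen(peer_dn)` test in the patch hunk quoted above catches embedded NUL bytes because C's strlen stops at the first NUL, while the BIO length counts every byte. A quick Python analogue of that check (illustrative only, with hypothetical example subjects):

```python
def c_strlen(buf: bytes) -> int:
    """Length as C's strlen would report it: stop at the first NUL."""
    nul = buf.find(b"\x00")
    return len(buf) if nul == -1 else nul

clean = b"CN=andrew,OU=eng"
evil = b"CN=andrew\x00,OU=eng"  # NUL smuggled into the subject

# The patch's check compares buffer length against the strlen of the
# NUL-terminated copy; a mismatch means an embedded NUL.
assert c_strlen(clean) == len(clean)
assert c_strlen(evil) != len(evil)  # mismatch => reject the certificate
```

As Jacob notes, RFC 2253 output with ASN1_STRFLGS_ESC_CTRL should already escape control bytes, so in the patch this is belt-and-braces against an internal error rather than a client-triggerable path.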
{
"msg_contents": "On Sat, 2021-01-30 at 16:18 -0500, Andrew Dunstan wrote:\r\n> cd src/test/ssl\r\n> touch ../../Makefile.global\r\n> rm -f ssl/client-dn.crt ssl/client-dn.key\r\n> touch ssl/root_ca-certindex\r\n> echo 01> ssl/root_ca.srl\r\n\r\nNote that, on my machine at least, the root_ca serial counter is at 03\r\nafter running `make sslfiles`. 1 and 2 are already assigned to\r\nserver_ca and client_ca, respectively.\r\n\r\nSpeaking of which, what's the reason you need to recreate the root_ca\r\nmachinery when it's the client_ca that issues the new certificate?\r\n\r\n> make ssl/client-dn.crt\r\n> rm -rf ssl/*certindex* ssl/root_ca.srl ssl/new_certs_dir\r\n> rm ../../Makefile.global\r\n> \r\n> Making incremental additions to the certificate set easier wouldn't be a\r\n> bad thing.\r\n> \r\n> I wonder if we should really be setting 1 as the serial number, though.\r\n> Might it not be better to use, say, `date +%Y%m%d01` rather like we do\r\n> with catalog version numbers?\r\n\r\nYou could also check in the CA state files.\r\n\r\n--Jacob\r\n",
"msg_date": "Mon, 22 Feb 2021 18:00:51 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Sat, 2021-01-30 at 16:18 -0500, Andrew Dunstan wrote:\r\n> Making incremental additions to the certificate set easier wouldn't be a\r\n> bad thing.\r\n> \r\n> I wonder if we should really be setting 1 as the serial number, though.\r\n> Might it not be better to use, say, `date +%Y%m%d01` rather like we do\r\n> with catalog version numbers?\r\n\r\nI have been experimenting a bit with both of these suggestions; hope to\r\nhave something in time for commitfest on Monday. Writing new tests for\r\nNSS has run into the same problems you've mentioned.\r\n\r\nFYI, I've pulled the port->peer_dn functionality you've presented here\r\ninto my authenticated identity patchset at [1].\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/flat/c55788dd1773c521c862e8e0dddb367df51222be.camel%40vmware.com\r\n",
"msg_date": "Fri, 26 Feb 2021 19:55:18 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "\nOn 2/26/21 2:55 PM, Jacob Champion wrote:\n> On Sat, 2021-01-30 at 16:18 -0500, Andrew Dunstan wrote:\n>> Making incremental additions to the certificate set easier wouldn't be a\n>> bad thing.\n>>\n>> I wonder if we should really be setting 1 as the serial number, though.\n>> Might it not be better to use, say, `date +%Y%m%d01` rather like we do\n>> with catalog version numbers?\n> I have been experimenting a bit with both of these suggestions; hope to\n> have something in time for commitfest on Monday. Writing new tests for\n> NSS has run into the same problems you've mentioned.\n>\n> FYI, I've pulled the port->peer_dn functionality you've presented here\n> into my authenticated identity patchset at [1].\n>\n> --Jacob\n>\n> [1] https://www.postgresql.org/message-id/flat/c55788dd1773c521c862e8e0dddb367df51222be.camel%40vmware.com\n\n\nCool.\n\n\nI think the thing that's principally outstanding w.r.t. this patch is\nwhat format we should use to extract the DN. Should we use RFC2253,\nwhich reverses the field order, as has been suggested upthread and is in\nthe latest patch? I'm slightly worried that it might be a POLA\nviolation. But I don't have terribly strong feelings about it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 26 Feb 2021 15:40:14 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Sat, Jan 30, 2021 at 04:18:12PM -0500, Andrew Dunstan wrote:\n> @@ -610,6 +610,19 @@ hostnogssenc <replaceable>database</replaceable> <replaceable>user</replaceabl\n> the verification of client certificates with any authentication\n> method that supports <literal>hostssl</literal> entries.\n> </para>\n> + <para>\n> + On any record using client certificate authentication, that is one\n> + using the <literal>cert</literal> authentication method or one\n> + using the <literal>clientcert</literal> option, you can specify\n\nI suggest instead of \"that is\" to instead parenthesize this part:\n| (one using the <literal>cert</literal> authentication method or the\n| <literal>clientcert</literal> option), you can specify\n\n> + which part of the client certificate credentials to match using\n> + the <literal>clientname</literal> option. This option can have one\n> + of two values. If you specify <literal>clientname=CN</literal>, which\n> + is the default, the username is matched against the certificate's\n> + <literal>Common Name (CN)</literal>. If instead you specify\n> + <literal>clientname=DN</literal> the username is matched against the\n> + entire <literal>Distinguished Name (DN)</literal> of the certificate.\n> + This option is probably best used in comjunction with a username map.\n\nspell: conjunction\n\n\n",
"msg_date": "Sat, 27 Feb 2021 13:37:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Fri, 2021-02-26 at 15:40 -0500, Andrew Dunstan wrote:\r\n> I think the thing that's principally outstanding w.r.t. this patch is\r\n> what format we should use to extract the DN.\r\n\r\nThat and the warning label for sharp edges.\r\n\r\n> Should we use RFC2253,\r\n> which reverses the field order, as has been suggested upthread and is in\r\n> the latest patch? I'm slightly worried that it might be a POLA\r\n> violation.\r\n\r\nAll I can provide is the hindsight from httpd. [1] is the thread that\r\ngave rise to its LegacyDNStringFormat.\r\n\r\nSince RFC 2253 isn't a canonical encoding scheme, and we've already\r\nestablished that different TLS implementations do things slightly\r\ndifferently even when providing RFC-compliant output, maybe it doesn't\r\nmatter in the end: to get true compatibility, we need to implement a DN\r\nmatching scheme rather than checking string equality. But using RFC2253\r\nfor version 1 of the feature at least means that the *simplest* cases\r\nare the same across backends, since I doubt the NSS implementation is\r\ngoing to try to recreate OpenSSL's custom format.\r\n\r\n--Jacob\r\n\r\n[1] https://lists.apache.org/thread.html/2055b56985c69e7a6977151bf9817a0f982a4ad3b78a6a1984977fd0%401289507617%40%3Cusers.httpd.apache.org%3E\r\n",
"msg_date": "Tue, 2 Mar 2021 22:03:14 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 3:03 AM Jacob Champion <pchampion@vmware.com> wrote:\n\n> On Fri, 2021-02-26 at 15:40 -0500, Andrew Dunstan wrote:\n> > I think the thing that's principally outstanding w.r.t. this patch is\n> > what format we should use to extract the DN.\n>\n> That and the warning label for sharp edges.\n>\n> > Should we use RFC2253,\n> > which reverses the field order, as has been suggested upthread and is in\n> > the latest patch? I'm slightly worried that it might be a POLA\n> > violation.\n>\n> All I can provide is the hindsight from httpd. [1] is the thread that\n> gave rise to its LegacyDNStringFormat.\n>\n> Since RFC 2253 isn't a canonical encoding scheme, and we've already\n> established that different TLS implementations do things slightly\n> differently even when providing RFC-compliant output, maybe it doesn't\n> matter in the end: to get true compatibility, we need to implement a DN\n> matching scheme rather than checking string equality. But using RFC2253\n> for version 1 of the feature at least means that the *simplest* cases\n> are the same across backends, since I doubt the NSS implementation is\n> going to try to recreate OpenSSL's custom format.\n>\n> --Jacob\n>\n> [1]\n> https://lists.apache.org/thread.html/2055b56985c69e7a6977151bf9817a0f982a4ad3b78a6a1984977fd0%401289507617%40%3Cusers.httpd.apache.org%3E\n>\n\n\nThis patch set no longer applies\nhttp://cfbot.cputube.org/patch_32_2835.log\n\nCan we get a rebase?\n\nI marked the patch \"Waiting on Author\".\n\n\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 4 Mar 2021 15:53:34 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Sat, Jan 30, 2021, at 22:18, Andrew Dunstan wrote:\n> ssl-match-client-cert-dn-v3.patch\n\nSpelling error of \"conjunction\":\n+ This option is probably best used in comjunction with a username map. \n\n/Joel",
"msg_date": "Thu, 04 Mar 2021 20:16:10 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On 3/4/21 2:16 PM, Joel Jacobson wrote:\n> On Sat, Jan 30, 2021, at 22:18, Andrew Dunstan wrote:\n>> ssl-match-client-cert-dn-v3.patch\n>\n> Spelling error of \"conjunction\":\n> + This option is probably best used in comjunction with a\n> username map.\n>\n>\n\n\n\nYeah, fixed this, added a bit more docco, and got rid of some bitrot.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 5 Mar 2021 16:01:38 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 04:01:38PM -0500, Andrew Dunstan wrote:\n> Yeah, fixed this, added a bit more docco, and got rid of some bitrot.\n\nI had a look at this patch. What you have here looks in good shape,\nand I have come comments.\n\n> + This option is probably best used in conjunction with a username map.\n> + The comparison is done with the <literal>DN</literal> in RFC2253 format.\n\nYou should add a link to the RFC, using https://tools.ietf.org/html/\nas a base like the other parts of the docs.\n\n> /* Make sure we have received a username in the certificate */\n> - if (port->peer_cn == NULL ||\n> - strlen(port->peer_cn) <= 0)\n> + peer_username = port->hba->clientcertname == clientCertCN ? port->peer_cn : port->peer_dn;\n\nShould have some parenthesis for clarity. But, couldn't you just\nuse a switch based on ClientCertName to select the data you wish to\nuse for the match? If a new option is added then it is impossible to\nmiss that peer_username needs a value as a compiler would warn on\nthat. \n\n> - (errmsg(\"certificate validation (clientcert=verify-full) failed for user \\\"%s\\\": CN mismatch\",\n> + (errmsg(\"certificate validation (clientcert=verify-full) failed for user \\\"%s\\\": CN/DN mismatch\",\n\nIt would be cleaner to show in this LOG that it is either a CN or a DN\nmismatch, but not both of them. And you can make the difference with\nhba->clientcertname for that.\n\n> + len = X509_NAME_get_text_by_NID(x509name, NID_commonName, NULL, 0);\n> if (len != -1)\n> {\n> - char *peer_cn;\n\nI think that you should keep this local declaration here.\n\n> + /* use commas instead of slashes */\n> + X509_NAME_print_ex(bio, x509name, 0, XN_FLAG_RFC2253);\n\nThe reason is not obvious for the reader here (aka that commas are\nrequired as slashes are common when printing the DN, quoting\nupthread). 
Hence, wouldn't it be better to add a comment explaining\nthat here?\n\n> + BIO_get_mem_ptr(bio, &bio_buf);\n\nBIO_get_mem_ptr() (BIO_ctrl() in the OpenSSL code) returns a status\ncode. I think that we had better check after it.\n\n> + peer_dn = MemoryContextAlloc(TopMemoryContext, bio_buf->length + 1);\n> + memcpy(peer_dn, bio_buf->data, bio_buf->length);\n> + peer_dn[bio_buf->length] = '\\0';\n> + if (bio_buf->length != strlen(peer_dn))\n> + {\n> + ereport(COMMERROR,\n> + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> + errmsg(\"SSL certificate's distinguished name contains embedded null\")));\n> + BIO_free(bio);\n> + pfree(peer_dn);\n> + pfree(port->peer_cn);\n> + port->peer_cn = NULL;\n> + return -1;\n> + }\n> +\n> + BIO_free(bio);\n\nYou could just do one BIO_free() once the memcpy() is done, no?\n\n> @@ -121,7 +121,7 @@ secure_open_server(Port *port)\n> \n> ereport(DEBUG2,\n> (errmsg_internal(\"SSL connection from \\\"%s\\\"\",\n> - port->peer_cn ? port->peer_cn : \"(anonymous)\")));\n> + port->peer_dn ? port->peer_dn : \"(anonymous)\")));\n\nCould it be better for debugging to show both the CN and DN if both\nare available?\n\n> -} ClientCertMode;\n> +} ClientCertMode;\n> +\n> +typedef enum ClientCertName\n> +{\n> + clientCertCN,\n> + clientCertDN\n> +} ClientCertName;\n\nMissing some indentation stuff here.\n\n> +# correct client cert using whole DN\n> +my $dn_connstr = $common_connstr;\n> +$dn_connstr =~ s/certdb/certdb_dn/;\n\nI would use a separate variable rather than enforcing an update of the\nexisting $common_connstr created a couple of lines above.\n\n> +# same thing but with a regex\n> +$dn_connstr =~ s/certdb_dn/certdb_dn_re/;\n\nSame here. This depends on the last variable assignment, which itself\ndepends on the assignment of $common_connstr.\n--\nMichael",
"msg_date": "Wed, 24 Mar 2021 13:54:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On 3/24/21 12:54 AM, Michael Paquier wrote:\n\n[numerous useful comments]\n\n\nOK, here's a new patch. I hope to commit this within a few days.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 26 Mar 2021 09:34:03 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Fri, Mar 26, 2021 at 09:34:03AM -0400, Andrew Dunstan wrote:\n> OK, here's a new patch. I hope to commit this within a few days.\n\nThanks!\n\n+ switch (port->hba->clientcertname)\n+ {\n+ case clientCertDN:\n+ peer_username = port->peer_dn;\n+ break;\n+ default:\n+ peer_username = port->peer_cn;\n+ }\n\nThis does not need a \"default\". I think that you should use \"case\nclientCertCN\" instead here.\n\n+ BIO_get_mem_ptr(bio, &bio_buf);\nNo status checks? OpenSSL calls return 1 on success and 0 on failure,\nso I would check after <= 0 here.\n\n++ if (port->hba->clientcertname == clientCertDN)\n++ {\n++ ereport(LOG,\nMay be better to use a switch() here as well.\n\nIt looks like this patch misses src/test/ssl/ssl/client-dn.crt,\ncausing the SSL tests to fail.\n--\nMichael",
"msg_date": "Mon, 29 Mar 2021 10:57:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 10:57:00AM +0900, Michael Paquier wrote:\n> + switch (port->hba->clientcertname)\n> + {\n> + case clientCertDN:\n> + peer_username = port->peer_dn;\n> + break;\n> + default:\n> + peer_username = port->peer_cn;\n> + }\n> \n> This does not need a \"default\". I think that you should use \"case\n> clientCertCN\" instead here.\n> \n> + BIO_get_mem_ptr(bio, &bio_buf);\n> No status checks? OpenSSL calls return 1 on success and 0 on failure,\n> so I would check after <= 0 here.\n> \n> ++ if (port->hba->clientcertname == clientCertDN)\n> ++ {\n> ++ ereport(LOG,\n> May be better to use a switch() here as well.\n> \n> It looks like this patch misses src/test/ssl/ssl/client-dn.crt,\n> causing the SSL tests to fail.\n\nFor the sake of the archives, this has been applied as of 6d7a6fe with\nall those nits from me addressed.\n--\nMichael",
"msg_date": "Tue, 30 Mar 2021 10:08:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allow matching whole DN from a client certificate"
}
] |
[
{
"msg_contents": "I would like to propose a few points that will help us detect file\ndamage, inconsistencies in files and track actions of users.\n\nOptionally, we could add these fields onto the WAL commit record:\n* 8-byte session start time (from MyStartTime)\n* 2-byte pid (from MyProcPid)\n* 4-byte user oid\nIdentified by a name flag bit, XACT_XINFO_HAS_USERINFO.\nThat allows us to match up transactions with the server log %c option.\nAnother option might allow text username to be added to each commit as well.\n\nXLogLongPageHeaderData has 4 bytes of unused data because of\nalignment. We could use that space for 1) the Xid Epoch value or 2) a\nCRC value - since only WAL records are covered by a CRC, not page or\nfile headers. Perhaps we should add both?\n\nThere are also 2 bytes unused in the XLogRecord header (with 8 byte\nalignment). We could optionally use that space for the pid that wrote\nthe record, but that's not compelling. What can we use those 2 bytes\nfor?\n\nREINDEX VERIFY\nAfter the new index is created, but before we drop the old index:\nCheck whether the two indexes match:\n* checks whether the previous index had pointers to row versions that\ndon't exist\n* checks whether the heap has rows that were not in the old index\nThis approach piggybacks on existing operations. AccessShareLock is\nheld on both indexes before the old one is dropped.\n\nOther ideas are welcome.\n\nThoughts?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 11 Nov 2020 20:56:27 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On 11/11/20 21:56, Simon Riggs wrote:\n> [ŝnip]\n>\n> REINDEX VERIFY\n> After the new index is created, but before we drop the old index:\n> Check whether the two indexes match:\n> * checks whether the previous index had pointers to row versions that\n> don't exist\n> * checks whether the heap has rows that were not in the old index\n> This approach piggybacks on existing operations. AccessShareLock is\n> held on both indexes before the old one is dropped.\n\nFWIW, as long as it's optional (due to the added runtime), it'd be a \nwelcome feature.\n\nMaybe something along the lines of:\n\n REINDEX (verify yes) ....\n\n\n> Other ideas are welcome.\n\nI still have nightmares from an specific customer case w/ shared storage \n(using VxFS) among two postmaster instances ---supposedly could never be \nactive concurrently, not completely sure that it didn't actually \nhappen--- and the corruption that we found there. I seem to remember \nthat they even had scripts to remove the locking when switching over and \nback :S\n\nI don't think Postgres can do much about this, but maybe someone can \ncome up with a suitable countermeasure.\n\n\nJust my .02€\n\nThanks,\n\n / J.L.\n\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 06:15:11 +0100",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "From: Simon Riggs <simon@2ndquadrant.com>\r\n> I would like to propose a few points that will help us detect file\r\n> damage, inconsistencies in files and track actions of users.\r\n\r\nHello, Simon san. Long time no see. I'm happy to see you be back here recently.\r\n\r\nWhat kind of improvement do you expect? What problems would this make detectable?\r\n\r\n\r\n> * 2-byte pid (from MyProcPid)\r\n\r\npid is 4 bytes on Windows. Isn't it also 4 byte on Linux when some kernel parameter is set to a certain value?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Thu, 12 Nov 2020 06:42:21 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Thu, 12 Nov 2020 at 06:42, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Simon Riggs <simon@2ndquadrant.com>\n> > I would like to propose a few points that will help us detect file\n> > damage, inconsistencies in files and track actions of users.\n>\n> Hello, Simon san. Long time no see. I'm happy to see you be back here recently.\n\nThank you, happy to be back. It's good to have the time to contribute again.\n\n> What kind of improvement do you expect? What problems would this make detectable?\n\nIf a rogue user/process is suspected, this would allow you to identify\nmore easily the changes made by specific sessions/users.\n\n> > * 2-byte pid (from MyProcPid)\n>\n> pid is 4 bytes on Windows. Isn't it also 4 byte on Linux when some kernel parameter is set to a certain value?\n\n4 bytes is no problem, thanks for pointing that out.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 12 Nov 2020 07:52:59 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "From: Simon Riggs <simon@2ndquadrant.com>\r\n> If a rogue user/process is suspected, this would allow you to identify\r\n> more easily the changes made by specific sessions/users.\r\n\r\nIsn't that kind of auditing a job of pgAudit or log_statement = mod? Or, does \"more easily\" mean that you find pgAudit complex to use and/or log_statement's overhead is big?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 13 Nov 2020 00:50:30 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Fri, 13 Nov 2020 at 00:50, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Simon Riggs <simon@2ndquadrant.com>\n> > If a rogue user/process is suspected, this would allow you to identify\n> > more easily the changes made by specific sessions/users.\n>\n> Isn't that kind of auditing a job of pgAudit or log_statement = mod? Or, does \"more easily\" mean that you find pgAudit complex to use and/or log_statement's overhead is big?\n\nWell, I designed pgaudit, so yes, I think pgaudit is useful.\n\nHowever, pgaudit works at the statement level, not the data level. So\nusing pgaudit to locate data rows that have changed is fairly hard.\n\nWhat I'm proposing is an option to add 16 bytes onto each COMMIT\nrecord, which is considerably less than turning on full auditing in\npgaudit. This option would allow identifying data at the row level, so\nyou could for example find all rows changed by specific sessions.\nAlso, because it is stored in WAL it will show updates that might no\nlonger exist in the database because the changed row versions might\nhave been vacuumed away. So pgaudit will tell you that happened, but\nhaving extra info in WAL is important also.\n\nSo thank you for the question because it has allowed me to explain why\nit is useful and important.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 13 Nov 2020 11:24:40 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Fri, 13 Nov 2020 at 11:24, Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Fri, 13 Nov 2020 at 00:50, tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > From: Simon Riggs <simon@2ndquadrant.com>\n> > > If a rogue user/process is suspected, this would allow you to identify\n> > > more easily the changes made by specific sessions/users.\n> >\n> > Isn't that kind of auditing a job of pgAudit or log_statement = mod? Or, does \"more easily\" mean that you find pgAudit complex to use and/or log_statement's overhead is big?\n>\n> Well, I designed pgaudit, so yes, I think pgaudit is useful.\n>\n> However, pgaudit works at the statement level, not the data level. So\n> using pgaudit to locate data rows that have changed is fairly hard.\n>\n> What I'm proposing is an option to add 16 bytes onto each COMMIT\n> record, which is considerably less than turning on full auditing in\n> pgaudit. This option would allow identifying data at the row level, so\n> you could for example find all rows changed by specific sessions.\n> Also, because it is stored in WAL it will show updates that might no\n> longer exist in the database because the changed row versions might\n> have been vacuumed away. So pgaudit will tell you that happened, but\n> having extra info in WAL is important also.\n>\n> So thank you for the question because it has allowed me to explain why\n> it is useful and important.\n\nPatch attached to implement \"wal_sessioninfo\" option.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 17 Nov 2020 17:14:14 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 7:24 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n\n>\n> What I'm proposing is an option to add 16 bytes onto each COMMIT\n> record\n>\n\nWould it make sense to write this at the time we write a topxid assignment\nto WAL instead?\n\nOtherwise it won't be accessible to streaming-mode logical decoding.",
"msg_date": "Wed, 18 Nov 2020 14:42:29 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Wed, 18 Nov 2020 at 06:42, Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> On Fri, Nov 13, 2020 at 7:24 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>>\n>>\n>> What I'm proposing is an option to add 16 bytes onto each COMMIT\n>> record\n>\n>\n> Would it make sense to write this at the time we write a topxid assignment to WAL instead?\n>\n> Otherwise it won't be accessible to streaming-mode logical decoding.\n\nDo you mean extend the xl_xact_assignment record? My understanding is\nthat is not sent in all cases, so not sure what you mean by \"instead\".\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 18 Nov 2020 10:23:41 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "Hi Simon,\n\nOn Wed, Nov 18, 2020 at 2:14 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Fri, 13 Nov 2020 at 11:24, Simon Riggs <simon@2ndquadrant.com> wrote:\n> >\n> > On Fri, 13 Nov 2020 at 00:50, tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> > >\n> > > From: Simon Riggs <simon@2ndquadrant.com>\n> > > > If a rogue user/process is suspected, this would allow you to identify\n> > > > more easily the changes made by specific sessions/users.\n> > >\n> > > Isn't that kind of auditing a job of pgAudit or log_statement = mod? Or, does \"more easily\" mean that you find pgAudit complex to use and/or log_statement's overhead is big?\n> >\n> > Well, I designed pgaudit, so yes, I think pgaudit is useful.\n> >\n> > However, pgaudit works at the statement level, not the data level. So\n> > using pgaudit to locate data rows that have changed is fairly hard.\n> >\n> > What I'm proposing is an option to add 16 bytes onto each COMMIT\n> > record, which is considerably less than turning on full auditing in\n> > pgaudit. This option would allow identifying data at the row level, so\n> > you could for example find all rows changed by specific sessions.\n> > Also, because it is stored in WAL it will show updates that might no\n> > longer exist in the database because the changed row versions might\n> > have been vacuumed away. So pgaudit will tell you that happened, but\n> > having extra info in WAL is important also.\n> >\n> > So thank you for the question because it has allowed me to explain why\n> > it is useful and important.\n>\n> Patch attached to implement \"wal_sessioninfo\" option.\n\nYou sent in your patch, wal_sessioninfo.v2.patch to pgsql-hackers on\nNov 18, but you did not post it to the next CommitFest[1]. If this\nwas intentional, then you need to take no action. However, if you\nwant your patch to be reviewed as part of the upcoming CommitFest,\nthen you need to add it yourself before 2021-01-01 AoE[2]. 
Thanks for\nyour contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 29 Dec 2020 16:40:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On 11/18/20 5:23 AM, Simon Riggs wrote:\n> On Wed, 18 Nov 2020 at 06:42, Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n>>\n>> On Fri, Nov 13, 2020 at 7:24 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>>>\n>>>\n>>> What I'm proposing is an option to add 16 bytes onto each COMMIT\n>>> record\n>>\n>>\n>> Would it make sense to write this at the time we write a topxid assignment to WAL instead?\n>>\n>> Otherwise it won't be accessible to streaming-mode logical decoding.\n> \n> Do you mean extend the xl_xact_assignment record? My understanding is\n> that is not sent in all cases, so not sure what you mean by \"instead\".\n\nCraig, can you clarify?\n\nCleysson, you are signed up as a reviewer. Do you know when you'll have \na change to have a look?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 15 Mar 2021 09:01:49 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Mon, 15 Mar 2021 at 21:01, David Steele <david@pgmasters.net> wrote:\n\n> On 11/18/20 5:23 AM, Simon Riggs wrote:\n> > On Wed, 18 Nov 2020 at 06:42, Craig Ringer\n> > <craig.ringer@enterprisedb.com> wrote:\n> >>\n> >> On Fri, Nov 13, 2020 at 7:24 PM Simon Riggs <simon@2ndquadrant.com>\n> wrote:\n> >>>\n> >>>\n> >>> What I'm proposing is an option to add 16 bytes onto each COMMIT\n> >>> record\n> >>\n> >>\n> >> Would it make sense to write this at the time we write a topxid\n> assignment to WAL instead?\n> >>\n> >> Otherwise it won't be accessible to streaming-mode logical decoding.\n> >\n> > Do you mean extend the xl_xact_assignment record? My understanding is\n> > that is not sent in all cases, so not sure what you mean by \"instead\".\n>\n> Craig, can you clarify?\n>\n\nRight. Or write a separate WAL record when the feature is enabled. But it's\nprobably sufficient to write it as an optional chunk on xl_xact_assignment\nrecords. We often defer writing them so we can optimise away xacts that\nnever actually wrote anything, but IIRC we still write one before we write\nany WAL that references the xid. That'd be fine, since we don't need the\ninfo any sooner than that during decoding. I'd have to double check that we\nwrite it in all cases and won't get to that too soon, but I'm pretty sure\nwe do...",
"msg_date": "Thu, 18 Mar 2021 14:19:52 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 6:20 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> On Mon, 15 Mar 2021 at 21:01, David Steele <david@pgmasters.net> wrote:\n>>\n>> On 11/18/20 5:23 AM, Simon Riggs wrote:\n>> > On Wed, 18 Nov 2020 at 06:42, Craig Ringer\n>> > <craig.ringer@enterprisedb.com> wrote:\n>> >>\n>> >> On Fri, Nov 13, 2020 at 7:24 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>> >>>\n>> >>>\n>> >>> What I'm proposing is an option to add 16 bytes onto each COMMIT\n>> >>> record\n>> >>\n>> >>\n>> >> Would it make sense to write this at the time we write a topxid assignment to WAL instead?\n>> >>\n>> >> Otherwise it won't be accessible to streaming-mode logical decoding.\n>> >\n>> > Do you mean extend the xl_xact_assignment record? My understanding is\n>> > that is not sent in all cases, so not sure what you mean by \"instead\".\n>>\n>> Craig, can you clarify?\n>\n>\n> Right. Or write a separate WAL record when the feature is enabled. But it's probably sufficient to write it as an optional chunk on xl_xact_assignment records. We often defer writing them so we can optimise away xacts that never actually wrote anything, but IIRC we still write one before we write any WAL that references the xid. That'd be fine, since we don't need the info any sooner than that during decoding. I'd have to double check that we write it in all cases and won't get to that too soon, but I'm pretty sure we do...\n\nThe commit record is optimized away if no xid is assigned, though is\nstill present if we didn't write any WAL records.\n\nBut if a commit record exists in the WAL stream, we want to know where\nit came from.\n\nA later patch will add PITR capability based on this information so\nattaching it directly to the commit record is fairly important, IMHO.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 21 Jun 2021 17:24:44 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Tue, 22 Jun 2021 at 00:24, Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Thu, Mar 18, 2021 at 6:20 AM Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n> >\n> > On Mon, 15 Mar 2021 at 21:01, David Steele <david@pgmasters.net> wrote:\n> >>\n> >> On 11/18/20 5:23 AM, Simon Riggs wrote:\n> >> > On Wed, 18 Nov 2020 at 06:42, Craig Ringer\n> >> > <craig.ringer@enterprisedb.com> wrote:\n> >> >>\n> >> >> On Fri, Nov 13, 2020 at 7:24 PM Simon Riggs <simon@2ndquadrant.com>\n> wrote:\n> >> >>>\n> >> >>>\n> >> >>> What I'm proposing is an option to add 16 bytes onto each COMMIT\n> >> >>> record\n> >> >>\n> >> >>\n> >> >> Would it make sense to write this at the time we write a topxid\n> assignment to WAL instead?\n> >> >>\n> >> >> Otherwise it won't be accessible to streaming-mode logical decoding.\n> >> >\n> >> > Do you mean extend the xl_xact_assignment record? My understanding is\n> >> > that is not sent in all cases, so not sure what you mean by \"instead\".\n> >>\n> >> Craig, can you clarify?\n> >\n> >\n> > Right. Or write a separate WAL record when the feature is enabled. But\n> it's probably sufficient to write it as an optional chunk on\n> xl_xact_assignment records. We often defer writing them so we can optimise\n> away xacts that never actually wrote anything, but IIRC we still write one\n> before we write any WAL that references the xid. That'd be fine, since we\n> don't need the info any sooner than that during decoding. 
I'd have to\n> double check that we write it in all cases and won't get to that too soon,\n> but I'm pretty sure we do...\n>\n> The commit record is optimized away if no xid is assigned, though is\n> still present if we didn't write any WAL records.\n>\n> But if a commit record exists in the WAL stream, we want to know where\n> it came from.\n>\n> A later patch will add PITR capability based on this information so\n> attaching it directly to the commit record is fairly important, IMHO.\n>\n\nWhy?\n\nAll the proposed info:\n\n* 8-byte session start time (from MyStartTime)\n* 2-byte pid (from MyProcPid)\n* 4-byte user oid\n\nare available at topxid assignment time. If we defer writing them until\ncommit, we lose the ability to use this information during streaming\nlogical decoding. That's something I believe you've wanted for other\nfunctionality in the past, such as logical decoding based audit\nfunctionality.\n\nIIRC the restart_lsn horizon already ensures that we can't miss the\nxl_xact_assignment at the start of a txn. We would ensure that the desired\ninfo is available throughout decoding of the txn, including at commit\nrecord processing time, by adding it to the toplevel ReorderBufferTxn.\n\nThe only advantage I can see to annotating the commit record instead is\nthat we don't have to spend a few bytes per reorder-buffered topxid to\ntrack this info between start of decoding for the tx and processing of the\ncommit record. 
I don't think that's worth caring about.The advantages that\nhaving it earlier would give us are much more significant.\n\nA few examples:\n\n* Skip reorder buffering of non-target transactions early, so we can decode\nthe WAL stream to find the target transactions much faster using less\nmemory and I/O;\n\n* Read the database change stream and use the session info to stream info\ninto an intrusion detection system and/or audit engine in real time, using\ntxn streaming to avoid the need to create huge reorder buffers;\n\n* Re-decode the WAL stream to identify a target txn you know was aborted,\nand commit it instead, so you can recover data from aborted txns from the\nWAL stream using logical decoding. (Only possible if the catalog_xmin\nhasn't advanced past that point already though)\n\nSo yeah. I think it'd be better to log the info you want at start-of-txn\nunless there's a compelling reason not so, and I don't see one yet.",
"msg_date": "Tue, 22 Jun 2021 13:31:34 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Tue, Jun 22, 2021 at 6:32 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n\n> IIRC the restart_lsn horizon already ensures that we can't miss the xl_xact_assignment at the start of a txn. We would ensure that the desired info is available throughout decoding of the txn, including at commit record processing time, by adding it to the toplevel ReorderBufferTxn.\n>\n> The only advantage I can see to annotating the commit record instead is that we don't have to spend a few bytes per reorder-buffered topxid to track this info between start of decoding for the tx and processing of the commit record. I don't think that's worth caring about.The advantages that having it earlier would give us are much more significant.\n>\n> A few examples:\n>\n> * Skip reorder buffering of non-target transactions early, so we can decode the WAL stream to find the target transactions much faster using less memory and I/O;\n>\n> * Read the database change stream and use the session info to stream info into an intrusion detection system and/or audit engine in real time, using txn streaming to avoid the need to create huge reorder buffers;\n>\n> * Re-decode the WAL stream to identify a target txn you know was aborted, and commit it instead, so you can recover data from aborted txns from the WAL stream using logical decoding. (Only possible if the catalog_xmin hasn't advanced past that point already though)\n>\n> So yeah. I think it'd be better to log the info you want at start-of-txn unless there's a compelling reason not so, and I don't see one yet.\n\nAFAIK, XLOG_XACT_ASSIGNMENT does not occur for normal top-level\ntransactions, only for subxids.\n\nI don't really want to add an extra record just for this because it\nwill slow down applications and it won't get turned on as often.\n\nThoughts?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 1 Jul 2021 17:19:43 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Fri, 2 Jul 2021 at 00:19, Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n\n> > So yeah. I think it'd be better to log the info you want at start-of-txn\n> unless there's a compelling reason not so, and I don't see one yet.\n>\n> AFAIK, XLOG_XACT_ASSIGNMENT does not occur for normal top-level\n> transactions, only for subxids.\n>\n> I don't really want to add an extra record just for this because it\n> will slow down applications and it won't get turned on as often.\n>\n\nOK, that makes sense - I was indeed operating on an incorrect assumption.\n\nI wouldn't want to add a new record either. I thought we could piggyback on\nXLOG_XACT_ASSIGNMENT with a new chunk that's only added when the feature is\nrequired, much like we do for replication origin info on commit records.\n\nIs it worth considering forcing XLOG_XACT_ASSIGNMENT to be logged if this\nfeature is enabled?\n\nIf you don't think the sorts of use cases I presented are worth the trouble\nthat's fair enough. I'm not against adding it on the commit record. It's\njust that with logical decoding moving toward a streaming model I suspect\nonly having it at commit time may cause us some pain later.",
"msg_date": "Fri, 2 Jul 2021 12:34:13 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Fri, Jul 2, 2021 at 5:34 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> On Fri, 2 Jul 2021 at 00:19, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n>>\n>> > So yeah. I think it'd be better to log the info you want at start-of-txn unless there's a compelling reason not so, and I don't see one yet.\n>>\n>> AFAIK, XLOG_XACT_ASSIGNMENT does not occur for normal top-level\n>> transactions, only for subxids.\n>>\n>> I don't really want to add an extra record just for this because it\n>> will slow down applications and it won't get turned on as often.\n>\n>\n> OK, that makes sense - I was indeed operating on an incorrect assumption.\n>\n> I wouldn't want to add a new record either. I thought we could piggyback on XLOG_XACT_ASSIGNMENT with a new chunk that's only added when the feature is required, much like we do for replication origin info on commit records.\n>\n> Is it worth considering forcing XLOG_XACT_ASSIGNMENT to be logged if this feature is enabled?\n\nMy feeling is that the drop in performance would lead to it being\nturned off most of the time, reducing the value of the feature.\n\nDoes anyone else disagree?\n\n> If you don't think the sorts of use cases I presented are worth the trouble that's fair enough. I'm not against adding it on the commit record. It's just that with logical decoding moving toward a streaming model I suspect only having it at commit time may cause us some pain later.\n\nI think you have some good ideas about how to handle larger\ntransactions with streaming. As a separate patch it might be worth\nkeeping track of transaction size and logging something when a\ntransaction gets too large.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 2 Jul 2021 15:58:56 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Fri, Jul 2, 2021 at 8:29 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Fri, Jul 2, 2021 at 5:34 AM Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n> >\n>\n> > If you don't think the sorts of use cases I presented are worth the trouble that's fair enough. I'm not against adding it on the commit record. It's just that with logical decoding moving toward a streaming model I suspect only having it at commit time may cause us some pain later.\n>\n> I think you have some good ideas about how to handle larger\n> transactions with streaming. As a separate patch it might be worth\n> keeping track of transaction size and logging something when a\n> transaction gets too large.\n>\n\nIf we want this additional information for streaming mode in logical\nreplication then can't we piggyback it on the very first record\nwritten for a transaction when this info is required? Currently, we do\nsomething similar for logging top_level_xid for subtransaction in\nXLogRecordAssemble().\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Jul 2021 08:51:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 4:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 2, 2021 at 8:29 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > On Fri, Jul 2, 2021 at 5:34 AM Craig Ringer\n> > <craig.ringer@enterprisedb.com> wrote:\n> > >\n> >\n> > > If you don't think the sorts of use cases I presented are worth the trouble that's fair enough. I'm not against adding it on the commit record. It's just that with logical decoding moving toward a streaming model I suspect only having it at commit time may cause us some pain later.\n> >\n> > I think you have some good ideas about how to handle larger\n> > transactions with streaming. As a separate patch it might be worth\n> > keeping track of transaction size and logging something when a\n> > transaction gets too large.\n> >\n>\n> If we want this additional information for streaming mode in logical\n> replication then can't we piggyback it on the very first record\n> written for a transaction when this info is required? Currently, we do\n> something similar for logging top_level_xid for subtransaction in\n> XLogRecordAssemble().\n\nIt's possible, but I'm struggling to believe anybody would accept that\nas an approach because it breaks simplicity, modularity and makes it\nharder to search for this info in the WAL.\n\nI was imagining that we'd keep track of amount of WAL written by a\ntransaction and when it reaches a certain size generate a \"streaming\ninfo\" record as an early warning that we have a big transaction coming\ndown the pipe.\n\nI'm feeling that a simple patch is expanding well beyond its original\nscope and timeline. How can we do this simply?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 13 Jul 2021 15:59:47 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 8:29 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Tue, Jul 6, 2021 at 4:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > > If you don't think the sorts of use cases I presented are worth the trouble that's fair enough. I'm not against adding it on the commit record. It's just that with logical decoding moving toward a streaming model I suspect only having it at commit time may cause us some pain later.\n> > >\n> > > I think you have some good ideas about how to handle larger\n> > > transactions with streaming. As a separate patch it might be worth\n> > > keeping track of transaction size and logging something when a\n> > > transaction gets too large.\n> > >\n> >\n> > If we want this additional information for streaming mode in logical\n> > replication then can't we piggyback it on the very first record\n> > written for a transaction when this info is required? Currently, we do\n> > something similar for logging top_level_xid for subtransaction in\n> > XLogRecordAssemble().\n>\n> It's possible, but I'm struggling to believe anybody would accept that\n> as an approach because it breaks simplicity, modularity and makes it\n> harder to search for this info in the WAL.\n>\n> I was imagining that we'd keep track of amount of WAL written by a\n> transaction and when it reaches a certain size generate a \"streaming\n> info\" record as an early warning that we have a big transaction coming\n> down the pipe.\n>\n\nI am not sure if that satisfies Craig's requirement of making it\navailable during the streaming of in-progress xacts during logical\nreplication. It is quite possible that by the time we decide to start\nstreaming a transaction this information won't be logged yet.\n\n> I'm feeling that a simple patch is expanding well beyond its original\n> scope and timeline. How can we do this simply?\n>\n\nThe patch is simple but its use doesn't seem to be very clear. 
You\nhave mentioned its use for future PITR patches and Craig mentioned\nsome use cases in logical decoding and it appears to me that to\nsupport the use cases mentioned by Craig, it is important to LOG this\nearlier than at commit time. As there are no details about how it will\nbe used for PITR patches and whether such patch ideas are accepted, it\nmakes it harder to judge the value of this patch.\n\nI think if we would have patches (even at WIP/POC stage) for the ideas\nyou and Craig have in mind, it would have been much easier to see the\nvalue of this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Jul 2021 09:31:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Wed, 14 Jul 2021 at 05:01, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> The patch is simple but its use doesn't seem to be very clear. You\n> have mentioned its use for future PITR patches and Craig mentioned\n> some use cases in logical decoding and it appears to me that to\n> support the use cases mentioned by Craig, it is important to LOG this\n> earlier than at commit time. As there are no details about how it will\n> be used for PITR patches and whether such patch ideas are accepted, it\n> makes it harder to judge the value of this patch.\n\n> I think if we would have patches (even at WIP/POC stage) for the ideas\n> you and Craig have in mind, it would have been much easier to see the\n> value of this patch.\n\nFair question. This is one of a series of 4 independent patches I have\nplanned to provide enhanced information and/or recovery tools. (The\nsecond one is already in this CF). This is an area I know lots about\nand nobody else is working on, so I thought I would contribute. I've\nnot discussed this off-list with anyone else. So this is PITR as\nrecovery in a broad sense, not just that specific feature.\n\nFor this patch, the idea is to be able to go in either direction:\nData damage <--> User info\n\nSo if you know a user was an intruder, you can detect the damage they caused.\nOr, if you detect damage, you can work out who caused it, work out if\nthey were an intruder and if so, detect what else they did.\n\nThe most important thing is to have the info available in WAL, nothing\nis possible until that is available.\nWe already added an option to add this same info to log_line_prefix,\nyet nobody said it wasn't useful there, or needed other uses to allow\nthe feature.\nThe two sources of info are designed to be able to be used in combination.\n\nMy experience of recovery scenarios is that you often have to build\ncustom search tools to make it work. 
It's hard to say whether you'll\nwant to track the user, the specific session, or even specific\ntransactions.\n\nBut I do understand the overall request, so I propose adding\n* pg_waldump output for wal_sessioninfo data, if it exists\n* pg_waldump --user=USERNAME as a filter on username\nto demonstrate the use of this\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 22 Jul 2021 14:40:01 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 7:10 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Wed, 14 Jul 2021 at 05:01, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> But I do understand the overall request, so I propose adding\n> * pg_waldump output for wal_sessioninfo data, if it exists\n> * pg_waldump --user=USERNAME as a filter on username\n> to demonstrate the use of this\n>\n\nThis makes sense but I am still thinking of some more concrete way.\nCan we think of providing a way to filter WAL from user/process (like\nfilter_by_origin) for logical decoding? If so, then we can have an\nexample to show via test_decoding.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 31 Jul 2021 18:00:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detecting File Damage & Inconsistencies"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile restoring a backup using recovery_target immediate, I noticed that\nthere\nisn't a timestamp available.\n\nLOG: consistent recovery state reached at 0/A000100\nLOG: recovery stopping after reaching consistency\nLOG: pausing at the end of recovery\nHINT: Execute pg_wal_replay_resume() to promote.\nLOG: database system is ready to accept read only connections\n\nif you decide to use one of the targets or just recover until the end of the\nWAL, you get a (last completed transaction) timestamp.\n\nLOG: redo done at 0/10FFEC38 system usage: CPU: user: 0.10 s, system: 0.05\ns, elapsed: 1.65 s\nLOG: last completed transaction was at log time 2020-11-11\n17:27:31.715251-03\nLOG: restored log file \"000000010000000000000010\" from archive\ncp: cannot stat '/a/pgarchive/00000002.history': No such file or directory\nLOG: selected new timeline ID: 2\nLOG: archive recovery complete\ncp: cannot stat '/a/pgarchive/00000001.history': No such file or directory\nLOG: database system is ready to accept connections\n\nI dig into the pg_waldump output to figure out the timestamp, however, the\ncheckpoint timestamp isn't printed by pg_waldump. The checkpoint timestamp\nmight be useful information at least when you set large values for a\ncheckpoint\nor need to investigate a performance/corruption issue.\n\nThe first patch adds a new message that prints the latest completed\ncheckpoint\nwhen the consistent state is reached. 
It also exposes the checkpoint\ntimestamp\nin debug messages.\n\nLOG: consistent recovery state reached at 0/A000100\nDETAIL: Last completed checkpoint was at log time 2020-11-11 17:31:50 -03.\nLOG: recovery stopping after reaching consistency\nLOG: pausing at the end of recovery\nHINT: Execute pg_wal_replay_resume() to promote.\nLOG: database system is ready to accept read only connections\n.\n.\n.\nDEBUG: checkpoint record is at 0/A000060\nDEBUG: checkpoint time is 2020-11-11 17:34:19 -03\n\nThe second patch provides the checkpoint timestamp in the pg_waldump output\nand\nalso when you enable wal_debug parameter. The pg_waldump output looks like\n\nrmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n0/0A000060, prev 0/0A000028, desc: CHECKPOINT_ONLINE redo 0/A000028;\ntimestamp qua 11 nov 2020 17:34:19 -03; tli 1; prev tli 1; fpw true; xid\n0:519; oid 24576; multi 1; offset 0; oldest xid 501 in DB 1; oldest multi 1\nin DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 519;\nonline\n\nand the debug messages are\n\nDEBUG: checkpoint record is at 0/A000060\nDEBUG: checkpoint time is 2020-11-11 17:37:47 -03\nLOG: REDO @ 0/A000060; LSN 0/A0000D8: prev 0/A000028; xid 0; len 88 -\nXLOG/CHECKPOINT_ONLINE: redo 0/A000028; timestamp Wed Nov 11 17:37:47 2020;\ntli 1; prev tli 1; fpw true; xid 0:519; oid 24576; multi 1; offset 0;\noldest xid 501 in DB 1; oldest multi 1 in DB 1; oldest/newest commit\ntimestamp xid: 0/0; oldest running xid 519; online\n\n\nRegards,\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 11 Nov 2020 18:00:00 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "recovery_target immediate timestamp"
},
{
"msg_contents": "\n\nOn 2020/11/12 6:00, Euler Taveira wrote:\n> Hi,\n> \n> While restoring a backup using recovery_target immediate, I noticed that there\n> isn't a timestamp available.\n> \n> LOG: consistent recovery state reached at 0/A000100\n> LOG: recovery stopping after reaching consistency\n> LOG: pausing at the end of recovery\n> HINT: Execute pg_wal_replay_resume() to promote.\n> LOG: database system is ready to accept read only connections\n> \n> if you decide to use one of the targets or just recover until the end of the\n> WAL, you get a (last completed transaction) timestamp.\n> \n> LOG: redo done at 0/10FFEC38 system usage: CPU: user: 0.10 s, system: 0.05 s, elapsed: 1.65 s\n> LOG: last completed transaction was at log time 2020-11-11 17:27:31.715251-03\n> LOG: restored log file \"000000010000000000000010\" from archive\n> cp: cannot stat '/a/pgarchive/00000002.history': No such file or directory\n> LOG: selected new timeline ID: 2\n> LOG: archive recovery complete\n> cp: cannot stat '/a/pgarchive/00000001.history': No such file or directory\n> LOG: database system is ready to accept connections\n> \n> I dig into the pg_waldump output to figure out the timestamp, however, the\n> checkpoint timestamp isn't printed by pg_waldump. 
The checkpoint timestamp\n> might be useful information at least when you set large values for a checkpoint\n> or need to investigate a performance/corruption issue.\n> \n> The first patch adds a new message that prints the latest completed checkpoint\n> when the consistent state is reached.\n\nI'm not sure how useful this information is in practice.\n\n> It also exposes the checkpoint timestamp\n> in debug messages.\n\n \t\t\tereport(DEBUG1,\n \t\t\t\t\t(errmsg(\"checkpoint record is at %X/%X\",\n \t\t\t\t\t\t\t(uint32) (checkPointLoc >> 32), (uint32) checkPointLoc)));\n+\t\t\tereport(DEBUG1,\n+\t\t\t\t\t(errmsg(\"checkpoint time is %s\", str_time(checkPoint.time))));\n\nThe above first debug message displays the LSN of the checkpoint record.\nOTOH, the second message displays the time when the checkpoint started\n(not the time when checkpoint record was written at the end of checkpoint).\nSo isn't it confusing to display those inconsistent information together?\n\n> \n> LOG: consistent recovery state reached at 0/A000100\n> DETAIL: Last completed checkpoint was at log time 2020-11-11 17:31:50 -03.\n> LOG: recovery stopping after reaching consistency\n> LOG: pausing at the end of recovery\n> HINT: Execute pg_wal_replay_resume() to promote.\n> LOG: database system is ready to accept read only connections\n> .\n> .\n> .\n> DEBUG: checkpoint record is at 0/A000060\n> DEBUG: checkpoint time is 2020-11-11 17:34:19 -03\n> \n> The second patch provides the checkpoint timestamp in the pg_waldump output and\n> also when you enable wal_debug parameter. 
The pg_waldump output looks like\n\n+1\n\n+#ifdef FRONTEND\n+\t\tstrftime(checkpointstr, sizeof(checkpointstr), \"%c\", localtime(&time_tmp));\n+#else\n+\t\tpg_strftime(checkpointstr, sizeof(checkpointstr), \"%c\", pg_localtime(&time_tmp, log_timezone));\n+#endif\n\nYou can simplify the code by using timestamptz_to_str() here instead, like xact_desc_commit() does.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 12 Nov 2020 10:40:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target immediate timestamp"
},
{
"msg_contents": "On Wed, 11 Nov 2020 at 22:40, Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n> On 2020/11/12 6:00, Euler Taveira wrote:\n>\n> > The first patch adds a new message that prints the latest completed\n> checkpoint\n> > when the consistent state is reached.\n>\n> I'm not sure how useful this information is in practice.\n>\n> Fujii, thanks for reviewing it. It provides the same information as the\n\"last\ncompleted transaction was\" message.\n\n\n> > It also exposes the checkpoint timestamp\n> > in debug messages.\n>\n> ereport(DEBUG1,\n> (errmsg(\"checkpoint record is at\n> %X/%X\",\n> (uint32)\n> (checkPointLoc >> 32), (uint32) checkPointLoc)));\n> + ereport(DEBUG1,\n> + (errmsg(\"checkpoint time is %s\",\n> str_time(checkPoint.time))));\n>\n> The above first debug message displays the LSN of the checkpoint record.\n> OTOH, the second message displays the time when the checkpoint started\n> (not the time when checkpoint record was written at the end of checkpoint).\n> So isn't it confusing to display those inconsistent information together?\n>\n> Indeed the checkpoint timestamp is from before it determines the REDO LSN.\nAre\nyou saying the this checkpoint timestamp is inconsistent because it is not\nnear\nthe point it saves the RedoRecPtr? If so, let's move checkPoint.time a few\nlines below.\n\n /*\n * Here we update the shared RedoRecPtr for future XLogInsert calls;\nthis\n * must be done while holding all the insertion locks.\n *\n * Note: if we fail to complete the checkpoint, RedoRecPtr will be left\n * pointing past where it really needs to point. This is okay; the only\n * consequence is that XLogInsert might back up whole buffers that it\n * didn't really need to. 
We can't postpone advancing RedoRecPtr\nbecause\n * XLogInserts that happen while we are dumping buffers must assume that\n * their buffer changes are not included in the checkpoint.\n */\n RedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;\n checkPoint.time = (pg_time_t) time(NULL);\n\nI realized that I was using the wrong variable in one of the debug messages.\n\n\n> > The second patch provides the checkpoint timestamp in the pg_waldump\n> output and\n> > also when you enable wal_debug parameter. The pg_waldump output looks\n> like\n>\n> +1\n>\n> +#ifdef FRONTEND\n> + strftime(checkpointstr, sizeof(checkpointstr), \"%c\",\n> localtime(&time_tmp));\n> +#else\n> + pg_strftime(checkpointstr, sizeof(checkpointstr), \"%c\",\n> pg_localtime(&time_tmp, log_timezone));\n> +#endif\n>\n> You can simplify the code by using timestamptz_to_str() here instead, like\n> xact_desc_commit() does.\n>\n> I have the same idea until I realized that checkPoint.time is pg_time_t\nand not\nTimestampTz. [digging the code a bit...] I figure out there is a function\nthat\nconverts from pg_time_t to TimestampTz: time_t_to_timestamptz(). I removed\nthat\nugly code but have to duplicate this function into compat.c. I don't have a\nstrong preference but I'm attaching a new patch.\n\nAt the end, I asked myself if it is worth changing this type from pg_time_t\nto\nTimestampTz.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 12 Nov 2020 20:39:44 -0300",
"msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery_target immediate timestamp"
},
{
"msg_contents": "\n\nOn 2020/11/13 8:39, Euler Taveira wrote:\n> On Wed, 11 Nov 2020 at 22:40, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> \n> On 2020/11/12 6:00, Euler Taveira wrote:\n> \n> > The first patch adds a new message that prints the latest completed checkpoint\n> > when the consistent state is reached.\n> \n> I'm not sure how useful this information is in practice.\n> \n> Fujii, thanks for reviewing it. It provides the same information as the \"last\n> completed transaction was\" message.\n> \n> > It also exposes the checkpoint timestamp\n> > in debug messages.\n> \n> ereport(DEBUG1,\n> (errmsg(\"checkpoint record is at %X/%X\",\n> (uint32) (checkPointLoc >> 32), (uint32) checkPointLoc)));\n> + ereport(DEBUG1,\n> + (errmsg(\"checkpoint time is %s\", str_time(checkPoint.time))));\n> \n> The above first debug message displays the LSN of the checkpoint record.\n> OTOH, the second message displays the time when the checkpoint started\n> (not the time when checkpoint record was written at the end of checkpoint).\n> So isn't it confusing to display those inconsistent information together?\n> \n> Indeed the checkpoint timestamp is from before it determines the REDO LSN. Are\n> you saying the this checkpoint timestamp is inconsistent because it is not near\n> the point it saves the RedoRecPtr? If so, let's move checkPoint.time a few\n> lines below.\n\nNo. What I'd like to say is; checkPointLoc that the first debug message\noutputs is the LSN of checkpoint record, not the checkpoint REDO location.\nThe checkpoint REDO location is determined at the early stage of\ncheckpointing. OTOH, the location of checkpoint record is determined\nat the end of checkpointing. They are different.\n\nThe checkpoint time that the second debug message you added outputs is\nthe timestamp determined at the beginning of checkpointing. 
So it seems\nunreasonable to display the location of the checkpoint record together with\nthe checkpoint time, because they are determined at different times.\nAm I missing something?\n\n\n> \n> /*\n> * Here we update the shared RedoRecPtr for future XLogInsert calls; this\n> * must be done while holding all the insertion locks.\n> *\n> * Note: if we fail to complete the checkpoint, RedoRecPtr will be left\n> * pointing past where it really needs to point. This is okay; the only\n> * consequence is that XLogInsert might back up whole buffers that it\n> * didn't really need to. We can't postpone advancing RedoRecPtr because\n> * XLogInserts that happen while we are dumping buffers must assume that\n> * their buffer changes are not included in the checkpoint.\n> */\n> RedoRecPtr = XLogCtl->Insert.RedoRecPtr = checkPoint.redo;\n> checkPoint.time = (pg_time_t) time(NULL);\n> \n> I realized that I was using the wrong variable in one of the debug messages.\n> \n> > The second patch provides the checkpoint timestamp in the pg_waldump output and\n> > also when you enable wal_debug parameter. The pg_waldump output looks like\n> \n> +1\n> \n> +#ifdef FRONTEND\n> + strftime(checkpointstr, sizeof(checkpointstr), \"%c\", localtime(&time_tmp));\n> +#else\n> + pg_strftime(checkpointstr, sizeof(checkpointstr), \"%c\", pg_localtime(&time_tmp, log_timezone));\n> +#endif\n> \n> You can simplify the code by using timestamptz_to_str() here instead, like xact_desc_commit() does.\n> \n> I have the same idea until I realized that checkPoint.time is pg_time_t and not\n> TimestampTz. [digging the code a bit...] I figure out there is a function that\n> converts from pg_time_t to TimestampTz: time_t_to_timestamptz(). I removed that\n> ugly code but have to duplicate this function into compat.c. I don't have a\n> strong preference but I'm attaching a new patch.\n\nThanks for updating the patch! 
At least for me this approach looks better.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 13 Nov 2020 23:25:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery_target immediate timestamp"
}
] |
[
{
"msg_contents": "Hello Hackers,\n\nWhile supporting customers, it would frequently be useful to have more information about the history of a cluster. For example, which prior versions were ever installed and running on the cluster? Has the cluster ever been run with fsync=off? When did the server last enter recovery, if ever? Was a backup_label file present at that time?\n\nSome of this type of information could strictly be fixed size, such as a fixed set of timestamps for the time at which a fixed set of things last occurred, or a fixed set of bits indicating whether a fixed set of things ever happened.\n\nSome other types would be variable size, but hopefully short in practice, like a list of all postgres versions that have ever been run on the cluster.\n\nLogging the information via the usual log mechanism seems insufficient, as log files may get rotated and this information lost.\n\nWould it be acceptable to store some fixed set of flag bits and timestamps in pg_control? Space there is at a premium.\n\nWould it make sense to alternately, or additionally, store some of this information in a flat text file in pg_data, say a new file named \"cluster_history\" or such?\n\nI'm happy to put together a more concrete proposal, but solicit your opinions first on the merits of the idea generally, and if you think the idea good, on the specifics you'd like to see included.\n\nThanks!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 11 Nov 2020 14:31:32 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Tracking cluster upgrade and configuration history"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 4:01 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> While supporting customers, it would frequently be useful to have more information about the history of a cluster. For example, which prior versions were ever installed and running on the cluster? Has the cluster ever been run with fsync=off? When did the server last enter recovery, if ever? Was a backup_label file present at that time?\n>\n\n+1 for the idea. The information will be useful at times for debugging purposes.\n\n>\n> Some of this type of information could strictly be fixed size, such as a fixed set of timestamps for the time at which a fixed set of things last occurred, or a fixed set of bits indicating whether a fixed set of things ever happened.\n>\n> Some other types would be variable size, but hopefully short in practice, like a list of all postgres versions that have ever been run on the cluster.\n>\n> Logging the information via the usual log mechanism seems insufficient, as log files may get rotated and this information lost.\n>\n\nTrue. Just a thought, can we use existing logging mechanism and APIs\nto write to a new file that never gets rotated by the syslogger(Of\ncourse, we need to think of the maximum file size that's allowed)? The\nidea is like this: we use elog/ereport and so on with a new debug\nlevel, when specified, instead of logging into the standard log files,\nwe log it to the new file.\n\n>\n> Would it be acceptable to store some fixed set of flag bits and timestamps in pg_control? 
Space there is at a premium.\n>\n\nSince we allocate ControlFileData in shared memory and also we may\nhave some data with timestamps, variable texts and so on, having this\nincluded in pg_control data structure would not seem a good idea to\nme.\n\n>\n> Would it make sense to alternately, or additionally, store some of this information in a flat text file in pg_data, say a new file named \"cluster_history\" or such?\n>\n\nIMHO, this is also a good idea. We need to think of the APIs to\nopen/read/write/close that history file? How often and which processes\nand what type of data they write? Is it that the postmaster alone will\nwrite into that file? If multiple processes are allowed to write, how\nto deal with concurrent writers? Will users have to open manually and\nread that file? or Will we have some program similar to\npg_controldata? Will we have some maximum limit to the size of this\nfile?\n\n>\n> I'm happy to put together a more concrete proposal, but solicit your opinions first on the merits of the idea generally, and if you think the idea good, on the specifics you'd like to see included.\n>\n\nWelcome to know more about this idea.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Nov 2020 12:17:51 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tracking cluster upgrade and configuration history"
},
{
"msg_contents": "2020年11月16日(月) 15:48 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n>\n> On Thu, Nov 12, 2020 at 4:01 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> >\n> > While supporting customers, it would frequently be useful to have more information about the history of a cluster. For example, which prior versions were ever installed and running on the cluster? Has the cluster ever been run with fsync=off? When did the server last enter recovery, if ever? Was a backup_label file present at that time?\n> >\n>\n> +1 for the idea. The information will be useful at times for debugging purposes.\n\nIt's certainly something which would be nice to have.\n\n> > Would it make sense to alternately, or additionally, store some of this information in a flat text file in pg_data, say a new file named \"cluster_history\" or such?\n> >\n>\n> IMHO, this is also a good idea. We need to think of the APIs to\n> open/read/write/close that history file? How often and which processes\n> and what type of data they write? Is it that the postmaster alone will\n> write into that file? If multiple processes are allowed to write, how\n> to deal with concurrent writers? Will users have to open manually and\n> read that file? or Will we have some program similar to\n> pg_controldata? Will we have some maximum limit to the size of this\n> file?\n\npg_stat_statements might be worth looking at as one way of handling that kind\nof file.\n\nHowever the problem with keeping a separate file which is not WAL-logged would\nmean it doesn't get propagated to standbys, and there's also the question\nof how it could be maintained across upgrades via pg_upgrade.\n\nFWIW I did once create a background worker extension [1] which logs\nconfiguration changes to a table, though it's not particularly maintained or\nrecommended for production use.\n\n[1] https://github.com/ibarwick/config_log\n\n\nRegards\n\nIan Barwick\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Nov 2020 16:23:42 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tracking cluster upgrade and configuration history"
},
{
"msg_contents": "\n\n> On Nov 15, 2020, at 10:47 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Thu, Nov 12, 2020 at 4:01 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> \n>> While supporting customers, it would frequently be useful to have more information about the history of a cluster. For example, which prior versions were ever installed and running on the cluster? Has the cluster ever been run with fsync=off? When did the server last enter recovery, if ever? Was a backup_label file present at that time?\n>> \n> \n> +1 for the idea. The information will be useful at times for debugging purposes.\n\nThanks for the feedback.\n\n> \n>> \n>> Some of this type of information could strictly be fixed size, such as a fixed set of timestamps for the time at which a fixed set of things last occurred, or a fixed set of bits indicating whether a fixed set of things ever happened.\n>> \n>> Some other types would be variable size, but hopefully short in practice, like a list of all postgres versions that have ever been run on the cluster.\n>> \n>> Logging the information via the usual log mechanism seems insufficient, as log files may get rotated and this information lost.\n>> \n> \n> True. Just a thought, can we use existing logging mechanism and APIs\n> to write to a new file that never gets rotated by the syslogger(Of\n> course, we need to think of the maximum file size that's allowed)? The\n> idea is like this: we use elog/ereport and so on with a new debug\n> level, when specified, instead of logging into the standard log files,\n> we log it to the new file.\n\nThat's going in a very different direction from what I had in mind. I was imagining something like a single binary or text file of either fixed size or something very short but not fixed. The \"very short but not fixed\" part of that seems a bit too hand-waving on reflection. 
Any variable length list, such as the list of postgres versions started on the cluster, could be made fixed length by only tracking the most recent N of them, perhaps with a flag bit to indicate if the list has overflowed.\n\nUsing elog/ereport with a new log level that gets directed into a different log file is an interesting idea, but it is not clear how to use elog/ereport in a principled way to write files that need never get too large.\n\n>> Would it be acceptable to store some fixed set of flag bits and timestamps in pg_control? Space there is at a premium.\n>> \n> \n> Since we allocate ControlFileData in shared memory and also we may\n> have some data with timestamps, variable texts and so on, having this\n> included in pg_control data structure would not seem a good idea to\n> me.\n\nVariable length texts seem completely out of scope for this. I would expect the data to be a collection of integer types and flag bits. Fixed length text might also be possible, but I don't have any examples in mind of text that we'd want to track.\n\n>> Would it make sense to alternately, or additionally, store some of this information in a flat text file in pg_data, say a new file named \"cluster_history\" or such?\n>> \n> \n> IMHO, this is also a good idea. We need to think of the APIs to\n> open/read/write/close that history file? How often and which processes\n> and what type of data they write? Is it that the postmaster alone will\n> write into that file? If multiple processes are allowed to write, how\n> to deal with concurrent writers? Will users have to open manually and\n> read that file? or Will we have some program similar to\n> pg_controldata? Will we have some maximum limit to the size of this\n> file?\n\nThis depends in part on feedback about which information others on this list would like to see included, but I was imagining something similar to how pg_control works, or using pg_control itself. 
The maximum size for pg_control is 512 bytes, and on my system sizeof(ControlFileData) = 296, which leaves 216 bytes free. I didn't check how much that might change on systems with different alignments. We could either use some of the ~200 bytes currently available in pg_control, or use another file, \"pg_history\" or such, following the design pattern already used for pg_control.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 16 Nov 2020 11:01:50 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Tracking cluster upgrade and configuration history"
},
{
"msg_contents": "\n\n> On Nov 15, 2020, at 11:23 PM, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> \n> 2020年11月16日(月) 15:48 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n>> \n>> On Thu, Nov 12, 2020 at 4:01 AM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> \n>>> While supporting customers, it would frequently be useful to have more information about the history of a cluster. For example, which prior versions were ever installed and running on the cluster? Has the cluster ever been run with fsync=off? When did the server last enter recovery, if ever? Was a backup_label file present at that time?\n>>> \n>> \n>> +1 for the idea. The information will be useful at times for debugging purposes.\n> \n> It's certainly something which would be nice to have.\n\nThanks for the feedback.\n\n>>> Would it make sense to alternately, or additionally, store some of this information in a flat text file in pg_data, say a new file named \"cluster_history\" or such?\n>>> \n>> \n>> IMHO, this is also a good idea. We need to think of the APIs to\n>> open/read/write/close that history file? How often and which processes\n>> and what type of data they write? Is it that the postmaster alone will\n>> write into that file? If multiple processes are allowed to write, how\n>> to deal with concurrent writers? Will users have to open manually and\n>> read that file? or Will we have some program similar to\n>> pg_controldata? Will we have some maximum limit to the size of this\n>> file?\n> \n> pg_stat_statements might be worth looking at as one way of handling that kind\n> of file.\n> \n> However the problem with keeping a separate file which is not WAL-logged would\n> mean it doesn't get propagated to standbys, and there's also the question\n> of how it could be maintained across upgrades via pg_upgrade.\n\nHmmm. I was not expecting the file to be propagated to standbys. The information could legitimately be different for a primary and a standby. 
As a very simple example, there may be a flag bit for whether the cluster has operated as a standby. That does raise questions about what sort of information about a primary that a standby should track, in case they get promoted to primary and information about the old primary would be useful for troubleshooting. Ideas welcome....\n\n> \n> FWIW I did once create a background worker extension [1] which logs\n> configuration changes to a table, though it's not particularly maintained or\n> recommended for production use.\n\nI'm happy to change course if the consensus on the list favors using something larger, like log files or logging to a table, but for now I'm still thinking about this in terms of something smaller than that.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 16 Nov 2020 11:02:08 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Tracking cluster upgrade and configuration history"
}
] |
[
{
"msg_contents": "So far as I can tell, the partition machinery is critically dependent\non the idea that partition constraint conditions can never return\nNULL, only true or false. This is explicitly noted in the comments in\nget_qual_for_list, for instance, and it's visibly true by construction\nfor both list and range constraints.\n\nBut hash partition constraints are just calls to\nsatisfies_hash_partition(), and look what we've got there:\n\n\t/* Return null if the parent OID, modulus, or remainder is NULL. */\n\tif (PG_ARGISNULL(0) || PG_ARGISNULL(1) || PG_ARGISNULL(2))\n\t\tPG_RETURN_NULL();\n\n\tparent = try_relation_open(parentId, AccessShareLock);\n\tif (parent == NULL)\n\t\tPG_RETURN_NULL();\n\nOK, the first one probably shouldn't happen, but it's far from clear that\nthe second cannot. If we did return NULL, that would be taken as a \"pass\"\nof the constraint, possibly allowing a non-matching row to be inserted\ninto a partition.\n\nSo this seems like a seriously bad idea. I don't have a strong position\non whether it'd be better to return FALSE (causing a constraint failure\nerror) or just throw an error outright. But we can't leave it like this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Nov 2020 17:46:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Returning NULL from satisfies_hash_partition() is a bad idea"
}
] |
[
{
"msg_contents": "Hello\n\nI found a simple test case:\n\nCREATE TABLE test(id int NOT NULL, gen int GENERATED ALWAYS AS (id + 1) STORED) partition by range (id);\ncreate table test_part_create partition of test for values from ( 0) to (10);\ncreate table test_part_attach (id int NOT NULL, gen int);\nalter table test attach partition test_part_attach for values from (10) to (20);\n\npostgres=# \\d test_part_create \n Table \"public.test_part_create\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+-------------------------------------\n id | integer | | not null | \n gen | integer | | | generated always as (id + 1) stored\nPartition of: test FOR VALUES FROM (0) TO (10)\n\npostgres=# \\d test_part_attach \n Table \"public.test_part_attach\"\n Column | Type | Collation | Nullable | Default \n--------+---------+-----------+----------+---------\n id | integer | | not null | \n gen | integer | | | \nPartition of: test FOR VALUES FROM (10) TO (20)\n\nBoth partitions are attached, but alter table attach patition did not check nor enforce a generated column. Same for table inheritance stuff. While looking at MergeAttributesIntoExisting in src/backend/commands/tablecmds.c I did not notice any special handling or comments for attgenerated. It's an oversight and a bug?\n\nAlso,\npostgres=# alter table test alter COLUMN gen drop expression ;\nERROR: column \"gen\" of relation \"test_part_attach\" is not a stored generated column\n\nRegards, Sergei\n\n\n",
"msg_date": "Thu, 12 Nov 2020 13:34:46 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "\n =?utf-8?B?4oCLZ2VuZXJhdGVkIGNvbHVtbnMgaW4gTWVyZ2VBdHRyaWJ1dGVzSW50b0V4aXN0aW5n?="
}
] |
[
{
"msg_contents": "Hi,\n\n\nWe're facing some issue in a new extension we use at LzLabs to\nemulate server side rollback at statement level in PostgreSQL,\nsee for full detail https://github.com/lzlabs/pg_statement_rollback/\n\nThe problem we are encountering is when PostgreSQL is compiled in debug\nmode with --enable-cassert. At line 1327 of src/backend/tcop/pquery.c\nthe following assert fail:\n\n /*\n * Clear subsidiary contexts to recover temporary memory.\n */\n Assert(portal->portalContext == CurrentMemoryContext);\n\n MemoryContextDeleteChildren(portal->portalContext);\n\nThis extension, although it is a risky implementation, works extremely\nwell when used in a fully controlled environment. It avoid the latency\nof the extra communication for the RELEASE+SAVEPOINT usually controlled at\nclient side. The client is only responsible to issue the \"ROLLBACK TO \nautosavepoint\"\nwhen needed. The extension allow a high performances gain for this feature\nthat helps customers using Oracle or DB2 to migrate to PostgreSQL.\n\n\nActually with the extension the memory context is not CurrentMemoryContext\nas expected by the assert.\n\n (gdb) b pquery.c:1327\n Breakpoint 1 at 0x55792fd7a04d: file pquery.c, line 1327.\n (gdb) c\n Continuing.\n\n Breakpoint 1, PortalRunMulti (portal=portal@entry=0x5579316e3e10, \nisTopLevel=isTopLevel@entry=true,\n setHoldSnapshot=setHoldSnapshot@entry=false, \ndest=dest@entry=0x557931755ce8, altdest=altdest@entry=0x557931755ce8,\n qc=qc@entry=0x7ffc4aa1f8a0) at pquery.c:1327\n 1327 Assert(portal->portalContext == CurrentMemoryContext);\n (gdb) p portal->sourceText\n $1 = 0x557931679c80 \"INSERT INTO savepoint_test SELECT 1;\"\n (gdb) p MemoryContextStats(portal->portalContext)\n $2 = void\n (gdb)\n\nThe memory context dump output from log:\n\n PortalContext: 1024 total in 1 blocks; 704 free (1 chunks); 320 \nused: <unnamed>\n Grand total: 1024 bytes in 1 blocks; 704 free (1 chunks); 320 used\n\nIf I naively remove the assert on pquery.c 
everything works without any\nnew assert error.\n\nAs I said I am aware that this is clearly not a standard PostgreSQL use\nbut we have no choice. We are emulating DB2 statement-level rollback\nbehavior and we have chosen to not create a new fork of PostgreSQL and\nonly work with extensions. As there is no hook or API that could allow a \nperfect\nserver side integration of this feature we have done what is possible to do\nin the extension.\n\n\nSo my question is should we allow such use through an extension and in\nthis case what is the change to PostgreSQL code that could avoid the\nassert crash? Or perhaps we have missed something in this extension to\nbe able to make the assert happy but I don't think so.\n\nCheers\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Thu, 12 Nov 2020 11:40:22 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Issue with server side statement-level rollback"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-12 11:40:22 +0100, Gilles Darold wrote:\n> The problem we are encountering is when PostgreSQL is compiled in debug\n> mode with --enable-cassert. At line 1327 of src/backend/tcop/pquery.c\n> the following assert fail:\n> \n> ��� /*\n> ��� �* Clear subsidiary contexts to recover temporary memory.\n> ��� �*/\n> ��� Assert(portal->portalContext == CurrentMemoryContext);\n> \n> ��� MemoryContextDeleteChildren(portal->portalContext);\n> \n> This extension, although it is a risky implementation, works extremely\n> well when used in a fully controlled environment. It avoid the latency\n> of the extra communication for the RELEASE+SAVEPOINT usually controlled at\n> client side. The client is only responsible to issue the \"ROLLBACK TO\n> autosavepoint\"\n> when needed.� The extension allow a high performances gain for this feature\n> that helps customers using Oracle or DB2 to migrate to PostgreSQL.\n> \n> \n> Actually with the extension the memory context is not CurrentMemoryContext\n> as expected by the assert.\n\nWhat is it instead? I don't think you really can safely be in a\ndifferent context at this point. There's risks of CurrentMemoryContext\npointing to a deleted context, and risks of memory leaks, depending on\nthe situation.\n\n\n> As there is no hook or API that could allow a perfect server side\n> integration of this feature we have done what is possible to do in the\n> extension.\n\n> So my question is should we allow such use through an extension and in\n> this case what is the change to PostgreSQL code that could avoid the\n> assert crash? Or perhaps we have missed something in this extension to\n> be able to make the assert happy but I don't think so.\n\nWithout more detail of what you actually are precisely doing, and what\nthe hooks / integration you'd like would look like, it's hard to comment\nusefully here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Nov 2020 12:43:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with server side statement-level rollback"
},
{
"msg_contents": "Hi,\n\nLe 19/11/2020 à 21:43, Andres Freund a écrit :\n> Hi,\n>\n> On 2020-11-12 11:40:22 +0100, Gilles Darold wrote:\n>> The problem we are encountering is when PostgreSQL is compiled in debug\n>> mode with --enable-cassert. At line 1327 of src/backend/tcop/pquery.c\n>> the following assert fail:\n>>\n>> /*\n>> * Clear subsidiary contexts to recover temporary memory.\n>> */\n>> Assert(portal->portalContext == CurrentMemoryContext);\n>>\n>> MemoryContextDeleteChildren(portal->portalContext);\n>>\n>> This extension, although it is a risky implementation, works extremely\n>> well when used in a fully controlled environment. It avoid the latency\n>> of the extra communication for the RELEASE+SAVEPOINT usually controlled at\n>> client side. The client is only responsible to issue the \"ROLLBACK TO\n>> autosavepoint\"\n>> when needed. The extension allow a high performances gain for this feature\n>> that helps customers using Oracle or DB2 to migrate to PostgreSQL.\n>>\n>>\n>> Actually with the extension the memory context is not CurrentMemoryContext\n>> as expected by the assert.\n> What is it instead? I don't think you really can safely be in a\n> different context at this point. There's risks of CurrentMemoryContext\n> pointing to a deleted context, and risks of memory leaks, depending on\n> the situation.\n\nThis is a PortalContext. Yes this implementation has some risks but \nuntil now I have not met any problem because its use and the environment \nare fully controlled.\n\n\n>\n>> So my question is should we allow such use through an extension and in\n>> this case what is the change to PostgreSQL code that could avoid the\n>> assert crash? 
Or perhaps we have missed something in this extension to\n>> be able to make the assert happy but I don't think so.\n> Without more detail of what you actually are precisely doing, and what\n> the hooks / integration you'd like would look like, it's hard to comment\n> usefully here.\n\n\nWe have implemented an extension to allow server side \"statement-level \nrollback\" with what is possible to do now with PG but the objective was \nto do the same thing that what was proposed as a core patch submitted by \nTakayuki Tsunakawa [1] . This patch will not be included into core and \nwhat I'm trying to do now is to have some facilities to allow this \nfeature through an extension that does not suffer from the same \nlimitation of pg_statement_rollback.\n\n\nLooking that this patch for example, if we have a hook on \nfinish_xact_command(), finish_xact_command() and \nAbortCurrentTransaction() I think we could probably be able to implement \nthe feature through an extension in a more \"safe\" way. A hook on \nstart_xact_command() seems useless as it looks it is executed before the \nUtilityProcess and Executor* hooks. See attached patch for an example of \nwhat could be useful for this kind of extension. Unfortunately my \nknowledge doesn't allow me to see further and especially if there is \ndrawbacks. I hope this is more clear, I will work later on a POC to \ndemonstrate the use case I want to implement.\n\n\n[1] \nhttps://www.postgresql.org/message-id/flat/0A3221C70F24FB45833433255569204D1F6A9286%40G01JPEXMBYT05\n\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Fri, 20 Nov 2020 16:18:38 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: Issue with server side statement-level rollback"
},
{
"msg_contents": "Le 20/11/2020 à 16:18, Gilles Darold a écrit :\n> I will work later on a POC to demonstrate the use case I want to\n> implement. \n\n\nHi Andres,\n\n\nI have created a new version of the pg_statement_rollback extension [1]\nto demonstrate the use of the hooks on start_xact_command(),\nfinish_xact_command() and AbortCurrentTransaction() to implement the\nstatement-level rollback feature entirely driven at serverside. It\nrequire that the patch [2] I've provided be applied on PostgreSQL\nsource first.\n\n\nHere is what can be achieved with this patch:\n\n\n LOAD 'pg_statement_rollback.so';\n LOAD\n SET pg_statement_rollback.enabled TO on;\n SET\n CREATE SCHEMA testrsl;\n CREATE SCHEMA\n SET search_path TO testrsl,public;\n SET\n BEGIN;\n BEGIN\n CREATE TABLE tbl_rsl(id integer, val varchar(256));\n CREATE TABLE\n INSERT INTO tbl_rsl VALUES (1, 'one');\n INSERT 0 1\n WITH write AS (INSERT INTO tbl_rsl VALUES (2, 'two') RETURNING id,\n val) SELECT * FROM write;\n id | val\n ----+-----\n 2 | two\n (1 row)\n\n UPDATE tbl_rsl SET id = 'two', val = 2 WHERE id = 1; -- >>>>> will fail\n psql:simple.sql:14: ERROR: invalid input syntax for type integer: \"two\"\n LINE 1: UPDATE tbl_rsl SET id = 'two', val = 2 WHERE id = 1;\n ^\n SELECT * FROM tbl_rsl; -- Should show records id 1 + 2\n id | val\n ----+-----\n 1 | one\n 2 | two\n (2 rows)\n\n COMMIT;\n COMMIT\n\n\nActually unlike I've though this is the hook on finish_xact_command()\nthat is useless. In the extension I'm executing the RELEASE/SAVEPOINT in\nthe start_xact_command() hook before executing the next statement. The\nhook on AbortCurrentTransaction() is used to signal that a ROLLOBACK\nTO/SAVEPOINT need to be executed into the start_xact_command() hook\ninstead of a RELEASE/SAVEPOINT.\n\n\nThis works perfectly and do not crash PG anymore when compiled with\nassert. Advanced tests (with triggers, client savepoint, CTE, etc.) are\navailable in the test/sql/ directory. 
Use of \"make installcheck\" allow\nto run the regression tests.\n\n\nBased on this result I really think that these hooks should be included\nto be able to extend PostgreSQL for such feature although I have not\nthough about an other use that this one.\n\n\nRegards, \n\n\nI've attached all code for archiving but the current version can be\nfound here too:\n\n [1] https://github.com/darold/pg_statement_rollbackv2\n\n [2]\n https://raw.githubusercontent.com/darold/pg_statement_rollbackv2/main/command-start-finish-hook-v1.patch\n\n\n-- \n\nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Sun, 22 Nov 2020 11:05:52 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: Issue with server side statement-level rollback"
}
] |
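The server-side behavior demonstrated in the thread above is what clients traditionally emulate by hand; a minimal client-side sketch of the same statement-level rollback pattern (table and savepoint names are illustrative, reusing the thread's example table; the extension automates issuing these SAVEPOINT/ROLLBACK TO commands from the start_xact_command() hook so the client no longer has to):

```sql
BEGIN;
CREATE TABLE tbl_rsl(id integer, val varchar(256));
INSERT INTO tbl_rsl VALUES (1, 'one');

SAVEPOINT stmt;                                      -- taken before each statement
UPDATE tbl_rsl SET id = 'two', val = 2 WHERE id = 1; -- fails: bad integer input
ROLLBACK TO SAVEPOINT stmt;                          -- undo only the failed statement

SELECT * FROM tbl_rsl;                               -- still returns row (1, 'one')
COMMIT;                                              -- transaction commits normally
```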
[
{
"msg_contents": "While going through the code I noticed that the nTapes member in\nSharedSort is unused. This is just initialized with nworkers but\nnever used. The attached patch removes this variable.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 12 Nov 2020 17:28:55 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove unused variable from SharedSort"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 5:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> While going through the code I noticed that the nTapes member in\n> SharedSort is unused. This is just initialized with nworkers but\n> never used. The attached patch removes this variable.\n>\n\nWe could have used that variable for an assert like\nAssert(state->worker <= shared->nTapes) in worker_freeze_result_tape()\nbefore accessing shared->tapes[state->worker] = output; as sometimes\nstate->worker is being set to -1. But, it seems like we reach\nworker_freeze_result_tape(), only when WORKER(state) is true. So, we\ndon't need that extra Assert and removing nTapes variable makes sense\nto me.\n\nPatch looks good to me. Regression tests make check and make\ncheck-world ran successfully.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 15 Nov 2020 12:50:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove unused variable from SharedSort"
},
{
"msg_contents": "On Sun, Nov 15, 2020 at 12:50 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Nov 12, 2020 at 5:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > While going through the code I noticed that the nTapes member in\n> > SharedSort is unused. This is just initialized with nworkers but\n> > never used. The attached patch removes this variable.\n> >\n>\n> We could have used that variable for an assert like\n> Assert(state->worker <= shared->nTapes) in worker_freeze_result_tape()\n> before accessing shared->tapes[state->worker] = output; as sometimes\n> state->worker is being set to -1. But, it seems like we reach\n> worker_freeze_result_tape(), only when WORKER(state) is true. So, we\n> don't need that extra Assert and removing nTapes variable makes sense\n> to me.\n\nRight, but anyway IMHO adding extra shared memory variables for just\nand assert purposes doesn't make sense.\n\n> Patch looks good to me. Regression tests make check and make\n> check-world ran successfully.\n\nThanks for looking into this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 15 Nov 2020 15:49:58 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove unused variable from SharedSort"
},
{
"msg_contents": "On Sun, Nov 15, 2020 at 03:49:58PM +0530, Dilip Kumar wrote:\n> On Sun, Nov 15, 2020 at 12:50 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> We could have used that variable for an assert like\n>> Assert(state->worker <= shared->nTapes) in worker_freeze_result_tape()\n>> before accessing shared->tapes[state->worker] = output; as sometimes\n>> state->worker is being set to -1. But, it seems like we reach\n>> worker_freeze_result_tape(), only when WORKER(state) is true. So, we\n>> don't need that extra Assert and removing nTapes variable makes sense\n>> to me.\n> \n> Right, but anyway IMHO adding extra shared memory variables for just\n> and assert purposes doesn't make sense.\n\nFWIW, I disagree with the removal of this variable because it is\nuseful to track down the number of members in a flexible array at\nshmem level. Even if you don't use that in some sanity checks for\ncode paths, which I think we actually should really do for at least\ninittapes() and leader_takeover_tapes() when it comes to the number of\nparticipants assumed to exist, that's useful for debugging purposes.\n\nRobert, this code has been introduced by 9da0cc3, could you comment on\nthat?\n--\nMichael",
"msg_date": "Mon, 16 Nov 2020 09:54:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove unused variable from SharedSort"
}
] |
[
{
"msg_contents": "Hackers,\n\nI'm investigating the bug report [1] about the behavior of\nwebsearch_to_tsquery() with quotes and multi-lexeme tokens. See the\nexample below.\n\n# select to_tsvector('pg_class foo') @@ websearch_to_tsquery('\"pg_class\nfoo\"');\n ?column?\n----------\n f\n\nSo, tsvector doesn't match tsquery, when absolutely the same text was\nput to the to_tsvector() and to the quotes of websearch_to_tsquery().\nLooks wrong to me. Let's examine output of to_tsvector() and\nwebsearch_to_tsquery().\n\n# select to_tsvector('pg_class foo');\n to_tsvector\n--------------------------\n 'class':2 'foo':3 'pg':1\n\n# select websearch_to_tsquery('\"pg_class foo\"');\n websearch_to_tsquery\n------------------------------\n ( 'pg' & 'class' ) <-> 'foo'\n(1 row)\n\nSo, 'pg_class' token was split into two lexemes 'pg' and 'class'. But\nthe output websearch_to_tsquery() connects 'pg' and 'class' with &\noperator. tsquery expects 'pg' and 'class' to be both neighbors of\n'foo'. So, 'pg' and 'class' are expected to share the same position,\nand that isn't true for tsvector. Let's see how phraseto_tsquery()\nhandles that.\n\n# select to_tsvector('pg_class foo') @@ phraseto_tsquery('pg_class foo');\n ?column?\n----------\n t\n\n# select phraseto_tsquery('pg_class foo');\n phraseto_tsquery\n----------------------------\n 'pg' <-> 'class' <-> 'foo'\n\nphraseto_tsquery() connects all the lexemes with phrase operators and\neverything works OK.\n\nFor me it's obvious that phraseto_tsquery() and websearch_to_tsquery()\nwith quotes should work the same way. Noticeably, current behavior of\nwebsearch_to_tsquery() is recorded in the regression tests. 
So, it\nmight look as if this behavior is intended, but it's too ridiculous, and\nI think the regression tests contain an oversight as well.\n\nI've prepared a fix, which doesn't break the fts parser abstractions\ntoo much (attached patch), but I've faced another similar issue in\nto_tsquery().\n\n# select to_tsvector('pg_class foo') @@ to_tsquery('pg_class <-> foo');\n ?column?\n----------\n f\n\n# select to_tsquery('pg_class <-> foo');\n to_tsquery\n------------------------------\n ( 'pg' & 'class' ) <-> 'foo'\n\nI think if a user writes 'pg_class <-> foo', then it's expected to\nmatch 'pg_class foo' independently of which lexemes 'pg_class' is\nsplit into.\n\nThis issue looks like a much more complex design bug in phrase\nsearch. Fixing it would require some kind of readahead or multipass\nprocessing, because we don't know how to process 'pg_class' in\nadvance.\n\nIs this really a design bug that has existed in phrase search from the\nbeginning? Or am I missing something?\n\nLinks\n1. https://www.postgresql.org/message-id/16592-70b110ff9731c07d%40postgresql.org\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 12 Nov 2020 16:09:51 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Phrase search vs. multi-lexeme tokens"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 4:09 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> This issue looks like the much more complex design bug in phrase\n> search. Fixing this would require some kind of readahead or multipass\n> processing, because we don't know how to process 'pg_class' in\n> advance.\n>\n> Is this really a design bug existing in phrase search from the\n> beginning. Or am I missing something?\n\nNo feedback yet. I've added this to the commitfest to don't lose track of this.\nhttps://commitfest.postgresql.org/31/2854/\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 1 Dec 2020 19:13:53 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Phrase search vs. multi-lexeme tokens"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> # select to_tsvector('pg_class foo') @@ websearch_to_tsquery('\"pg_class foo\"');\n> ?column?\n> ----------\n> f\n\nYeah, surely this is wrong.\n\n> # select to_tsquery('pg_class <-> foo');\n> to_tsquery\n> ------------------------------\n> ( 'pg' & 'class' ) <-> 'foo'\n\n> I think if a user writes 'pg_class <-> foo', then it's expected to\n> match 'pg_class foo' independently on which lexemes 'pg_class' is\n> split into.\n\nIndeed. It seems to me that this:\n\nregression=# select to_tsquery('pg_class');\n to_tsquery \n----------------\n 'pg' & 'class'\n(1 row)\n\nis wrong all by itself. Now that we have phrase search, a much\nsaner translation would be \"'pg' <-> 'class'\". If we fixed that\nthen it seems like the more complex case would just work.\n\nI read your patch over quickly and it seems like a reasonable\napproach (but sadly underdocumented). Can we extend the idea\nto fix the to_tsquery case?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Jan 2021 12:18:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Phrase search vs. multi-lexeme tokens"
},
{
"msg_contents": "Hi!\n\nOn Wed, Jan 6, 2021 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > # select to_tsvector('pg_class foo') @@ websearch_to_tsquery('\"pg_class foo\"');\n> > ?column?\n> > ----------\n> > f\n>\n> Yeah, surely this is wrong.\n\nThank you for confirming my thoughts. I also felt that is wrong but\ndoubted such a basic bug could exist for so long.\n\n> > # select to_tsquery('pg_class <-> foo');\n> > to_tsquery\n> > ------------------------------\n> > ( 'pg' & 'class' ) <-> 'foo'\n>\n> > I think if a user writes 'pg_class <-> foo', then it's expected to\n> > match 'pg_class foo' independently on which lexemes 'pg_class' is\n> > split into.\n>\n> Indeed. It seems to me that this:\n>\n> regression=# select to_tsquery('pg_class');\n> to_tsquery\n> ----------------\n> 'pg' & 'class'\n> (1 row)\n>\n> is wrong all by itself. Now that we have phrase search, a much\n> saner translation would be \"'pg' <-> 'class'\". If we fixed that\n> then it seems like the more complex case would just work.\n\nNice idea! Fixing this way should be much easier than fixing only the\ncase when we have the phrase operator on the upper level.\n\n> I read your patch over quickly and it seems like a reasonable\n> approach (but sadly underdocumented). Can we extend the idea\n> to fix the to_tsquery case?\n\nSure, I'll provide a revised patch.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 7 Jan 2021 06:36:05 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Phrase search vs. multi-lexeme tokens"
},
{
"msg_contents": "On Thu, Jan 7, 2021 at 6:36 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> > I read your patch over quickly and it seems like a reasonable\n> > approach (but sadly underdocumented). Can we extend the idea\n> > to fix the to_tsquery case?\n>\n> Sure, I'll provide a revised patch.\n\nThe next version of the patch is attached. Now, it just makes\nto_tsquery() and websearch_to_tsquery() use phrase operator to connect\nmultiple lexemes of the same tsquery token. I leave plainto_tsquery()\naside because it considers the whole argument as a single token.\nChanging it would make it an equivalent of phraseto_tsquery().\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 20 Jan 2021 20:27:01 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Phrase search vs. multi-lexeme tokens"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nGreetings,\r\n\r\nAlthough I am not an expert in this field, I carefully read the full-text search section in the document. I think the change is surprising, but yes, it is correct.\r\nI found that your patch didn't modify the regress/excepted/tsearch.out. So I updated it and carried out the regression test. It passed. Also, I manually executed some test cases, all of which were OK.",
"msg_date": "Mon, 25 Jan 2021 08:44:48 +0000",
"msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Phrase search vs. multi-lexeme tokens"
},
{
"msg_contents": "Hi, Neil!\n\nOn Mon, Jan 25, 2021 at 11:45 AM Neil Chen <carpenter.nail.cz@gmail.com> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> Greetings,\n>\n> Although I am not an expert in this field, I carefully read the full-text search section in the document. I think the change is surprising, but yes, it is correct.\n> I found that your patch didn't modify the regress/excepted/tsearch.out. So I updated it and carried out the regression test. It passed. Also, I manually executed some test cases, all of which were OK.\n\nThank you for looking into this. Yes, I've adjusted tsearch.sql\nregression tests to provide reasonable exercises for the new logic,\nbut forgot to add tsearch.out to the patch.\n\nBTW, you mentioned you read the documentation. Do you think it needs\nto be adjusted accordingly to the patch?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 25 Jan 2021 18:24:52 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Phrase search vs. multi-lexeme tokens"
},
{
"msg_contents": "Hi Alexander,\nOn Mon, Jan 25, 2021 at 11:25 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n>\n> BTW, you mentioned you read the documentation. Do you think it needs\n> to be adjusted accordingly to the patch?\n>\n>\nYes, I checked section 8.11, section 9.13 and Chapter 12 of the document.\nThe change of this patch did not conflict with the document, because it was\nnot mentioned in the document at all. We can simply not modify it, or we\ncan supplement these situations.\n\n-- \nThere is no royal road to learning.\nHighGo Software Co.\n\nHi Alexander,On Mon, Jan 25, 2021 at 11:25 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\nBTW, you mentioned you read the documentation. Do you think it needs\nto be adjusted accordingly to the patch? Yes, I checked section 8.11, section 9.13 and Chapter 12 of the document. The change of this patch did not conflict with the document, because it was not mentioned in the document at all. We can simply not modify it, or we can supplement these situations.-- There is no royal road to learning.HighGo Software Co.",
"msg_date": "Tue, 26 Jan 2021 09:31:32 +0800",
"msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Phrase search vs. multi-lexeme tokens"
},
{
"msg_contents": "On Tue, Jan 26, 2021 at 4:31 AM Neil Chen <carpenter.nail.cz@gmail.com> wrote:\n> On Mon, Jan 25, 2021 at 11:25 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>>\n>>\n>> BTW, you mentioned you read the documentation. Do you think it needs\n>> to be adjusted accordingly to the patch?\n>>\n>\n> Yes, I checked section 8.11, section 9.13 and Chapter 12 of the document. The change of this patch did not conflict with the document, because it was not mentioned in the document at all. We can simply not modify it, or we can supplement these situations.\n\nI've checked the docs myself and I think you're right (despite that's\nsurprising for me). It seems that this patch just changes\nundocumented aspects of full-text search to be more consistent and\nintuitive.\n\nThe revised patch is attached. This revision adds just comment and\ncommit message. I'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Fri, 29 Jan 2021 16:51:03 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Phrase search vs. multi-lexeme tokens"
}
] |
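The before/after contrast discussed in the thread above condenses into a few queries. The expected results in the comments follow the thread's own examples: on an unpatched server the first query returns false because to_tsquery() joins the lexemes of 'pg_class' with &, while the explicit phrase forms match.

```sql
-- Pre-patch: 'pg_class' becomes ( 'pg' & 'class' ), which inside a phrase
-- requires 'pg' and 'class' to share a position -- never true in the tsvector
SELECT to_tsvector('pg_class foo') @@ to_tsquery('pg_class <-> foo');      -- f

-- Connecting the lexemes with the phrase operator, as the patch does,
-- restores the intuitive match ('pg':1 'class':2 'foo':3 are adjacent)
SELECT to_tsvector('pg_class foo') @@ to_tsquery('pg <-> class <-> foo');  -- t
SELECT to_tsvector('pg_class foo') @@ phraseto_tsquery('pg_class foo');    -- t
```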
[
{
"msg_contents": "Hi, it looks like Vik Fearing's patch does not apply anymore, because there\nare many conflicts with recent changes, fixed patch attached.\nI am interested in reviewing and testing it for the next commitfest, if\nit's design and implementation is found to be acceptable.\nAdditionally, if it is also acceptable, I can add support for handling\nnegative indexes for nth_value(), to be able to reverse order from\nfirst/from last for the window frame.",
"msg_date": "Fri, 13 Nov 2020 00:35:20 +0200",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "On 11/12/20 11:35 PM, Krasiyan Andreev wrote:\n> Hi, it looks like Vik Fearing's patch does not apply anymore, because there\n> are many conflicts with recent changes, fixed patch attached.\n\nThanks! I've been away from this list for a while and I have some\ncatching up to do.\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 12 Nov 2020 23:44:07 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "Fixed patch attached, after new introduced conflicts.\nVik, can you add it to the next commitfest, to be able to test it.\nAlso, all tests from Oliver Ford's old patch also passed successfully.\n\nНа пт, 13.11.2020 г. в 0:44 ч. Vik Fearing <vik@postgresfriends.org> написа:\n\n> On 11/12/20 11:35 PM, Krasiyan Andreev wrote:\n> > Hi, it looks like Vik Fearing's patch does not apply anymore, because\n> there\n> > are many conflicts with recent changes, fixed patch attached.\n>\n> Thanks! I've been away from this list for a while and I have some\n> catching up to do.\n> --\n> Vik Fearing\n>",
"msg_date": "Sat, 21 Nov 2020 11:07:09 +0200",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "On 11/21/20 10:07 AM, Krasiyan Andreev wrote:\n> Fixed patch attached, after new introduced conflicts.\n> Vik, can you add it to the next commitfest, to be able to test it.\n\n\nI have done this now. Thanks!\n-- \nVik Fearing\n\n\n",
"msg_date": "Tue, 8 Dec 2020 15:27:15 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "Hi, after latest committed patches about multirange datatypes, I get a\ncompilation error,\nwhen I try to apply a patch about respect/ignore null for window functions.\nWithout it applied, it complies clean and all checks are passed.\n\n[krasiyan@localhost build]$ /home/krasiyan/pgsql/postgresql/configure\n--with-openssl --with-libxml --with-libxslt --with-systemd --with-selinux\n--with-perl --with-python --enable-cassert --prefix=/var/lib/pgsql/14\n...\ngcc -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -Wno-stringop-truncation -O2 -I../../../src/include\n-I/home/krasiyan/pgsql/postgresql/src/include -D_GNU_SOURCE\n-I/usr/include/libxml2 -c -o typecmds.o\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c: In\nfunction ‘makeMultirangeConstructors’:\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1849:9:\nwarning: passing argument 17 of ‘ProcedureCreate’ makes integer from\npointer without a cast [-Wint-conversion]\n 1849 | argtypes, /* parameterTypes */\n | ^~~~~~~~\n | |\n | oidvector *\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:206:16: note:\nexpected ‘char’ but argument is of type ‘oidvector *’\n 206 | char parallel,\n | ~~~~~^~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:32:\n/home/krasiyan/pgsql/postgresql/src/include/postgres.h:556:29: warning:\npassing argument 18 of ‘ProcedureCreate’ makes pointer from integer without\na cast [-Wint-conversion]\n 556 | #define PointerGetDatum(X) ((Datum) (X))\n | ~^~~~~~~~~~~~\n | |\n | long unsigned 
int\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1850:9:\nnote: in expansion of macro ‘PointerGetDatum’\n 1850 | PointerGetDatum(NULL), /* allParameterTypes */\n | ^~~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:207:22: note:\nexpected ‘oidvector *’ but argument is of type ‘long unsigned int’\n 207 | oidvector *parameterTypes,\n | ~~~~~~~~~~~^~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/include/access/tupdesc.h:19,\n from\n/home/krasiyan/pgsql/postgresql/src/include/utils/relcache.h:18,\n from\n/home/krasiyan/pgsql/postgresql/src/include/access/genam.h:21,\n from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:34:\n/home/krasiyan/pgsql/postgresql/src/include/nodes/pg_list.h:65:19: warning:\npassing argument 21 of ‘ProcedureCreate’ makes integer from pointer without\na cast [-Wint-conversion]\n 65 | #define NIL ((List *) NULL)\n | ~^~~~~~~~~~~~~~\n | |\n | List *\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1853:9:\nnote: in expansion of macro ‘NIL’\n 1853 | NIL, /* parameterDefaults */\n | ^~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:210:17: note:\nexpected ‘Datum’ {aka ‘long unsigned int’} but argument is of type ‘List *’\n 210 | Datum parameterNames,\n | ~~~~~~^~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:32:\n/home/krasiyan/pgsql/postgresql/src/include/postgres.h:556:29: warning:\npassing argument 22 of ‘ProcedureCreate’ makes pointer from integer without\na cast [-Wint-conversion]\n 556 | #define PointerGetDatum(X) ((Datum) (X))\n | ~^~~~~~~~~~~~\n | |\n | long unsigned int\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1854:9:\nnote: in expansion of macro 
‘PointerGetDatum’\n 1854 | PointerGetDatum(NULL), /* trftypes */\n | ^~~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:211:17: note:\nexpected ‘List *’ but argument is of type ‘long unsigned int’\n 211 | List *parameterDefaults,\n | ~~~~~~^~~~~~~~~~~~~~~~~\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1833:11:\nerror: too few arguments to function ‘ProcedureCreate’\n 1833 | myself = ProcedureCreate(name, /* name: same as multirange type */\n | ^~~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:190:22: note:\ndeclared here\n 190 | extern ObjectAddress ProcedureCreate(const char *procedureName,\n | ^~~~~~~~~~~~~~~\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1892:9:\nwarning: passing argument 17 of ‘ProcedureCreate’ makes integer from\npointer without a cast [-Wint-conversion]\n 1892 | argtypes, /* parameterTypes */\n | ^~~~~~~~\n | |\n | oidvector *\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:206:16: note:\nexpected ‘char’ but argument is of type ‘oidvector *’\n 206 | char parallel,\n | ~~~~~^~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:32:\n/home/krasiyan/pgsql/postgresql/src/include/postgres.h:556:29: warning:\npassing argument 18 of ‘ProcedureCreate’ makes pointer from integer without\na cast [-Wint-conversion]\n 556 | #define PointerGetDatum(X) ((Datum) (X))\n | ~^~~~~~~~~~~~\n | |\n | long unsigned int\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1893:9:\nnote: in expansion of macro ‘PointerGetDatum’\n 1893 | PointerGetDatum(NULL), /* allParameterTypes */\n | ^~~~~~~~~~~~~~~\nIn file included 
from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:207:22: note:\nexpected ‘oidvector *’ but argument is of type ‘long unsigned int’\n 207 | oidvector *parameterTypes,\n | ~~~~~~~~~~~^~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/include/access/tupdesc.h:19,\n from\n/home/krasiyan/pgsql/postgresql/src/include/utils/relcache.h:18,\n from\n/home/krasiyan/pgsql/postgresql/src/include/access/genam.h:21,\n from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:34:\n/home/krasiyan/pgsql/postgresql/src/include/nodes/pg_list.h:65:19: warning:\npassing argument 21 of ‘ProcedureCreate’ makes integer from pointer without\na cast [-Wint-conversion]\n 65 | #define NIL ((List *) NULL)\n | ~^~~~~~~~~~~~~~\n | |\n | List *\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1896:9:\nnote: in expansion of macro ‘NIL’\n 1896 | NIL, /* parameterDefaults */\n | ^~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:210:17: note:\nexpected ‘Datum’ {aka ‘long unsigned int’} but argument is of type ‘List *’\n 210 | Datum parameterNames,\n | ~~~~~~^~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:32:\n/home/krasiyan/pgsql/postgresql/src/include/postgres.h:556:29: warning:\npassing argument 22 of ‘ProcedureCreate’ makes pointer from integer without\na cast [-Wint-conversion]\n 556 | #define PointerGetDatum(X) ((Datum) (X))\n | ~^~~~~~~~~~~~\n | |\n | long unsigned int\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1897:9:\nnote: in expansion of macro ‘PointerGetDatum’\n 1897 | PointerGetDatum(NULL), /* trftypes */\n | ^~~~~~~~~~~~~~~\nIn file included 
from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:211:17: note:\nexpected ‘List *’ but argument is of type ‘long unsigned int’\n 211 | List *parameterDefaults,\n | ~~~~~~^~~~~~~~~~~~~~~~~\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1876:11:\nerror: too few arguments to function ‘ProcedureCreate’\n 1876 | myself = ProcedureCreate(name, /* name: same as multirange type */\n | ^~~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:190:22: note:\ndeclared here\n 190 | extern ObjectAddress ProcedureCreate(const char *procedureName,\n | ^~~~~~~~~~~~~~~\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1932:9:\nwarning: passing argument 17 of ‘ProcedureCreate’ makes integer from\npointer without a cast [-Wint-conversion]\n 1932 | argtypes, /* parameterTypes */\n | ^~~~~~~~\n | |\n | oidvector *\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:206:16: note:\nexpected ‘char’ but argument is of type ‘oidvector *’\n 206 | char parallel,\n | ~~~~~^~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:32:\n/home/krasiyan/pgsql/postgresql/src/include/postgres.h:556:29: warning:\npassing argument 18 of ‘ProcedureCreate’ makes pointer from integer without\na cast [-Wint-conversion]\n 556 | #define PointerGetDatum(X) ((Datum) (X))\n | ~^~~~~~~~~~~~\n | |\n | long unsigned int\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1933:9:\nnote: in expansion of macro ‘PointerGetDatum’\n 1933 | PointerGetDatum(allParameterTypes), /* allParameterTypes */\n | ^~~~~~~~~~~~~~~\nIn file included 
from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:207:22: note:\nexpected ‘oidvector *’ but argument is of type ‘long unsigned int’\n 207 | oidvector *parameterTypes,\n | ~~~~~~~~~~~^~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/include/access/tupdesc.h:19,\n from\n/home/krasiyan/pgsql/postgresql/src/include/utils/relcache.h:18,\n from\n/home/krasiyan/pgsql/postgresql/src/include/access/genam.h:21,\n from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:34:\n/home/krasiyan/pgsql/postgresql/src/include/nodes/pg_list.h:65:19: warning:\npassing argument 21 of ‘ProcedureCreate’ makes integer from pointer without\na cast [-Wint-conversion]\n 65 | #define NIL ((List *) NULL)\n | ~^~~~~~~~~~~~~~\n | |\n | List *\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1936:9:\nnote: in expansion of macro ‘NIL’\n 1936 | NIL, /* parameterDefaults */\n | ^~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:210:17: note:\nexpected ‘Datum’ {aka ‘long unsigned int’} but argument is of type ‘List *’\n 210 | Datum parameterNames,\n | ~~~~~~^~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:32:\n/home/krasiyan/pgsql/postgresql/src/include/postgres.h:556:29: warning:\npassing argument 22 of ‘ProcedureCreate’ makes pointer from integer without\na cast [-Wint-conversion]\n 556 | #define PointerGetDatum(X) ((Datum) (X))\n | ~^~~~~~~~~~~~\n | |\n | long unsigned int\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1937:9:\nnote: in expansion of macro ‘PointerGetDatum’\n 1937 | PointerGetDatum(NULL), /* trftypes */\n | ^~~~~~~~~~~~~~~\nIn file included 
from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:211:17: note:\nexpected ‘List *’ but argument is of type ‘long unsigned int’\n 211 | List *parameterDefaults,\n | ~~~~~~^~~~~~~~~~~~~~~~~\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:1916:11:\nerror: too few arguments to function ‘ProcedureCreate’\n 1916 | myself = ProcedureCreate(name, /* name: same as multirange type */\n | ^~~~~~~~~~~~~~~\nIn file included from\n/home/krasiyan/pgsql/postgresql/src/backend/commands/typecmds.c:52:\n/home/krasiyan/pgsql/postgresql/src/include/catalog/pg_proc.h:190:22: note:\ndeclared here\n 190 | extern ObjectAddress ProcedureCreate(const char *procedureName,\n | ^~~~~~~~~~~~~~~\nmake[3]: *** [<builtin>: typecmds.o] Error 1\nmake[3]: Leaving directory '/home/krasiyan/pgsql/build/src/backend/commands'\nmake[2]: *** [/home/krasiyan/pgsql/postgresql/src/backend/common.mk:39:\ncommands-recursive] Error 2\nmake[2]: Leaving directory '/home/krasiyan/pgsql/build/src/backend'\nmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\nmake[1]: Leaving directory '/home/krasiyan/pgsql/build/src'\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n[krasiyan@localhost build]$\n\nНа вт, 8.12.2020 г. в 16:27 ч. Vik Fearing <vik@postgresfriends.org> написа:\n\n> On 11/21/20 10:07 AM, Krasiyan Andreev wrote:\n> > Fixed patch attached, after new introduced conflicts.\n> > Vik, can you add it to the next commitfest, to be able to test it.\n>\n>\n> I have done this now. Thanks!\n> --\n> Vik Fearing\n>",
"msg_date": "Wed, 30 Dec 2020 21:32:26 +0200",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "On Wed, Dec 30, 2020 at 09:32:26PM +0200, Krasiyan Andreev wrote:\n> Hi, after latest committed patches about multirange datatypes, I get a\n> compilation error,\n\nOh, right. I'd been meaning to send a patch to fix that. Here it is.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Wed, 30 Dec 2020 20:50:37 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "It works - now it compiles clean and all checks are passed, thank you. I\nwill continue with more complex tests.\n\nOn Wed, 30.12.2020 at 21:50, David Fetter <david@fetter.org> wrote:\n\n> On Wed, Dec 30, 2020 at 09:32:26PM +0200, Krasiyan Andreev wrote:\n> > Hi, after latest committed patches about multirange datatypes, I get a\n> > compilation error,\n>\n> Oh, right. I'd been meaning to send a patch to fix that. Here it is.\n>\n> Best,\n> David.\n> --\n> David Fetter <david(at)fetter(dot)org> http://fetter.org/\n> Phone: +1 415 235 3778\n>\n> Remember to vote!\n> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>",
"msg_date": "Wed, 30 Dec 2020 22:16:14 +0200",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "Hi, it looks like cfbot.cputube.org didn't recognize and can't apply a\npatch, so I resend it now with a different format, no other changes.\n\nOn Wed, 30.12.2020 at 22:16, Krasiyan Andreev <krasiyan@gmail.com>\nwrote:\n\n> It works - now it compiles clean and all checks are passed, thank you. I\n> will continue with more complex tests.\n>\n> On Wed, 30.12.2020 at 21:50, David Fetter <david@fetter.org> wrote:\n>\n>> On Wed, Dec 30, 2020 at 09:32:26PM +0200, Krasiyan Andreev wrote:\n>> > Hi, after latest committed patches about multirange datatypes, I get a\n>> > compilation error,\n>>\n>> Oh, right. I'd been meaning to send a patch to fix that. Here it is.\n>>\n>> Best,\n>> David.\n>> --\n>> David Fetter <david(at)fetter(dot)org> http://fetter.org/\n>> Phone: +1 415 235 3778\n>>\n>> Remember to vote!\n>> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>>\n>",
"msg_date": "Fri, 1 Jan 2021 22:58:36 +0200",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "Krasiyan:\nHappy New Year.\n\nFor WinGetFuncArgInPartition():\n\n+ if (target > 0)\n+ step = 1;\n+ else if (target < 0)\n+ step = -1;\n+ else\n+ step = 0;\n\nWhen would the last else statement execute ? Since the above code is\nfor WINDOW_SEEK_CURRENT, I wonder why step should be 0.\n\nSimilar question for the last else statement below\nin WinGetFuncArgInFrame():\n\n+ else if (seektype == WINDOW_SEEK_TAIL)\n+ step = -1;\n+ else\n+ step = 0;\n\nThanks\n\nOn Fri, Jan 1, 2021 at 12:59 PM Krasiyan Andreev <krasiyan@gmail.com> wrote:\n\n> Hi, it looks like cfbot.cputube.org didn't recognize and can't apply a\n> patch, so I resend it now with a different format, no other changes.\n>\n> On Wed, 30.12.2020 at 22:16, Krasiyan Andreev <krasiyan@gmail.com>\n> wrote:\n>\n>> It works - now it compiles clean and all checks are passed, thank you. I\n>> will continue with more complex tests.\n>>\n>> On Wed, 30.12.2020 at 21:50, David Fetter <david@fetter.org> wrote:\n>>\n>>> On Wed, Dec 30, 2020 at 09:32:26PM +0200, Krasiyan Andreev wrote:\n>>> > Hi, after latest committed patches about multirange datatypes, I get a\n>>> > compilation error,\n>>>\n>>> Oh, right. I'd been meaning to send a patch to fix that. Here it is.\n>>>\n>>> Best,\n>>> David.\n>>> --\n>>> David Fetter <david(at)fetter(dot)org> http://fetter.org/\n>>> Phone: +1 415 235 3778\n>>>\n>>> Remember to vote!\n>>> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>>>\n>",
"msg_date": "Fri, 1 Jan 2021 13:21:10 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "On Fri, Jan 01, 2021 at 01:21:10PM -0800, Zhihong Yu wrote:\n> Krasiyan:\n> Happy New Year.\n> \n> For WinGetFuncArgInPartition():\n> \n> + if (target > 0)\n> + step = 1;\n> + else if (target < 0)\n> + step = -1;\n> + else\n> + step = 0;\n> \n> When would the last else statement execute ? Since the above code is\n> for WINDOW_SEEK_CURRENT, I wonder why step should be 0.\n\nIf it does actually need step to be one of those three choices, it\nmight be shorter (well, less branchy) to write as\n\n step = (target > 0) - (target < 0);\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 3 Jan 2021 20:04:16 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "On 1/1/21 10:21 PM, Zhihong Yu wrote:\n> Krasiyan:\n> Happy New Year.\n> \n> For WinGetFuncArgInPartition():\n> \n> + if (target > 0)\n> + step = 1;\n> + else if (target < 0)\n> + step = -1;\n> + else\n> + step = 0;\n> \n> When would the last else statement execute ? Since the above code is\n> for WINDOW_SEEK_CURRENT, I wonder why step should be 0.\n\nHi.\n\n\"lag(expr, 0) over w\" is useless but valid.\n-- \nVik Fearing\n\n\n",
"msg_date": "Fri, 8 Jan 2021 01:02:52 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "Hi, the building warning below is fixed now, no other changes. Also, I can\nconfirm that the corner case with offset=0 in lead and lag works correctly.\n\ngcc -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -Wno-stringop-truncation -O2 -I../../../src/include\n-I/home/krasiyan/pgsql/postgresql/src/include -D_GNU_SOURCE\n-I/usr/include/libxml2 -c -o nodeWindowAgg.o\n/home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c\n/home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c: In\nfunction ‘WinGetFuncArgInPartition’:\n/home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c:3274:10:\nwarning: ‘step’ may be used uninitialized in this function\n[-Wmaybe-uninitialized]\n 3274 | relpos += step;\n | ~~~~~~~^~~~~~~\n/home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c: In\nfunction ‘WinGetFuncArgInFrame’:\n/home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c:3531:10:\nwarning: ‘step’ may be used uninitialized in this function\n[-Wmaybe-uninitialized]\n 3531 | relpos += step;\n | ~~~~~~~^~~~~~~\n\n\n\nOn Fri, 8.01.2021 at 2:02, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 1/1/21 10:21 PM, Zhihong Yu wrote:\n> > Krasiyan:\n> > Happy New Year.\n> >\n> > For WinGetFuncArgInPartition():\n> >\n> > + if (target > 0)\n> > + step = 1;\n> > + else if (target < 0)\n> > + step = -1;\n> > + else\n> > + step = 0;\n> >\n> > When would the last else statement execute ? Since the above code is\n> > for WINDOW_SEEK_CURRENT, I wonder why step should be 0.\n>\n> Hi.\n>\n> \"lag(expr, 0) over w\" is useless but valid.\n> --\n> Vik Fearing\n>",
"msg_date": "Sat, 9 Jan 2021 13:30:59 +0200",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "Hi, patch applies and compiles, all included and external tests and\nbuilding of the docs pass.\nAfter the last run of the cfbot, there are no building warnings.\nI am using the last version in our testing environment with real data and I\ndidn't find any bugs,\nso I'm marking this patch as ready for the committer in the commitfest app.\n\nOn Sat, 9.01.2021 at 13:30, Krasiyan Andreev <krasiyan@gmail.com> wrote:\n\n> Hi, the building warning below is fixed now, no other changes. Also, I can\n> confirm that the corner case with offset=0 in lead and lag works correctly.\n>\n> gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n> -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../../src/include\n> -I/home/krasiyan/pgsql/postgresql/src/include -D_GNU_SOURCE\n> -I/usr/include/libxml2 -c -o nodeWindowAgg.o\n> /home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c\n> /home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c: In\n> function ‘WinGetFuncArgInPartition’:\n> /home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c:3274:10:\n> warning: ‘step’ may be used uninitialized in this function\n> [-Wmaybe-uninitialized]\n> 3274 | relpos += step;\n> | ~~~~~~~^~~~~~~\n> /home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c: In\n> function ‘WinGetFuncArgInFrame’:\n> /home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c:3531:10:\n> warning: ‘step’ may be used uninitialized in this function\n> [-Wmaybe-uninitialized]\n> 3531 | relpos += step;\n> | ~~~~~~~^~~~~~~\n>\n>\n>\n> On Fri, 8.01.2021 at 2:02, Vik Fearing <vik@postgresfriends.org>\n> wrote:\n>\n>> On 1/1/21 10:21 PM, Zhihong Yu wrote:\n>> > Krasiyan:\n>> > Happy New Year.\n>> >\n>> > For WinGetFuncArgInPartition():\n>> >\n>> > + if (target > 0)\n>> > + step = 1;\n>> > + else if (target < 0)\n>> > + step = -1;\n>> > + else\n>> > + step = 0;\n>> >\n>> > When would the last else statement execute ? Since the above code is\n>> > for WINDOW_SEEK_CURRENT, I wonder why step should be 0.\n>>\n>> Hi.\n>>\n>> \"lag(expr, 0) over w\" is useless but valid.\n>> --\n>> Vik Fearing\n>>\n>",
"msg_date": "Sat, 9 Jan 2021 17:01:44 +0200",
"msg_from": "Krasiyan Andreev <krasiyan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "Hi,\nFor WinGetFuncArgInFrame():\n\n+ if (winobj->null_treatment == NULL_TREATMENT_IGNORE)\n {\n...\n+ if (target > 0)\n+ step = 1;\n+ else if (target < 0)\n+ step = -1;\n+ else if (seektype == WINDOW_SEEK_HEAD)\n+ step = 1;\n+ else if (seektype == WINDOW_SEEK_TAIL)\n+ step = -1;\n+ else\n+ step = 0;\n...\n+ relpos = 0;\n+ }\n\nWhy is relpos always set to 0 ?\nIn similar code in WinGetFuncArgInPartition(), I saw the following:\n\n+ if (target > 0)\n+ step = 1;\n+ else if (target < 0)\n+ step = -1;\n+ else\n+ step = 0;\n+ relpos = step;\n\nMaybe add a comment above the relpos assignment.\n\nThanks\n\nOn Sat, Jan 9, 2021 at 3:31 AM Krasiyan Andreev <krasiyan@gmail.com> wrote:\n\n> Hi, the building warning below is fixed now, no other changes. Also, I can\n> confirm that the corner case with offset=0 in lead and lag works correctly.\n>\n> gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n> -Wno-format-truncation -Wno-stringop-truncation -O2 -I../../../src/include\n> -I/home/krasiyan/pgsql/postgresql/src/include -D_GNU_SOURCE\n> -I/usr/include/libxml2 -c -o nodeWindowAgg.o\n> /home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c\n> /home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c: In\n> function ‘WinGetFuncArgInPartition’:\n> /home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c:3274:10:\n> warning: ‘step’ may be used uninitialized in this function\n> [-Wmaybe-uninitialized]\n> 3274 | relpos += step;\n> | ~~~~~~~^~~~~~~\n> /home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c: In\n> function ‘WinGetFuncArgInFrame’:\n> /home/krasiyan/pgsql/postgresql/src/backend/executor/nodeWindowAgg.c:3531:10:\n> warning: ‘step’ may be used uninitialized in this function\n> [-Wmaybe-uninitialized]\n> 3531 | relpos += step;\n> | ~~~~~~~^~~~~~~\n>\n>\n>\n> On Fri, 8.01.2021 at 2:02, Vik Fearing <vik@postgresfriends.org>\n> wrote:\n>\n>> On 1/1/21 10:21 PM, Zhihong Yu wrote:\n>> > Krasiyan:\n>> > Happy New Year.\n>> >\n>> > For WinGetFuncArgInPartition():\n>> >\n>> > + if (target > 0)\n>> > + step = 1;\n>> > + else if (target < 0)\n>> > + step = -1;\n>> > + else\n>> > + step = 0;\n>> >\n>> > When would the last else statement execute ? Since the above code is\n>> > for WINDOW_SEEK_CURRENT, I wonder why step should be 0.\n>>\n>> Hi.\n>>\n>> \"lag(expr, 0) over w\" is useless but valid.\n>> --\n>> Vik Fearing\n>>\n>",
"msg_date": "Sat, 9 Jan 2021 07:29:11 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "I started to look through this patch, and the first thing I'm wondering\nabout is why bother with a new pg_proc column, ie why not just apply the\nbehavior to any window function? The fact that the SQL committee\nrestricts this syntax to a few window functions is just their usual\ndesign tic of creating one-off syntax that could be much more general.\nWe've not hesitated to generalize in similar situations in the past.\n\nThe main thing I can see against that position is that it's not very\nclear what to do if the window function has more than one window-ized\nargument --- or at least, the only plausible interpretation would be\nto ignore rows in which any of those arguments is null, which this\nimplementation is incapable of doing (since we don't know exactly\nwhich arguments the function will try to treat as window-ized).\nHowever, having a pronulltreatment column isn't helping that\nsituation at all: somebody could mark a multi-input window function\nas ignoring nulls, and we'd silently do the wrong thing in any case\nwhere those inputs weren't nulls at exactly the same rows.\n\nMy thought, therefore, is to drop pg_proc.pronulltreatment and instead\nenforce an implementation restriction that when IGNORE NULLS is specified, \nWinGetFuncArgInPartition and WinGetFuncArgInFrame throw an error if\nasked about any argument position other than the first one. As long\nas only the first argument is window-ized, the implementation you have\nhere will act correctly. 
If anybody ever finds that annoying, they can\nfigure out how to relax the restriction at that time.\n\nThe need for a TREAT NULLS option to CREATE FUNCTION would thereby\nalso go away, which is good because I don't think this patch has\nfully implemented that (notably, I don't see any pg_dump changes).\n\nAs far as the actual implementation goes:\n\n* The undocumented relationship between \"relpos\" (which used to be\nconstant and now isn't) and \"target\" and \"step\" makes my head hurt.\nI'm sure this could be redesigned to be simpler, or if not, at\nleast it should be commented a lot more thoroughly.\n\n* I'm quite concerned about performance; it looks like this will\ndegrade to O(N^2) in practical situations, which isn't going to\nsatisfy anyone. I think we need to track how many nulls we've\nalready seen so that we aren't re-visiting earlier rows over and\nover. That should make it possible to un-disable the set_mark\noptimization, which is something that's independently catastrophic\nfor performance. While I've not stopped to design this fully, maybe\nwe could keep state along the lines of \"there are j rows with null\nvalues of the window-ized argument before row k of the partition.\"\nUpdating that by dead reckoning as we navigate would be enough to\nfix the O(N^2) problem for typical scenarios. I think.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 11 Jan 2021 17:00:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "Hi Vik,\n\nOn 1/11/21 5:00 PM, Tom Lane wrote:\n> I started to look through this patch...\n\nI see you moved this patch to PG15. If you won't be working on the patch \nin this CF perhaps it would be better to close it as Returned with \nFeedback for now and reopen it when you have a new patch?\n\nI'll do that on March 23 unless I hear arguments to the contrary.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 18 Mar 2021 11:12:03 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "On 3/18/21 4:12 PM, David Steele wrote:\n> Hi Vik,\n> \n> On 1/11/21 5:00 PM, Tom Lane wrote:\n>> I started to look through this patch...\n> \n> I see you moved this patch to PG15. If you won't be working on the patch\n> in this CF\n\nCorrect. I won't be working on this again until I finish my review of\nthe system versioning patch.\n\n> perhaps it would be better to close it as Returned with\n> Feedback for now and reopen it when you have a new patch?\n\nIf that is preferred over moving it to PG15, then no objection. As long\nas people don't think I've abandoned it.\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 18 Mar 2021 17:03:34 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
},
{
"msg_contents": "On 3/18/21 12:03 PM, Vik Fearing wrote:\n> On 3/18/21 4:12 PM, David Steele wrote:\n>> Hi Vik,\n>>\n>> On 1/11/21 5:00 PM, Tom Lane wrote:\n>>> I started to look through this patch...\n>>\n>> I see you moved this patch to PG15. If you won't be working on the patch\n>> in this CF\n> \n> Correct. I won't be working on this again until I finish my review of\n> the system versioning patch.\n> \n>> perhaps it would be better to close it as Returned with\n>> Feedback for now and reopen it when you have a new patch?\n> \n> If that is preferred over moving it to PG15, then no objection. \n\nIt is, because it means it doesn't need to be looked at again until you \nhave had time to work on it.\n\n> As long\n> as people don't think I've abandoned it.\n\nThis declaration should prevent that.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 18 Mar 2021 12:36:09 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Implement <null treatment> for window functions"
}
]
[
{
"msg_contents": "Hi,\n\nAttached is a patch to allow estimation of (Var op Var) clauses using\nextended statistics. Currently we only use extended stats to estimate\n(Var op Const) clauses, which is sufficient for most cases, but it's not\nvery hard to support this second type of clauses.\n\nThis is not an entirely new patch - I've originally included it in the\npatch series in [1] but it's probably better to discuss it separately,\nso that it does not get buried in that discussion.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/20200113230008.g67iyk4cs3xbnjju@development\n\nTo illustrate the purpose of this patch, consider this:\n\n db=# create table t (a int, b int);\n CREATE TABLE\n\n db=# insert into t select mod(i,10), mod(i,10)+1\n from generate_series(1,100000) s(i);\n INSERT 0 100000\n\n db=# analyze t;\n ANALYZE\n\n db=# explain select * from t where a < b;\n QUERY PLAN\n --------------------------------------------------------\n Seq Scan on t (cost=0.00..1693.00 rows=33333 width=8)\n Filter: (a < b)\n (2 rows)\n\n db=# explain select * from t where a > b;\n QUERY PLAN\n --------------------------------------------------------\n Seq Scan on t (cost=0.00..1693.00 rows=33333 width=8)\n Filter: (a > b)\n (2 rows)\n\n db=# create statistics s (mcv) on a,b from t;\n CREATE STATISTICS\n\n db=# analyze t;\n ANALYZE\n\n db=# explain select * from t where a < b;\n QUERY PLAN\n ---------------------------------------------------------\n Seq Scan on t (cost=0.00..1693.00 rows=100000 width=8)\n Filter: (a < b)\n (2 rows)\n\n db=# explain select * from t where a > b;\n QUERY PLAN\n ----------------------------------------------------\n Seq Scan on t (cost=0.00..1693.00 rows=1 width=8)\n Filter: (a > b)\n (2 rows)\n\n\nI'm not entirely convinced this patch (on its own) is very useful, for\na couple of reasons:\n\n(a) Clauses of this form are not particularly common, at least compared\nto the Var op Const clauses. (I don't recall slow-query reports from any\nof our mailing lists that might be attributed to such clauses.)\n\n(b) For known cases of such queries (e.g. several TPC-H queries do use\nclauses like \"l_commitdate < l_receiptdate\" etc.) this is somewhat\nhindered by extended statistics only supporting MCV lists, which may not\nwork particularly well for high-cardinality columns like dates etc.\n\nBut despite that it seems like a useful feature / building block, and\nthose limitations may be addressed in some other way (e.g. we may add\nmulti-dimensional histograms to address the second one).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 13 Nov 2020 02:14:28 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Nov 12, 2020, at 5:14 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> <0001-Support-estimation-of-clauses-of-the-form-V-20201113.patch>\n\nYour patch no longer applies. Can we get a new version please?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 1 Mar 2021 11:58:15 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\nOn 3/1/21 8:58 PM, Mark Dilger wrote:\n> \n> \n>> On Nov 12, 2020, at 5:14 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> <0001-Support-estimation-of-clauses-of-the-form-V-20201113.patch>\n> \n> Your patch no longer applies. Can we get a new version please?\n> \n\nI do not plan to work on this patch in the 2021-03 commitfest. I'll \nfocus on the other patch about extended statistics on expressions.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 1 Mar 2021 21:49:35 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "Hi,\n\nHere is a slightly updated version of the patch - rebased to current \nmaster and fixing some minor issues to handle expressions (and not just \nthe Var nodes as before).\n\nThe changes needed to support (Expr op Expr) are mostly mechanical, \nthough I'm sure the code needs some cleanup. The main issue I ran into \nis the special case clauselist_selectivity, which does\n\n if (list_length(clauses) == 1)\n return clause_selectivity_ext(...);\n\nwhich applies to cases like \"WHERE a < b\" which can now be handled by \nextended statistics, thanks to this patch. But clause_selectivity_ext \nonly used to call restriction_selectivity for these clauses, which does \nnot use extended statistics, of course.\n\nI considered either getting rid of the special case, passing everything \nthrough extended stats, including cases with a single clause. But that \nends up affecting e.g. OR clauses, so I tweaked clause_selectivity_ext a \nbit, which seems like a better approach.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 13 Jun 2021 22:28:43 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Sun, Jun 13, 2021 at 1:29 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> Here is a slightly updated version of the patch - rebased to current\n> master and fixing some minor issues to handle expressions (and not just\n> the Var nodes as before).\n>\n> The changes needed to support (Expr op Expr) are mostly mechanical,\n> though I'm sure the code needs some cleanup. The main issue I ran into\n> is the special case clauselist_selectivity, which does\n>\n> if (list_length(clauses) == 1)\n> return clause_selectivity_ext(...);\n>\n> which applies to cases like \"WHERE a < b\" which can now be handled by\n> extended statistics, thanks to this patch. But clause_selectivity_ext\n> only used to call restriction_selectivity for these clauses, which does\n> not use extended statistics, of course.\n>\n> I considered either getting rid of the special case, passing everything\n> through extended stats, including cases with a single clause. But that\n> ends up affecting e.g. OR clauses, so I tweaked clause_selectivity_ext a\n> bit, which seems like a better approach.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\nHi,\n\n- for (i = 0; i < mcvlist->nitems; i++)\n+ if (cst) /* Expr op Const */\n\nIt seems the Const op Expr is also covered by this if branch. Hence the\ncomment should include this case.\n\nCheers\n\nOn Sun, Jun 13, 2021 at 1:29 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:Hi,\n\nHere is a slightly updated version of the patch - rebased to current \nmaster and fixing some minor issues to handle expressions (and not just \nthe Var nodes as before).\n\nThe changes needed to support (Expr op Expr) are mostly mechanical, \nthough I'm sure the code needs some cleanup. 
The main issue I ran into \nis the special case clauselist_selectivity, which does\n\n if (list_length(clauses) == 1)\n return clause_selectivity_ext(...);\n\nwhich applies to cases like \"WHERE a < b\" which can now be handled by \nextended statistics, thanks to this patch. But clause_selectivity_ext \nonly used to call restriction_selectivity for these clauses, which does \nnot use extended statistics, of course.\n\nI considered either getting rid of the special case, passing everything \nthrough extended stats, including cases with a single clause. But that \nends up affecting e.g. OR clauses, so I tweaked clause_selectivity_ext a \nbit, which seems like a better approach.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL CompanyHi,- for (i = 0; i < mcvlist->nitems; i++)+ if (cst) /* Expr op Const */It seems the Const op Expr is also covered by this if branch. Hence the comment should include this case.Cheers",
"msg_date": "Sun, 13 Jun 2021 18:46:13 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Jun 13, 2021, at 1:28 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> Here is a slightly updated version of the patch\n\nThanks for taking this up again!\n\nApplying the new test cases from your patch, multiple estimates have gotten better. That looks good. I wrote a few extra test cases and saw no change, which is fine. I was looking for regressions where the estimates are now worse than before. Do you expect there to be any such cases?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 14 Jun 2021 08:36:11 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On 6/14/21 5:36 PM, Mark Dilger wrote:\n> \n> \n>> On Jun 13, 2021, at 1:28 PM, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>> \n>> Here is a slightly updated version of the patch\n> \n> Thanks for taking this up again!\n> \n> Applying the new test cases from your patch, multiple estimates have\n> gotten better. That looks good. I wrote a few extra test cases and\n> saw no change, which is fine. I was looking for regressions where\n> the estimates are now worse than before. Do you expect there to be\n> any such cases?\n> \n\nNot really. These clauses could not be estimated before, we generally\nused default estimates for them. So\n\n WHERE a = b\n\nwould use 0.5%, while\n\n WHERE a < b\n\nwould use 33%, and so on. OTOH it depends on the accuracy of the\nextended statistics - particularly the MCV list (what fraction of the\ndata it covers, etc.).\n\nSo it's possible the default estimate is very accurate by chance, and\nthe MCV list represents only a tiny fraction of the data. Then the new\nestimate could be worse. Consider for example this:\n\ncreate table t (a int, b int);\ninsert into t select 100, 100 from generate_series(1,5000) s(i);\ninsert into t select i, i+1 from generate_series(1,995000) s(i);\n\nThis has exactly 0.5% of rows with (a=b). Without extended stats it's\nperfect:\n\n explain analyze select * from t where a = b;\n\n Seq Scan on t (cost=0.00..16925.00 rows=5000 width=8)\n (actual time=0.064..159.928 rows=5000 loops=1)\n\nwhile with statistics it gets worse:\n\n create statistics s (mcv) on a, b from t;\n analyze t;\n\n Seq Scan on t (cost=0.00..16925.00 rows=9810 width=8)\n (actual time=0.059..160.467 rows=5000 loops=1)\n\nIt's not terrible, although we could construct worse examples. But the\nsame issue applies to other clauses, not just to these new ones. 
And it\nrelies on the regular estimation producing a better estimate by chance.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 14 Jun 2021 18:28:01 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Sun, 13 Jun 2021 at 21:28, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Here is a slightly updated version of the patch\n>\n> The main issue I ran into\n> is the special case clauselist_selectivity, which does\n>\n> if (list_length(clauses) == 1)\n> return clause_selectivity_ext(...);\n>\n> which applies to cases like \"WHERE a < b\" which can now be handled by\n> extended statistics, thanks to this patch. But clause_selectivity_ext\n> only used to call restriction_selectivity for these clauses, which does\n> not use extended statistics, of course.\n>\n> I considered either getting rid of the special case, passing everything\n> through extended stats, including cases with a single clause. But that\n> ends up affecting e.g. OR clauses, so I tweaked clause_selectivity_ext a\n> bit, which seems like a better approach.\n\nYeah, I looked at this a few months ago, while looking at the other\nextended stats stuff, and I came to the same conclusion that solving\nthis issue by tweaking clause_selectivity_ext() is the best approach.\n\nI haven't looked at the patch in much detail yet, but I think the\nbasic approach looks OK.\n\nThere are a few comments that need updating, e.g.:\n - In statext_is_compatible_clause_internal(), before the \"if\n(is_opclause(clause))\" test.\n - The description of the arguments for examine_opclause_args().\n\nI wonder if \"expronleftp\" for examine_opclause_args() should be\n\"constonrightp\", or some such -- as it stands it's being set to false\nfor an Expr Op Expr clause, which doesn't seem right because there\n*is* an expression on the left.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 21 Jun 2021 15:39:08 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Sun, 13 Jun 2021 at 21:28, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Here is a slightly updated version of the patch\n>\n\nHi,\n\nI have looked at this in some more detail, and it all looks pretty\ngood, other than some mostly cosmetic stuff.\n\nThe new code in statext_is_compatible_clause_internal() is a little\nhard to follow because some of the comments aren't right (e.g. when\nchecking clause_expr2, it isn't an (Expr op Const) or (Const op Expr)\nas the comment says). Rather than trying to comment on each\nconditional branch, it might be simpler to just have a single\ncatch-all comment at the top, and also remove the \"return true\" in the\nmiddle, to make it something like:\n\n /*\n * Check Vars appearing on either side by recursing, and make a note of\n * any expressions.\n */\n if (IsA(clause_expr, Var))\n {\n if (!statext_is_compatible_clause_internal(...))\n return false;\n }\n else\n *exprs = lappend(*exprs, clause_expr);\n\n if (clause_expr2)\n {\n if (IsA(clause_expr2, Var))\n {\n if (!statext_is_compatible_clause_internal(...))\n return false;\n }\n else\n *exprs = lappend(*exprs, clause_expr2);\n }\n\n return true;\n\nIs the FIXME comment in examine_opclause_args() necessary? The check\nfor a single relation has already been done in\nclause[list]_selectivity_ext(), and I'm not sure what\nexamine_opclause_args() would do differently.\n\nIn mcv_get_match_bitmap(), perhaps do the RESULT_IS_FINAL() checks\nfirst in each loop.\n\nAlso in mcv_get_match_bitmap(), the 2 \"First check whether the\nconstant is below the lower boundary ...\" comments don't make any\nsense to me. 
Were those perhaps copied and pasted from somewhere else?\nThey should perhaps say \"Otherwise, compare the MCVItem with the\nconstant\" and \"Otherwise compare the values from the MCVItem using the\nclause operator\", or something like that.\n\nBut other than such cosmetic things, I think the patch is good, and\ngives some nice estimate improvements.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 5 Jul 2021 13:46:00 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "Hi,\n\nOn 7/5/21 2:46 PM, Dean Rasheed wrote:\n> On Sun, 13 Jun 2021 at 21:28, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Here is a slightly updated version of the patch\n>>\n> \n> Hi,\n> \n> I have looked at this in some more detail, and it all looks pretty\n> good, other than some mostly cosmetic stuff.\n> \n\nThanks for the review!\n\n> The new code in statext_is_compatible_clause_internal() is a little\n> hard to follow because some of the comments aren't right (e.g. when\n> checking clause_expr2, it isn't an (Expr op Const) or (Const op Expr)\n> as the comment says). Rather than trying to comment on each\n> conditional branch, it might be simpler to just have a single\n> catch-all comment at the top, and also remove the \"return true\" in the\n> middle, to make it something like:\n> \n> /*\n> * Check Vars appearing on either side by recursing, and make a note of\n> * any expressions.\n> */\n> if (IsA(clause_expr, Var))\n> {\n> if (!statext_is_compatible_clause_internal(...))\n> return false;\n> }\n> else\n> *exprs = lappend(*exprs, clause_expr);\n> \n> if (clause_expr2)\n> {\n> if (IsA(clause_expr2, Var))\n> {\n> if (!statext_is_compatible_clause_internal(...))\n> return false;\n> }\n> else\n> *exprs = lappend(*exprs, clause_expr2);\n> }\n> \n> return true;\n> \n\nI ended up doing something slightly different - examine_opclause_args\nnow \"returns\" a list of expressions, instead of explicitly setting two\nparameters. That means we can do a simple foreach() here, which seems\ncleaner. It means we have to extract the expressions from the list in a\ncouple places, but that seems acceptable. Do you agree?\n\nI also went through the comments and updated those that seemed wrong.\n\n> Is the FIXME comment in examine_opclause_args() necessary? 
The check\n> for a single relation has already been done in\n> clause[list]_selectivity_ext(), and I'm not sure what\n> examine_opclause_args() would do differently.\n> \n\nYeah, I came to the same conclusion.\n\n> In mcv_get_match_bitmap(), perhaps do the RESULT_IS_FINAL() checks\n> first in each loop.\n> \n\nThis is how master already does that now, and I wonder if it's done in\nthis order intentionally. It's not clear to me doing it in the other way\nwould be faster?\n\n> Also in mcv_get_match_bitmap(), the 2 \"First check whether the\n> constant is below the lower boundary ...\" comments don't make any\n> sense to me. Were those perhaps copied and pasted from somewhere else?\n> They should perhaps say \"Otherwise, compare the MCVItem with the\n> constant\" and \"Otherwise compare the values from the MCVItem using the\n> clause operator\", or something like that.\n> \n\nYeah, that's another bit that comes from current master - the patch just\nmakes a new copy of the comment. I agree it's bogus, Seems like a\nremainder of the original code which did various \"smart\" things we\nremoved over time. Will fix.\n\n> But other than such cosmetic things, I think the patch is good, and\n> gives some nice estimate improvements.\n> \n\nThanks, sounds good. I guess the last thing is maybe mentioning this in\nthe docs, adding an example etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 20 Jul 2021 20:28:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Tue, 20 Jul 2021 at 19:28, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> > The new code in statext_is_compatible_clause_internal() is a little\n> > hard to follow because some of the comments aren't right\n>\n> I ended up doing something slightly different - examine_opclause_args\n> now \"returns\" a list of expressions, instead of explicitly setting two\n> parameters. That means we can do a simple foreach() here, which seems\n> cleaner. It means we have to extract the expressions from the list in a\n> couple places, but that seems acceptable. Do you agree?\n\nYes, that looks much neater.\n\n> > In mcv_get_match_bitmap(), perhaps do the RESULT_IS_FINAL() checks\n> > first in each loop.\n>\n> This is how master already does that now, and I wonder if it's done in\n> this order intentionally. It's not clear to me doing it in the other way\n> would be faster?\n\nAh OK, it just felt more natural to do it the other way round. I\nsuppose though, that for the first clause, the is-final check isn't\ngoing to catch anything, whereas the is-null checks might. For the\nremaining clauses, it will depend on the data as to which way is\nfaster, but it probably isn't going to make any noticeable difference\neither way. So, although it initially seems a bit counter-intuitive,\nit's probably better the way it is.\n\n> I guess the last thing is maybe mentioning this in\n> the docs, adding an example etc.\n\nYeah, good idea.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 21 Jul 2021 14:02:50 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "> On Jul 20, 2021, at 11:28 AM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n> <0001-Handling-Expr-op-Expr-clauses-in-extended-stats-20210720.patch>\n\nHi Tomas,\n\nI tested this patch against master looking for types of clauses that uniformly get worse with the patch applied. I found some.\n\nThe tests are too large to attach, but the scripts that generate them are not. To perform the tests:\n\n\tgit checkout master\n\tperl ./gentest.pl > src/test/regress/sql/gentest.sql\n\tcat /dev/null > src/test/regress/expected/gentest.out\n\techo \"test: gentest\" >> src/test/regress/parallel_schedule\n\t./configure && make && make check\n\tcp src/test/regress/results/gentest.out src/test/regress/expected/gentest.out\n\tpatch -p 1 < 0001-Handling-Expr-op-Expr-clauses-in-extended-stats-20210720.patch\n\tmake check\n\tcat src/test/regress/regression.diffs | perl ./check.pl\n\nThis shows patterns of conditions that get worse, such as:\n\nbetter:0, worse:80: A < B and A <> A or not A < A\nbetter:0, worse:80: A < B and not A <= A or A <= A\nbetter:0, worse:80: A < B or A = A\nbetter:0, worse:80: A < B or A = A or not A >= A\nbetter:0, worse:80: A < B or A >= A\nbetter:0, worse:80: A < B or A >= A and not A <> A\nbetter:0, worse:80: A < B or not A < A\nbetter:0, worse:80: A < B or not A <> A\nbetter:0, worse:80: A < B or not A <> A or A <= A\nbetter:0, worse:80: A < B or not A >= A or not A < A\n\nIt seems things get worse when the conditions contain a column compared against itself. I suspect that is being handled incorrectly.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Aug 2021 12:19:55 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On 8/9/21 9:19 PM, Mark Dilger wrote:\n> \n> \n>> On Jul 20, 2021, at 11:28 AM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Tomas Vondra\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>> <0001-Handling-Expr-op-Expr-clauses-in-extended-stats-20210720.patch>\n> \n> Hi Tomas,\n> \n> I tested this patch against master looking for types of clauses that uniformly get worse with the patch applied. I found some.\n> \n> The tests are too large to attach, but the scripts that generate them are not. To perform the tests:\n> \n> \tgit checkout master\n> \tperl ./gentest.pl > src/test/regress/sql/gentest.sql\n> \tcat /dev/null > src/test/regress/expected/gentest.out\n> \techo \"test: gentest\" >> src/test/regress/parallel_schedule\n> \t./configure && make && make check\n> \tcp src/test/regress/results/gentest.out src/test/regress/expected/gentest.out\n> \tpatch -p 1 < 0001-Handling-Expr-op-Expr-clauses-in-extended-stats-20210720.patch\n> \tmake check\n> \tcat src/test/regress/regression.diffs | perl ./check.pl\n> \n> This shows patterns of conditions that get worse, such as:\n> \n> better:0, worse:80: A < B and A <> A or not A < A\n> better:0, worse:80: A < B and not A <= A or A <= A\n> better:0, worse:80: A < B or A = A\n> better:0, worse:80: A < B or A = A or not A >= A\n> better:0, worse:80: A < B or A >= A\n> better:0, worse:80: A < B or A >= A and not A <> A\n> better:0, worse:80: A < B or not A < A\n> better:0, worse:80: A < B or not A <> A\n> better:0, worse:80: A < B or not A <> A or A <= A\n> better:0, worse:80: A < B or not A >= A or not A < A\n> \n> It seems things get worse when the conditions contain a column compared against itself. I suspect that is being handled incorrectly.\n>\n\nThanks for this testing!\n\nI took a quick look, and I think this is mostly due to luck in how the \n(default) range estimates combine without and with extended statistics. 
\nConsider for example this simple example:\n\ncreate table t (a int, b int);\n\ninsert into t select mod(i,10), mod(i,20)\n from generate_series(1,1000000) s(i);\n\nWithout stats, the first clauses example is estimated like this:\n\n explain (timing off, analyze) select * from t\n where (A < B and A <> A) or not A < A;\n\n QUERY PLAN\n ----------------------------------------------------------\n Seq Scan on t (cost=0.00..21925.00 rows=554444 width=8)\n (actual rows=1000000 loops=1)\n Filter: (((a < b) AND (a <> a)) OR (a >= a))\n Planning Time: 0.054 ms\n Execution Time: 80.485 ms\n (4 rows)\n\nand with MCV on (a,b) it gets estimates like this:\n\n QUERY PLAN \n\n ----------------------------------------------------------\n Seq Scan on t (cost=0.00..21925.00 rows=333333 width=8)\n (actual rows=1000000 loops=1)\n Filter: (((a < b) AND (a <> a)) OR (a >= a))\n Planning Time: 0.152 ms\n Execution Time: 79.917 ms\n (4 rows)\n\nSo with the statistics, the estimate gets a bit worse. The reason is \nfairly simple - if you look at the two parts of the OR clause, we get this:\n\n clause actual no stats with stats\n ---------------------------------------------------------------\n (A < B and A <> A) 0 331667 1\n not (A < A) 1000000 333333 333333\n\nThis clearly shows that the first clause is clearly improved, while the \n(A < A) is estimated the same way, because the clause has a single Var \nso it's considered to be \"simple\" so we ignore the MCV selectivity and \njust use the simple_sel calculated by clause_selectivity_ext.\n\nAnd the 333333 and 331667 just happen to be closer to the actual row \ncount. But that's mostly by luck, clearly.\n\nBut now that I think about it, maybe the problem really is in how \nstatext_mcv_clauselist_selectivity treats this clause - the definition \nof \"simple\" clauses as \"has one attnum\" was appropriate when only \nclauses (Var op Const) were supported. 
But with (Var op Var) that's \nprobably not correct anymore.\n\nAnd indeed, commenting out the if condition on line 1933 (so ignoring \nsimple_sel) and that does improve the estimates for this query. But \nperhaps I'm missing something, this needs more thought.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Aug 2021 01:04:56 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Wed, 11 Aug 2021 at 00:05, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> So with the statistics, the estimate gets a bit worse. The reason is\n> fairly simple - if you look at the two parts of the OR clause, we get this:\n>\n> clause actual no stats with stats\n> ---------------------------------------------------------------\n> (A < B and A <> A) 0 331667 1\n> not (A < A) 1000000 333333 333333\n>\n> This clearly shows that the first clause is clearly improved, while the\n> (A < A) is estimated the same way, because the clause has a single Var\n> so it's considered to be \"simple\" so we ignore the MCV selectivity and\n> just use the simple_sel calculated by clause_selectivity_ext.\n>\n> And the 333333 and 331667 just happen to be closer to the actual row\n> count. But that's mostly by luck, clearly.\n>\n> But now that I think about it, maybe the problem really is in how\n> statext_mcv_clauselist_selectivity treats this clause - the definition\n> of \"simple\" clauses as \"has one attnum\" was appropriate when only\n> clauses (Var op Const) were supported. But with (Var op Var) that's\n> probably not correct anymore.\n>\n\nHmm, interesting. Clearly the fact that the combined estimate without\nextended stats was better was just luck, based on its large\noverestimate of the first clause. But it's also true that a (Var op\nVar) clause should not be treated as simple, because \"simple\" in this\ncontext is meant to be for clauses that are likely to be better\nestimated with regular stats, whereas in this case, extended stats\nwould almost certainly do better on the second clause.\n\nPerhaps the easiest way to identify simple clauses would be in\nstatext_is_compatible_clause(), rather than the way it's done now,\nbecause it has the relevant information at hand, so it could be made\nto return an extra flag.\n\nThis feels like rather an artificial example though. Is there any real\nuse for this sort of clause?\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 11 Aug 2021 13:08:55 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Aug 11, 2021, at 5:08 AM, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> \n> This feels like rather an artificial example though. Is there any real\n> use for this sort of clause?\n\nThe test generated random combinations of clauses and then checked if any had consistently worse performance. These came up. I don't know that they represent anything real.\n\nWhat was not random in the tests was the data in the tables. I've gotten curious if these types of clauses (with columns compared against themselves) would still be bad for random rather than orderly data sets. I'll go check....\n\n testing....\n\nWow. Randomizing the data makes the problems even more extreme. It seems my original test set was actually playing to this patch's strengths, not its weaknesses. I've changed the columns to double precision and filled the columns with random() data, where column1 gets random()^1, column2 gets random()^2, etc. So on average the larger numbered columns will be smaller, and the mcv list will be irrelevant, since values should not tend to repeat.\n\nOver all queries, 47791 have better estimates after the patch, but 34802 had worse estimates after the patch (with the remaining 17407 queries having roughly equal quality).\n\nThe worst estimates are still ones that have a column compared to itself:\n\nbetter:0, worse:33: A <= B or A <= A or A <= A\nbetter:0, worse:33: A <= B or A = A or not A <> A\nbetter:0, worse:33: A <= B or A >= A or not A <> A\nbetter:0, worse:33: A <> B or A <= A\nbetter:0, worse:33: A <> B or A <= A or A <> A\nbetter:0, worse:33: A <> B or A <= A or A >= A\nbetter:0, worse:33: A <> B or A <= A or not A = A\nbetter:0, worse:33: A <> B or A > A or not A < A\nbetter:0, worse:33: A <> B or A >= A\nbetter:0, worse:33: A <> B or A >= A and A <= A\nbetter:0, worse:33: A = B or not A > A or not A > A\nbetter:0, worse:33: A >= B or not A <> A or A = A\nbetter:0, worse:39: B <= A or B <= B or B <= B\nbetter:0, worse:39: B <= A or B = 
B or not B <> B\nbetter:0, worse:39: B <= A or B >= B or not B <> B\nbetter:0, worse:39: B <> A or B <= B\nbetter:0, worse:39: B <> A or B <= B or B <> B\nbetter:0, worse:39: B <> A or B <= B or B >= B\nbetter:0, worse:39: B <> A or B <= B or not B = B\nbetter:0, worse:39: B <> A or B > B or not B < B\nbetter:0, worse:39: B <> A or B >= B\nbetter:0, worse:39: B <> A or B >= B and B <= B\nbetter:0, worse:39: B = A or not B > B or not B > B\nbetter:0, worse:39: B >= A or not B <> B or B = B\n\nBut there are plenty that got worse without that, such as the following examples:\n\nbetter:25, worse:39: A < B and A < B or B > A\nbetter:10, worse:48: A < B and A < C\nbetter:10, worse:54: A < B and A < C or C > A\n\nI'll go test random data designed to have mcv lists of significance....\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 11 Aug 2021 07:51:36 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\nOn 8/11/21 2:08 PM, Dean Rasheed wrote:\n> On Wed, 11 Aug 2021 at 00:05, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> So with the statistics, the estimate gets a bit worse. The reason is\n>> fairly simple - if you look at the two parts of the OR clause, we get this:\n>>\n>> clause actual no stats with stats\n>> ---------------------------------------------------------------\n>> (A < B and A <> A) 0 331667 1\n>> not (A < A) 1000000 333333 333333\n>>\n>> This clearly shows that the first clause is clearly improved, while the\n>> (A < A) is estimated the same way, because the clause has a single Var\n>> so it's considered to be \"simple\" so we ignore the MCV selectivity and\n>> just use the simple_sel calculated by clause_selectivity_ext.\n>>\n>> And the 333333 and 331667 just happen to be closer to the actual row\n>> count. But that's mostly by luck, clearly.\n>>\n>> But now that I think about it, maybe the problem really is in how\n>> statext_mcv_clauselist_selectivity treats this clause - the definition\n>> of \"simple\" clauses as \"has one attnum\" was appropriate when only\n>> clauses (Var op Const) were supported. But with (Var op Var) that's\n>> probably not correct anymore.\n>>\n> \n> Hmm, interesting. Clearly the fact that the combined estimate without\n> extended stats was better was just luck, based on it's large\n> overestimate of the first clause. But it's also true that a (Var op\n> Var) clause should not be treated as simple, because \"simple\" in this\n> context is meant to be for clauses that are likely to be better\n> estimated with regular stats, whereas in this case, extended stats\n> would almost certainly do better on the second clause.\n\nI don't see why extended stats would do better on the second clause. I \nmean, if you have (A < A) then extended stats pretty much \"collapse\" \ninto per-column stats. We could get almost the same estimate on \nsingle-column MCV list, etc. 
The reason why that does not happen is that \nwe just treat it as a range clause, and assign it a default 33% estimate.\n\nBut we could make that a bit smarter, and assign better estimates to \nthose clauses\n\n (A < A) => 0.0\n (A = A) => 1.0\n (A <= A) => 1.0\n\nAnd that'd give us the same estimates, I think. Not sure that's worth \nit, because (A op A) clauses are probably very rare, OTOH it's cheap.\n\n> \n> Perhaps the easiest way to identify simple clauses would be in\n> statext_is_compatible_clause(), rather than the way it's done now,\n> because it has the relevant information at hand, so it could be made\n> to return an extra flag.\n> \n\nAgreed, that seems like a better place to fix this.\n\n> This feels like rather an artificial example though. Is there any real\n> use for this sort of clause?\n> \n\nTrue. It seems a bit artificial, which is understandable as it came from \na synthetic test generating all possible clauses. OTOH, fixing it seems \nfairly cheap ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Aug 2021 17:13:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Aug 11, 2021, at 7:51 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I'll go test random data designed to have mcv lists of significance....\n\nDone. The data for column_i is set to floor(random()^i*20). column_1 therefore is evenly distributed between 0..19, with successive columns weighted more towards smaller values.\n\nThis still gives (marginally) worse results than the original test I posted, but better than the completely random data from the last post. After the patch, 72294 estimates got better and 30654 got worse. The biggest losers from this data set are:\n\nbetter:0, worse:31: A >= B or A = A or not A = A\nbetter:0, worse:31: A >= B or A = A\nbetter:0, worse:31: A >= B or not A <> A\nbetter:0, worse:31: A >= A or A = B or not B = A\nbetter:0, worse:31: A >= B and not A < A or A = A\nbetter:0, worse:31: A = A or not A > B or B <> A\nbetter:0, worse:31: A >= B or not A <> A or not A >= A\nbetter:0, worse:32: B < A and B > C and not C < B <----\nbetter:1, worse:65: A <> C and A <= B <----\nbetter:0, worse:33: B <> A or B >= B\nbetter:0, worse:33: B <> A or B <= B\nbetter:0, worse:33: B <= A or B = B or not B > B\nbetter:0, worse:33: B <> A or not B >= B or not B < B\nbetter:0, worse:33: B = A or not B > B or B = B\nbetter:0, worse:44: A = B or not A > A or A = A\nbetter:0, worse:44: A <> B or A <= A\nbetter:0, worse:44: A <> B or not A >= A or not A < A\nbetter:0, worse:44: A <= B or A = A or not A > A\nbetter:0, worse:44: A <> B or A >= A\n\nOf which, a few do not contain columns compared against themselves, marked with <---- above.\n\nI don't really know what to make of these results. It doesn't bother me that any particular estimate gets worse after the patch. That's just the nature of estimating. But it does bother me a bit that some types of estimates consistently get worse. We should either show that my analysis is wrong about that, or find a way to address it to avoid performance regressions. 
If I'm right that there are whole classes of estimates that are made consistently worse, then it stands to reason some users will have those data distributions and queries, and could easily notice.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 11 Aug 2021 08:17:11 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\nOn 8/11/21 4:51 PM, Mark Dilger wrote:\n> \n> \n>> On Aug 11, 2021, at 5:08 AM, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>>\n>> This feels like rather an artificial example though. Is there any real\n>> use for this sort of clause?\n> \n> The test generated random combinations of clauses and then checked if any had consistently worse performance. These came up. I don't know that they represent anything real.\n> \n> What was not random in the tests was the data in the tables. I've gotten curious if these types of clauses (with columns compared against themselves) would still be bad for random rather than orderly data sets. I'll go check....\n> \n> testing....\n> \n> Wow. Randomizing the data makes the problems even more extreme. It seems my original test set was actually playing to this patch's strengths, not its weaknesses. I've changed the columns to double precision and filled the columns with random() data, where column1 gets random()^1, column2 gets random()^2, etc. 
So on average the larger numbered columns will be smaller, and the mcv list will be irrelevant, since values should not tend to repeat.\n> \n> Over all queries, 47791 have better estimates after the patch, but 34802 had worse estimates after the patch (with the remaining 17407 queries having roughly equal quality).\n> \n> The worst estimates are still ones that have a column compared to itself:\n> \n> better:0, worse:33: A <= B or A <= A or A <= A\n> better:0, worse:33: A <= B or A = A or not A <> A\n> better:0, worse:33: A <= B or A >= A or not A <> A\n> better:0, worse:33: A <> B or A <= A\n> better:0, worse:33: A <> B or A <= A or A <> A\n> better:0, worse:33: A <> B or A <= A or A >= A\n> better:0, worse:33: A <> B or A <= A or not A = A\n> better:0, worse:33: A <> B or A > A or not A < A\n> better:0, worse:33: A <> B or A >= A\n> better:0, worse:33: A <> B or A >= A and A <= A\n> better:0, worse:33: A = B or not A > A or not A > A\n> better:0, worse:33: A >= B or not A <> A or A = A\n> better:0, worse:39: B <= A or B <= B or B <= B\n> better:0, worse:39: B <= A or B = B or not B <> B\n> better:0, worse:39: B <= A or B >= B or not B <> B\n> better:0, worse:39: B <> A or B <= B\n> better:0, worse:39: B <> A or B <= B or B <> B\n> better:0, worse:39: B <> A or B <= B or B >= B\n> better:0, worse:39: B <> A or B <= B or not B = B\n> better:0, worse:39: B <> A or B > B or not B < B\n> better:0, worse:39: B <> A or B >= B\n> better:0, worse:39: B <> A or B >= B and B <= B\n> better:0, worse:39: B = A or not B > B or not B > B\n> better:0, worse:39: B >= A or not B <> B or B = B\n> \n\nThe other interesting thing all those clauses have in common is that \nthey're OR clauses. And we handle that a bit differently. 
But I think \nthe \"strange\" clauses with the same Var on both sides is the main issue, \nand not detecting them as \"simple\" clauses should fix that.\n\n> But there are plenty that got worse without that, such as the following examples:\n> \n> better:25, worse:39: A < B and A < B or B > A\n> better:10, worse:48: A < B and A < C\n> better:10, worse:54: A < B and A < C or C > A\n> \n> I'll go test random data designed to have mcv lists of significance....\n> \n\nHard to say without having a look at the data set, but there'll always \nbe cases where the extended stats perform a bit worse, due to (a) luck \nand (b) the stats covering only small fraction of the table.\n\nBut of course, it's worth investigating the suspicious cases.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Aug 2021 17:18:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\nOn 8/11/21 4:51 PM, Mark Dilger wrote:\n> \n> \n>> On Aug 11, 2021, at 5:08 AM, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>>\n>> This feels like rather an artificial example though. Is there any real\n>> use for this sort of clause?\n> \n> The test generated random combinations of clauses and then checked if\n> any had consistently worse performance. These came up. I don't\n> know that they represent anything real.\n> \n> What was not random in the tests was the data in the tables. I've \n> gotten curious if these types of clauses (with columns compared\n> against themselves) would still be bad for random rather than orderly\n> data sets. I'll go check....\n>\n> testing....\n> \n> Wow. Randomizing the data makes the problems even more extreme. It\nseems my original test set was actually playing to this patch's\nstrengths, not its weaknesses. I've changed the columns to double\nprecision and filled the columns with random() data, where column1 gets\nrandom()^1, column2 gets random()^2, etc. So on average the larger\nnumbered columns will be smaller, and the mcv list will be irrelevant,\nsince values should not tend to repeat.\n> \n\nI tried using the same randomized data set, i.e. essentially\n\n insert into t\n select random(), pow(random(), 2), pow(random(), 3), pow(random(), 4)\n from generate_series(1,1000000) s(i);\n\n create statistics s (mcv) on a, b, c from t;\n\nBut I don't see any difference compared to the estimates without \nextended statistics, which is not surprising because there should be no \nMCV list built. So I'm a bit puzzled about the claim that random data \nmake the problems more extreme. Can you explain?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Aug 2021 19:38:24 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\nOn 8/11/21 5:17 PM, Mark Dilger wrote:\n> \n> \n>> On Aug 11, 2021, at 7:51 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>\n>> I'll go test random data designed to have mcv lists of significance....\n> \n> Done. The data for column_i is set to floor(random()^i*20). \n> column_1 therefore is evenly distributed between 0..19, with\n> successive columns weighted more towards smaller values.\n>\n> This still gives (marginally) worse results than the original test I\n> posted, but better than the completely random data from the last post.\n> After the patch, 72294 estimates got better and 30654 got worse. The\n> biggest losers from this data set are:\n> \n> better:0, worse:31: A >= B or A = A or not A = A\n> better:0, worse:31: A >= B or A = A\n> better:0, worse:31: A >= B or not A <> A\n> better:0, worse:31: A >= A or A = B or not B = A\n> better:0, worse:31: A >= B and not A < A or A = A\n> better:0, worse:31: A = A or not A > B or B <> A\n> better:0, worse:31: A >= B or not A <> A or not A >= A\n> better:0, worse:32: B < A and B > C and not C < B <----\n> better:1, worse:65: A <> C and A <= B <----\n> better:0, worse:33: B <> A or B >= B\n> better:0, worse:33: B <> A or B <= B\n> better:0, worse:33: B <= A or B = B or not B > B\n> better:0, worse:33: B <> A or not B >= B or not B < B\n> better:0, worse:33: B = A or not B > B or B = B\n> better:0, worse:44: A = B or not A > A or A = A\n> better:0, worse:44: A <> B or A <= A\n> better:0, worse:44: A <> B or not A >= A or not A < A\n> better:0, worse:44: A <= B or A = A or not A > A\n> better:0, worse:44: A <> B or A >= A\n>\n> Of which, a few do not contain columns compared against themselves,\n> marked with <---- above.\n>\n> I don't really know what to make of these results. It doesn't \n> bother me that any particular estimate gets worse after the patch.\n> That's just the nature of estimating. But it does bother me a bit\n> that some types of estimates consistently get worse. 
We should\n> either show that my analysis is wrong about that, or find a way to\n> address it to avoid performance regressions. If I'm right that there\n> are whole classes of estimates that are made consistently worse, then\n> it stands to reason some users will have those data distributions and\n> queries, and could easily notice.\n\nI'm not quite sure that's really a problem. Extended statistics are \nmeant for correlated columns, and it's mostly expected the estimates may \nbe a bit worse for random / independent data. The idea is mostly that \nstatistics will be created only for correlated columns, in which case it \nshould improve the estimates. I'd be way more concerned if you observed \nconsistently worse estimates on such data set.\n\nOf course, there may be errors - the incorrect handling of (A op A) is \nan example of such issue, probably.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Aug 2021 00:00:12 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Aug 11, 2021, at 10:38 AM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> So I'm a bit puzzled about the claim that random data make the problems more extreme. Can you explain?\n\nHmm... you appear to be right.\n\nI changed the gentest.pl script to fill the tables with randomized data, but the random data is being regenerated each test run (since the calls to random() are in the gentest.sql file). Adding an explicit setseed() call in the test to make sure the data is the same before and after applying your patch eliminates the differences.\n\nSo there are three tests here. The first tests deterministic orderly data. The second tests deterministic random data without repeats and hence without a meaningful MCV list. The third tests deterministic random data with rounding into twenty buckets skewed towards lower numbered buckets, and hence with both repeats and a meaningful MCV list.\n\nThe original test set:\n\n\tTOTAL:\n\t\tbetter: 77827\n\t\tworse: 12317\n\nThe random test set, with setseed() calls to make it deterministic:\n\n\tTOTAL:\n\t\tbetter: 49708\n\t\tworse: 19393\n\nThe random test set, with setseed() calls to make it deterministic, plus rounding into buckets:\n\n\tTOTAL:\n\t\tbetter: 81764\n\t\tworse: 19594\n\nOnce the data is made deterministic, the third set looks slightly better than the first, rather than slightly worse. But almost 20% of the query types still look worse after applying the patch. I'm going to dig deeper into those to see if that conclusion survives bumping up the size of the dataset. It will take quite some time to run the tests with a huge dataset, but I don't see how else to investigate this.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 11 Aug 2021 15:02:29 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\nOn 8/12/21 12:02 AM, Mark Dilger wrote:\n> \n> ...\n> \n> Once the data is made deterministic, the third set looks slightly\n> better than the first, rather than slightly worse. But almost 20% of\n> the query types still look worse after applying the patch. I'm going to\n> dig deeper into those to see if that conclusion survives bumping up the\n> size of the dataset. It will take quite some time to run the tests with\n> a huge dataset, but I don't see how else to investigate this.\n> \n\nAs I said in my last reply, I'm not sure it's particularly useful to \nlook at overall results from data sets with independent columns. That's \nnot what extended statistics are for, and people should not create them \nin those cases ...\n\nMaybe it'd be better to focus on cases with the largest difference in \nestimates, and investigate those more closely.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Aug 2021 00:45:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Aug 11, 2021, at 3:45 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> As I said in my last reply, I'm not sure it's particularly useful to look at overall results from data sets with independent columns. That's not what extended statistics are for, and people should not create them in those cases ...\n\nWe sent our last emails more or less simultaneously. I'm not ignoring your email; I just hadn't seen it yet when I sent mine.\n\n> Maybe it'd be better to focus on cases with the largest difference in estimates, and investigate those more closely.\n\nYeah, I'm working on a correlated stats test as I write this. I'll get back to you when I have results.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 11 Aug 2021 15:48:05 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On 8/12/21 12:48 AM, Mark Dilger wrote:\n> \n> \n>> On Aug 11, 2021, at 3:45 PM, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>> \n>> As I said in my last reply, I'm not sure it's particularly useful\n>> to look at overall results from data sets with independent columns.\n>> That's not what extended statistics are for, and people should not\n>> create them in those cases ...\n> \n> We sent our last emails more or less simultaneously. I'm not\n> ignoring your email; I just hadn't seen it yet when I sent mine.\n> \n\nApologies, I didn't mean to imply you're ignoring my messages.\n\n>> Maybe it'd be better to focus on cases with the largest difference\n>> in estimates, and investigate those more closely.\n> \n> Yeah, I'm working on a correlated stats test as I write this. I'll\n> get back to you when I have results.\n> \n\nCool, thanks!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Aug 2021 01:06:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Aug 11, 2021, at 3:48 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I'm working on a correlated stats test as I write this. I'll get back to you when I have results.\n\nOk, the tests showed no statistically significant regressions. All tests included the same sorts of whereclause expressions as used in the tests from yesterday's email.\n\nThe first test created loosely correlated data and found no significant row estimate improvements or regressions.\n\nThe second test of more tightly correlated data showed a row estimate improvement overall, with no class of whereclause showing an estimate regression. I think the apparent regressions from yesterday were just statistical noise.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 12 Aug 2021 16:06:36 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "Hi Mark,\n\nThis thread inspired me to do something fairly similar - a generator \nthat generates queries of varying complexity, executes them on table \nwith and without extended statistics. I've been thinking about that \nbefore, but this finally pushed me to do that, and some of the results \nare fairly interesting ...\n\nI've pushed everything (generator and results) to this github repo:\n\n https://github.com/tvondra/stats-test\n\nwith a summary of all results here:\n\n https://github.com/tvondra/stats-test/blob/master/results.md\n\n\nSome basic facts about the generator.py (see query_generator):\n\n* It's using a fixed seed to make it deterministic.\n\n* A small fraction of generated queries is sampled and executed (5%).\n\n* Thanks to a fixed seed we generate/sample the same set of queries for \ndifferent runs, which allows us to compare runs easily.\n\n* The queries use 2 - 5 clauses, either (Var op Const) or (Var op Var).\n\n* The operators are the usual equality/inequality ones.\n\n* The clauses are combined using AND/OR (also randomly picked).\n\n* There's a random set of parens added, to vary the operator precedence \n(otherwise it'd be driven entirely by AND/OR).\n\n* There are two datasets - a random and correlated one, with different \nnumber of distinct values in each column (10, 100, 1000, 10000).\n\n* The statistics target is set to 10, 100, 1000, 10000.\n\n\nIt's a bit hacky, with various bits hard-coded at the moment. But it \ncould be extended to do other stuff fairly easily, I think.\n\nAnyway, the repository contains results for three cases:\n\n1) master\n2) patched: master with the (Var op Var) patch\n3) fixed: patched, with a fix for \"simple\" clauses (a crude patch)\n\nAnd for each case we have three row counts:\n\n* actual (from explain analyze)\n* estimate without extended stats\n* estimate with extended stats\n\nAnd then we can calculate \"estimation error\" as\n\n estimate / actual\n\nboth with and without statistics. 
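Roughly speaking, the error pair for each sampled query is computed like this (a simplified sketch, not the exact code in generator.py - the helper name is made up):

```python
def estimation_errors(actual, est_before, est_after):
    # Clamp to 1 row so queries matching nothing don't divide by zero;
    # error > 1.0 means over-estimate, error < 1.0 means under-estimate.
    actual = max(actual, 1)
    return (est_before / actual, est_after / actual)

# e.g. a query returning 10 rows, estimated at 1000 rows before and
# 100 rows after, becomes the scatter-plot point (100.0, 10.0)
print(estimation_errors(10, 1000, 100))  # -> (100.0, 10.0)
```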
Results for two cases can be plotted \nas a scatter plot, with the two estimation errors as (x,y) values. The \nidea is that this shows how a patch affects estimates - a point (100,10) \nmeans that it was 100x over-estimated, and with the patch it's just 10x, \nand similarly for other points.\n\nThis is what the charts at\n\n https://github.com/tvondra/stats-test/blob/master/results.md\n\ndo, for each combination of parameters (dataset, statistics target and \nnumber of distinct values). There's one chart without extended stats, \none with extended stats.\n\nAn \"ideal\" chart would look like a single point (1,1), which means \n\"accurate estimates without/with patch\", or (?,1) which means \"poor \nestimates before, accurate estimates now\". Diagonal means \"no change\".\n\nIn principle, we expect the charts to look like this:\n\n* random: diagonal charts, because there should be no extended stats \nbuilt, hence no impact on estimates is expected\n\n* correlated: getting closer to 1.0, which looks like a horizontal line \nin the chart\n\nConsider for example this:\n\nhttps://github.com/tvondra/stats-test/raw/master/correlated-1000-10.png\n\nwhich clearly shows that the first patch is almost exactly the same as \nmaster, while with the fix the estimates improve significantly (and are \nalmost perfect), at least with the statistics.\n\nWithout stats there's a bunch of queries that suddenly get from \n\"perfect\" to much worse (looks like a vertical line on the left chart).\n\nBut there are other \"strange\" cases with \"interesting patterns\", like \nfor example\n\n* \nhttps://raw.githubusercontent.com/tvondra/stats-test/master/correlated-100-100.png\n\n* \nhttps://raw.githubusercontent.com/tvondra/stats-test/master/correlated-1000-100.png\n\n* \nhttps://raw.githubusercontent.com/tvondra/stats-test/master/random-10000-10.png\n\nThis likely shows the patches are a significant improvement for some \nqueries (either getting better than master, or even making the 
estimates \npretty accurate). But it's probably worth looking into the queries that \ngot worse, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 18 Aug 2021 12:43:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Aug 18, 2021, at 3:43 AM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> I've pushed everything (generator and results) to this github repo\n\nThanks for the link. I took a very brief look. Perhaps we can combine efforts. I need to make progress on several other patches first, but hope to get back to this.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 18 Aug 2021 06:16:45 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On 8/18/21 3:16 PM, Mark Dilger wrote:\n> \n> \n>> On Aug 18, 2021, at 3:43 AM, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>> \n>> I've pushed everything (generator and results) to this github repo\n> \n> Thanks for the link. I took a very brief look. Perhaps we can\n> combine efforts. I need to make progress on several other patches\n> first, but hope to get back to this.\n> \n\nSure - it'd be great to combine efforts. That's why I posted my scripts \n& results. I understand there's plenty other work for both of us, so \ntake your time - no rush.\n\nAfter looking at this for a while, it's clear the main issue is handling \nof clauses referencing the same Var twice, like for example (a = a) or \n(a < a). But it's not clear to me if this is something worth fixing, or \nif extended statistics is the right place to do it.\n\nIf those clauses are worth the effort, why not to handle them better \neven without extended statistics? We can easily evaluate these clauses \non per-column MCV, because they only reference a single Var.\n\nIt'd be rather strange if for example\n\n select * from t where (a < a)\n\nis mis-estimated simply because it can't use extended statistics \n(there's just a single Var, so we won't consider extended stats), while\n\n select * from t where (a < a) and b = 1\n\nsuddenly gets much better thanks to extended stats on (a,b), even when \n(a,b) are perfectly independent.\n\nSo I think we better make eqsel/ineqsel smarter about estimating those \nclauses, assuming we consider them important enough.\n\n\nI think we can either reject the patch, which would mean we don't \nconsider (Var op Var) clauses to be common/important enough. Or we need \nto improve the existing selectivity functions (even those without \nextended statistics) to handle those clauses in a smarter way. 
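To illustrate the point (just a toy Python sketch, not the actual C code - the MCV representation and function name here are made up): for a clause with the same Var on both sides, (value op value) comes out the same for every non-null value, so evaluating it against the per-column MCV list degenerates to either 0 or (1 - nullfrac):

```python
import operator

# Toy model: an MCV list is [(value, frequency), ...]; "op" is the
# comparison operator of the clause (Var op Var, same Var on both sides).
OPS = {"=": operator.eq, "<>": operator.ne, "<": operator.lt,
       "<=": operator.le, ">": operator.gt, ">=": operator.ge}

def same_var_selectivity(mcv, nullfrac, op):
    cmp = OPS[op]
    # Frequency of MCV values for which (value op value) holds; NULLs
    # never satisfy the comparison, so nullfrac is excluded entirely.
    match = sum(freq for value, freq in mcv if cmp(value, value))
    # For a self-comparison every non-null value behaves the same way,
    # so the non-MCV remainder either all matches or none of it does.
    remainder = 1.0 - nullfrac - sum(freq for _, freq in mcv)
    return match + (remainder if cmp(0, 0) else 0.0)

print(round(same_var_selectivity([(1, 0.3), (2, 0.2)], 0.1, "="), 6))  # 0.9
print(round(same_var_selectivity([(1, 0.3), (2, 0.2)], 0.1, "<"), 6))  # 0.0
```

So the per-column statistics we already have would be quite enough for this case.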
Otherwise \nthere'd be strange/surprising inconsistencies.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Aug 2021 20:20:50 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Aug 20, 2021, at 11:20 AM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> I think we can either reject the patch, which would mean we don't consider (Var op Var) clauses to be common/important enough. Or we need to improve the existing selectivity functions (even those without extended statistics) to handle those clauses in a smarter way. Otherwise there'd be strange/surprising inconsistencies.\n\nFor datatypes with very few distinct values (bool, some enums, etc.) keeping an mcv list of (a,b) pairs seems helpful. The patch may be worth keeping for such cases. In other cases, I don't much see the point.\n\nIt seems that sampling the fraction of rows where (A op B) is true for any given op would be more helpful.\n \n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 20 Aug 2021 11:36:56 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Fri, Aug 20, 2021 at 2:21 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> After looking at this for a while, it's clear the main issue is handling\n> of clauses referencing the same Var twice, like for example (a = a) or\n> (a < a). But it's not clear to me if this is something worth fixing, or\n> if extended statistics is the right place to do it.\n>\n> If those clauses are worth the effort, why not to handle them better\n> even without extended statistics? We can easily evaluate these clauses\n> on per-column MCV, because they only reference a single Var.\n\n+1.\n\nIt seems to me that what we ought to do is make \"a < a\", \"a > a\", and\n\"a != 0\" all have an estimate of zero, and make \"a <= a\", \"a >= a\",\nand \"a = a\" estimate 1-nullfrac. The extended statistics mechanism can\njust ignore the first three types of clauses; the zero estimate has to\nbe 100% correct. It can't necessarily ignore the second three cases,\nthough. If the query says \"WHERE a = a AND b = 1\", \"b = 1\" may be more\nor less likely given that a is known to be not null, and extended\nstatistics can tell us that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Aug 2021 14:56:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On 8/18/21 12:43 PM, Tomas Vondra wrote:\n> Hi Mark,\n> \n> This thread inspired me to do something fairly similar - a generator \n> that generates queries of varying complexity, executes them on table \n> with and without extended statistics. I've been thinking about that \n> before, but this finally pushed me to do that, and some of the results \n> are fairly interesting ...\n> \n> I've pushed everything (generator and results) to this github repo:\n> \n> https://github.com/tvondra/stats-test\n> \n> with a summary of all results here:\n> \n> https://github.com/tvondra/stats-test/blob/master/results.md\n> \n\nFWIW I've pushed slightly reworked scripts and results - there are \nresults from two machines - xeon and i5. Xeon is mostly the same as \nbefore, with some minor fixes, while i5 does not allow clauses \nreferencing the same column twice (per discussion in this thread).\n\nI think there was a bug in the original plot script, combining incorrect \ndata series in some cases, causing (at least) some of the strange \npatterns mentioned.\n\nI've also made the charts easier to read by splitting the cases into \nseparate plots and using transparency. I've also added the png version \nback, because plotting the .svg is quite slow.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Aug 2021 21:25:22 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\nOn 8/20/21 8:56 PM, Robert Haas wrote:\n> On Fri, Aug 20, 2021 at 2:21 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> After looking at this for a while, it's clear the main issue is handling\n>> of clauses referencing the same Var twice, like for example (a = a) or\n>> (a < a). But it's not clear to me if this is something worth fixing, or\n>> if extended statistics is the right place to do it.\n>>\n>> If those clauses are worth the effort, why not to handle them better\n>> even without extended statistics? We can easily evaluate these clauses\n>> on per-column MCV, because they only reference a single Var.\n> \n> +1.\n> \n> It seems to me that what we ought to do is make \"a < a\", \"a > a\", and\n> \"a != 0\" all have an estimate of zero, and make \"a <= a\", \"a >= a\",\n> and \"a = a\" estimate 1-nullfrac. The extended statistics mechanism can\n> just ignore the first three types of clauses; the zero estimate has to\n> be 100% correct. It can't necessarily ignore the second three cases,\n> though. If the query says \"WHERE a = a AND b = 1\", \"b = 1\" may be more\n> or less likely given that a is known to be not null, and extended\n> statistics can tell us that.\n> \n\nYeah, I agree this seems like the right approach (except I guess you \nmeant \"a != a\" and not \"a != 0\"). Assuming we want to do something about \nthese clauses at all - I'm still wondering if those clauses are common \nin practice or just synthetic.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Aug 2021 21:32:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Fri, Aug 20, 2021 at 3:32 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Yeah, I agree this seems like the right approach (except I guess you\n> meant \"a != a\" and not \"a != 0\").\n\nErr, yes.\n\n> Assuming we want to do something about\n> these clauses at all - I'm still wondering if those clauses are common\n> in practice or just synthetic.\n\nWell, they are certainly less common than some things, but query\ngenerators do a lot of wonky things.\n\nAlso, as a practical matter, it might be cheaper to do something about\nthem than to not do something about them. I don't really understand\nthe mechanism of operation of the patch, but I guess if somebody\nwrites \"WHERE a = b\", one thing you could do would be check whether\nany of the MCVs for a are also MCVs for b, and if so you could\nestimate something on that basis. If you happened to have extended\nstatistics for (a, b) then I guess you could do even better using, uh,\nmath, or something. But all of that sounds like hard work, and\nchecking whether \"a\" happens to be the same as \"b\" sounds super-cheap\nby comparison.\n\nIf, as normally will be the case, the two sides are not the same, you\nhaven't really lost anything, because the expenditure of cycles to\ntest varno and varattno for equality must be utterly trivial in\ncomparison with fetching stats data and looping over MCV lists and\nthings. But if on occasion you find out that they are the same, then\nyou win! You can give a more accurate estimate with less computational\nwork.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Aug 2021 16:06:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "Hi,\n\nThe attached patch series is modified to improve estimates for these \nspecial clauses (Var op Var with the same var on both sides) without \nextended statistics. This is done in 0001, and it seems fairly simple \nand cheap.\n\nThe 0002 part is still the same patch as on 2021/07/20. Part 0003 fixes \nhandling of those clauses so that we don't treat them as simple, but it \ndoes that by tweaking statext_is_compatible_clause(), as suggested by \nDean. It does work, although it's a bit more invasive than simply \nchecking the shape of clause in statext_mcv_clauselist_selectivity.\n\nI do have results for the randomly generated queries, and this does \nimprove the situation a lot - pretty much all the queries with (a=a) or \n(a<a) clauses had terrible estimates, and this fixes that.\n\nThat being said, I'm still not sure if this is an issue in real-world \napplications, or whether we're solving something because of synthetic \nqueries generated by the randomized generator. But the checks seem \nfairly cheap, so maybe it doesn't matter too much.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 28 Aug 2021 15:52:50 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Sat, Aug 28, 2021 at 6:53 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> The attached patch series is modified to improve estimates for these\n> special clauses (Var op Var with the same var on both sides) without\n> extended statistics. This is done in 0001, and it seems fairly simple\n> and cheap.\n>\n> The 0002 part is still the same patch as on 2021/07/20. Part 0003 fixes\n> handling of those clauses so that we don't treat them as simple, but it\n> does that by tweaking statext_is_compatible_clause(), as suggested by\n> Dean. It does work, although it's a bit more invasive than simply\n> checking the shape of clause in statext_mcv_clauselist_selectivity.\n>\n> I do have results for the randomly generated queries, and this does\n> improve the situation a lot - pretty much all the queries with (a=a) or\n> (a<a) clauses had terrible estimates, and this fixes that.\n>\n> That being said, I'm still not sure if this is an issue in real-world\n> applications, or whether we're solving something because of synthetic\n> queries generated by the randomized generator. But the checks seem\n> fairly cheap, so maybe it doesn't matter too much.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\nHi,\nFor 0001-Improve-estimates-for-Var-op-Var-with-the-s-20210828.patch :\n\n+ * form (variable op variable) with the save variable on both sides.\n\ntypo: save -> same\n\nCheers\n",
"msg_date": "Sat, 28 Aug 2021 07:06:13 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Aug 28, 2021, at 6:52 AM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> Part 0003 fixes handling of those clauses so that we don't treat them as simple, but it does that by tweaking statext_is_compatible_clause(), as suggested by Dean.\n\nFunction examine_opclause_args() doesn't set issimple to anything in the IsA(rightop, Const) case, but assigns *issimplep = issimple at the bottom. The compiler is not complaining about using a possibly uninitialized variable, but if I change the \"return true\" on the very next line to \"return issimple\", the compiler complains quite loudly.\n\n\nSome functions define bool *issimple, others bool *issimplep and bool issimple. You might want to standardize the naming.\n\nIt's difficult to know what \"simple\" means in extended_stats.c. There is no file-global comment explaining the concept, and functions like compare_scalars_simple don't have correlates named compare_scalars_complex or such, so the reader cannot infer by comparison what the difference might be between a \"simple\" case and some non-\"simple\" case. The functions' issimple (or issimplep) argument are undocumented.\n\nThere is a comment:\n\n/*\n * statext_mcv_clauselist_selectivity\n * Estimate clauses using the best multi-column statistics.\n ....\n *\n * - simple selectivity: Computed without extended statistics, i.e. as if the\n * columns/clauses were independent.\n *\n ....\n */\n\nbut it takes a while to find if you search for \"issimple\".\n\n\nIn both scalarineqsel_wrapper() and eqsel_internal(), the call to matching_restriction_variables() should usually return false, since comparing a variable to itself is an unusual case. The next call is to get_restriction_variable(), which repeats the work of examining the left and right variables. 
So in almost all cases, after throwing away the results of:\n\n examine_variable(root, left, varRelid, &ldata);\n examine_variable(root, right, varRelid, &rdata);\n\nperformed in matching_restriction_variables(), we'll do exactly the same work again (with one variable named differently) in get_restriction_variable():\n\n examine_variable(root, left, varRelid, vardata);\n examine_variable(root, right, varRelid, &rdata);\n\nThat'd be fine if example_variable() were a cheap function, but it appears not to be. Do you think you could save the results rather than recomputing them? It's a little messy, since these are the only two functions out of about ten which follow this pattern, so you'd have to pass NULLs into get_restriction_variable() from the other eight callers, but it still looks like that would be a win. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sat, 28 Aug 2021 09:30:12 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Sat, Aug 28, 2021 at 9:30 AM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Aug 28, 2021, at 6:52 AM, Tomas Vondra <tomas.vondra@enterprisedb.com>\n> wrote:\n> >\n> > Part 0003 fixes handling of those clauses so that we don't treat them as\n> simple, but it does that by tweaking statext_is_compatible_clause(), as\n> suggested by Dean.\n>\n> Function examine_opclause_args() doesn't set issimple to anything in the\n> IsA(rightop, Const) case, but assigns *issimplep = issimple at the bottom.\n> The compiler is not complaining about using a possibly uninitialized\n> variable, but if I change the \"return true\" on the very next line to\n> \"return issimple\", the compiler complains quite loudly.\n>\n>\n> Some functions define bool *issimple, others bool *issimplep and bool\n> issimple. You might want to standardize the naming.\n>\n> It's difficult to know what \"simple\" means in extended_stats.c. There is\n> no file-global comment explaining the concept, and functions like\n> compare_scalars_simple don't have correlates named compare_scalars_complex\n> or such, so the reader cannot infer by comparison what the difference might\n> be between a \"simple\" case and some non-\"simple\" case. The functions'\n> issimple (or issimplep) argument are undocumented.\n>\n> There is a comment:\n>\n> /*\n> * statext_mcv_clauselist_selectivity\n> * Estimate clauses using the best multi-column statistics.\n> ....\n> *\n> * - simple selectivity: Computed without extended statistics, i.e. as if\n> the\n> * columns/clauses were independent.\n> *\n> ....\n> */\n>\n> but it takes a while to find if you search for \"issimple\".\n>\n>\n> In both scalarineqsel_wrapper() and eqsel_internal(), the call to\n> matching_restriction_variables() should usually return false, since\n> comparing a variable to itself is an unusual case. The next call is to\n> get_restriction_variable(), which repeats the work of examining the left\n> and right variables. 
So in almost all cases, after throwing away the\n> results of:\n>\n> examine_variable(root, left, varRelid, &ldata);\n> examine_variable(root, right, varRelid, &rdata);\n>\n> performed in matching_restriction_variables(), we'll do exactly the same\n> work again (with one variable named differently) in\n> get_restriction_variable():\n>\n> examine_variable(root, left, varRelid, vardata);\n> examine_variable(root, right, varRelid, &rdata);\n>\n> That'd be fine if example_variable() were a cheap function, but it appears\n> not to be. Do you think you could save the results rather than recomputing\n> them? It's a little messy, since these are the only two functions out of\n> about ten which follow this pattern, so you'd have to pass NULLs into\n> get_restriction_variable() from the other eight callers, but it still looks\n> like that would be a win.\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n> Hi,\nI wonder if the queries with (a=a) or (a<a) clauses are worth this\nadditional complexity to address.\nHas anyone seen such clause in production queries ?\n\nI would think the randomly generated queries should be narrowed a bit to\nexclude such cases where the result of the clause is known regardless of\nthe underlying data.\n\nCheers",
"msg_date": "Sat, 28 Aug 2021 10:18:06 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Aug 28, 2021, at 10:18 AM, Zhihong Yu <zyu@yugabyte.com> wrote:\n> \n> I wonder if the queries with (a=a) or (a<a) clauses are worth this additional complexity to address.\n> Has anyone seen such clause in production queries ?\n\nYou might expect clauses like WHERE FALSE to be unusual, but that phrase gets added quite a lot by query generators. Somebody could add \"WHERE a != a\" in a misguided attempt to achieve the same thing.\n\nI wouldn't be terribly surprised if query generators might generate queries with a variable number of tables joined together with comparisons between the joined tables, and in the degenerate case of only one table end up with a table column compared against itself.\n\nYou could argue that those people need to fix their queries/generators to not do this sort of thing, but the end user affected by such queries may have little ability to fix it.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sat, 28 Aug 2021 10:40:08 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On 8/28/21 6:30 PM, Mark Dilger wrote:\n> \n> \n>> On Aug 28, 2021, at 6:52 AM, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>> \n>> Part 0003 fixes handling of those clauses so that we don't treat\n>> them as simple, but it does that by tweaking\n>> statext_is_compatible_clause(), as suggested by Dean.\n> \n> Function examine_opclause_args() doesn't set issimple to anything in\n> the IsA(rightop, Const) case, but assigns *issimplep = issimple at\n> the bottom. The compiler is not complaining about using a possibly\n> uninitialized variable, but if I change the \"return true\" on the very\n> next line to \"return issimple\", the compiler complains quite loudly.\n> \n\nYeah, true. Thanks for noticing this was a bug - I forgot to set the \nissimple variable in the first branch.\n\n> \n> Some functions define bool *issimple, others bool *issimplep and bool\n> issimple. You might want to standardize the naming.\n> \n\nI think the naming is standard with respect to the surrounding code. If \nthe other parameters use \"p\" to mark \"pointer\" then issimplep is used, \nbut in other places it's just \"issimple\". IMHO this is appropriate.\n\n> It's difficult to know what \"simple\" means in extended_stats.c.\n> There is no file-global comment explaining the concept, and functions\n> like compare_scalars_simple don't have correlates named\n> compare_scalars_complex or such, so the reader cannot infer by\n> comparison what the difference might be between a \"simple\" case and\n> some non-\"simple\" case. The functions' issimple (or issimplep)\n> argument are undocumented.\n> \n> There is a comment:\n> \n> /* * statext_mcv_clauselist_selectivity * Estimate clauses using\n> the best multi-column statistics. .... * * - simple selectivity:\n> Computed without extended statistics, i.e. as if the *\n> columns/clauses were independent. * .... */\n> \n> but it takes a while to find if you search for \"issimple\".\n> \n\nYeah, true. 
This was added a while ago when Dean reworked the estimation \n(based on MCV), and it seemed clear back then. But now a comment \nexplaining this concept (and how it affects the estimation) would be \nhelpful. I'll try digging in the archives for the details.\n\n> \n> In both scalarineqsel_wrapper() and eqsel_internal(), the call to\n> matching_restriction_variables() should usually return false, since\n> comparing a variable to itself is an unusual case. The next call is\n> to get_restriction_variable(), which repeats the work of examining\n> the left and right variables. So in almost all cases, after throwing\n> away the results of:\n> \n> examine_variable(root, left, varRelid, &ldata); \n> examine_variable(root, right, varRelid, &rdata);\n> \n> performed in matching_restriction_variables(), we'll do exactly the\n> same work again (with one variable named differently) in\n> get_restriction_variable():\n> \n> examine_variable(root, left, varRelid, vardata); \n> examine_variable(root, right, varRelid, &rdata);\n> \n> That'd be fine if example_variable() were a cheap function, but it\n> appears not to be. Do you think you could save the results rather\n> than recomputing them? It's a little messy, since these are the only\n> two functions out of about ten which follow this pattern, so you'd\n> have to pass NULLs into get_restriction_variable() from the other\n> eight callers, but it still looks like that would be a win.\n> \n\nI had similar concerns, although I don't think those functions are very \nexpensive compared to the rest of the estimation code. I haven't done \nany measurements yet, though.\n\nBut I don't think saving the results is the way to go - in a way, we \nalready store the stats (which seems like the most expensive bit) in \nsyscache. 
It seems better to just simplify examine_variable() so that it \ndoes not lookup the statistics, which we don't need here at all.\n\n\nThe attached version of the patches fixes the other bugs reported here \nso far - most importantly it reworks how we set issimple while examining \nthe clauses, so that it's never skips the initialization. Hopefully the \nadded comments also explain it a bit more clearly.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 30 Aug 2021 17:59:50 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Mon, Aug 30, 2021 at 9:00 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 8/28/21 6:30 PM, Mark Dilger wrote:\n> >\n> >\n> >> On Aug 28, 2021, at 6:52 AM, Tomas Vondra\n> >> <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Part 0003 fixes handling of those clauses so that we don't treat\n> >> them as simple, but it does that by tweaking\n> >> statext_is_compatible_clause(), as suggested by Dean.\n> >\n> > Function examine_opclause_args() doesn't set issimple to anything in\n> > the IsA(rightop, Const) case, but assigns *issimplep = issimple at\n> > the bottom. The compiler is not complaining about using a possibly\n> > uninitialized variable, but if I change the \"return true\" on the very\n> > next line to \"return issimple\", the compiler complains quite loudly.\n> >\n>\n> Yeah, true. Thanks for noticing this was a bug - I forgot to set the\n> issimple variable in the first branch.\n>\n> >\n> > Some functions define bool *issimple, others bool *issimplep and bool\n> > issimple. You might want to standardize the naming.\n> >\n>\n> I think the naming is standard with respect to the surrounding code. If\n> the other parameters use \"p\" to mark \"pointer\" then issimplep is used,\n> but in other places it's just \"issimple\". IMHO this is appropriate.\n>\n> > It's difficult to know what \"simple\" means in extended_stats.c.\n> > There is no file-global comment explaining the concept, and functions\n> > like compare_scalars_simple don't have correlates named\n> > compare_scalars_complex or such, so the reader cannot infer by\n> > comparison what the difference might be between a \"simple\" case and\n> > some non-\"simple\" case. The functions' issimple (or issimplep)\n> > argument are undocumented.\n> >\n> > There is a comment:\n> >\n> > /* * statext_mcv_clauselist_selectivity * Estimate clauses using\n> > the best multi-column statistics. .... * * - simple selectivity:\n> > Computed without extended statistics, i.e. 
as if the *\n> > columns/clauses were independent. * .... */\n> >\n> > but it takes a while to find if you search for \"issimple\".\n> >\n>\n> Yeah, true. This was added a while ago when Dean reworked the estimation\n> (based on MCV), and it seemed clear back then. But now a comment\n> explaining this concept (and how it affects the estimation) would be\n> helpful. I'll try digging in the archives for the details.\n>\n> >\n> > In both scalarineqsel_wrapper() and eqsel_internal(), the call to\n> > matching_restriction_variables() should usually return false, since\n> > comparing a variable to itself is an unusual case. The next call is\n> > to get_restriction_variable(), which repeats the work of examining\n> > the left and right variables. So in almost all cases, after throwing\n> > away the results of:\n> >\n> > examine_variable(root, left, varRelid, &ldata);\n> > examine_variable(root, right, varRelid, &rdata);\n> >\n> > performed in matching_restriction_variables(), we'll do exactly the\n> > same work again (with one variable named differently) in\n> > get_restriction_variable():\n> >\n> > examine_variable(root, left, varRelid, vardata);\n> > examine_variable(root, right, varRelid, &rdata);\n> >\n> > That'd be fine if example_variable() were a cheap function, but it\n> > appears not to be. Do you think you could save the results rather\n> > than recomputing them? It's a little messy, since these are the only\n> > two functions out of about ten which follow this pattern, so you'd\n> > have to pass NULLs into get_restriction_variable() from the other\n> > eight callers, but it still looks like that would be a win.\n> >\n>\n> I had similar concerns, although I don't think those functions are very\n> expensive compared to the rest of the estimation code. I haven't done\n> any measurements yet, though.\n>\n> But I don't think saving the results is the way to go - in a way, we\n> already store the stats (which seems like the most expensive bit) in\n> syscache. 
It seems better to just simplify examine_variable() so that it\n> does not lookup the statistics, which we don't need here at all.\n>\n>\n> The attached version of the patches fixes the other bugs reported here\n> so far - most importantly it reworks how we set issimple while examining\n> the clauses, so that it's never skips the initialization. Hopefully the\n> added comments also explain it a bit more clearly.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\nHi,\nFor patch 0002,\n\n+ s1 = statext_clauselist_selectivity(root, clauses,\nvarRelid,\n+ jointype, sjinfo,\nrel,\n+ &estimatedclauses,\nfalse);\n+\n+ estimated = (bms_num_members(estimatedclauses) == 1);\n\nI took a look at clauselist_apply_dependencies() (called by\nstatext_clauselist_selectivity) where estimatedclauses is modified.\nSince the caller would not use the returned Selectivity if number of\nelements in estimatedclauses is greater than 1, I wonder\nif a parameter can be added to clauselist_apply_dependencies() which\nindicates early return if the second element is added to estimatedclauses.\n\nCheers",
"msg_date": "Mon, 30 Aug 2021 15:14:28 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On 8/31/21 00:14, Zhihong Yu wrote:\n> Hi,\n> For patch 0002,\n> \n> + s1 = statext_clauselist_selectivity(root, clauses, \n> varRelid,\n> + jointype, \n> sjinfo, rel,\n> + \n> &estimatedclauses, false);\n> +\n> + estimated = (bms_num_members(estimatedclauses) == 1);\n> \n> I took a look at clauselist_apply_dependencies() (called by \n> statext_clauselist_selectivity) where estimatedclauses is modified.\n> Since the caller would not use the returned Selectivity if number of \n> elements in estimatedclauses is greater than 1, I wonder\n> if a parameter can be added to clauselist_apply_dependencies() which \n> indicates early return if the second element is added to estimatedclauses.\n> \n\nHmmm, I'm not sure I understand your point. Are you suggesting there's a \nbug in not updating the bitmap, or would this be an optimization? Can \nyou give an example of a query affected by this?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 Dec 2021 03:04:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "Hi,\n\nI finally got around to this patch again, focusing mostly on the first \npart that simply returns either 1.0 or 0.0 for Var op Var conditions \n(i.e. the part not really using extended statistics).\n\nI have been unhappy about using examine_variable, which does various \nexpensive things like searching for statistics (which only got worse \nbecause now we're also looking for expression stats). But we don't \nreally need the stats - we just need to check the Vars match (same \nrelation, same attribute). So 0002 fixes this.\n\nWhich got me thinking that maybe we don't need to restrict this to Var \nnodes only. We can just as easily compare arbitrary expressions, \nprovided it's for the same relation and there are no volatile functions. \nSo 0003 does this. Conditions with the same complex expression on each \nside of an operator are probably fairly rare, but it's cheap so why not.\n\n0004 and 0005 parts are unchanged.\n\n\nThe next steps is adding some tests to the first parts, and extending \nthe tests in the main patch (to also use more complex expressions, if \n0003 gets included).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 13 Dec 2021 03:21:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Sun, Dec 12, 2021 at 6:04 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 8/31/21 00:14, Zhihong Yu wrote:\n> > Hi,\n> > For patch 0002,\n> >\n> > + s1 = statext_clauselist_selectivity(root, clauses,\n> > varRelid,\n> > + jointype,\n> > sjinfo, rel,\n> > +\n> > &estimatedclauses, false);\n> > +\n> > + estimated = (bms_num_members(estimatedclauses) == 1);\n> >\n> > I took a look at clauselist_apply_dependencies() (called by\n> > statext_clauselist_selectivity) where estimatedclauses is modified.\n> > Since the caller would not use the returned Selectivity if number of\n> > elements in estimatedclauses is greater than 1, I wonder\n> > if a parameter can be added to clauselist_apply_dependencies() which\n> > indicates early return if the second element is added\n> to estimatedclauses.\n> >\n>\n> Hmmm, I'm not sure I understand your point. Are you suggesting there's a\n> bug in not updating the bitmap, or would this be an optimization? Can\n> you give an example of a query affected by this?\n>\n> Hi,\nMy previous comment was from 3 months ago - let me see if I can come up\nwith an example.\n\nCheers\n\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>",
"msg_date": "Sun, 12 Dec 2021 18:50:22 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Dec 12, 2021, at 6:21 PM, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> <0001-Improve-estimates-for-Var-op-Var-with-the-same-Var.patch>\n\n+ * It it's (variable = variable) with the same variable on both sides, it's\n\ns/It it's/If it's/\n\n0001 lacks regression coverage.\n\n> <0002-simplification.patch>\n\nChanging comments introduced by patch 0001 in patch 0002 just creates git churn:\n\n- * estimate the selectivity as 1.0 (or 0.0 if it's negated).\n+ * estimate the selectivity as 1.0 (or 0.0 when it's negated).\n\nand:\n\n * matching_restriction_variable\n- * Examine the args of a restriction clause to see if it's of the\n- * form (variable op variable) with the same variable on both sides.\n+ * Check if the two arguments of a restriction clause refer to the same\n+ * variable, i.e. if the condition is of the form (variable op variable).\n+ * We can deduce selectivity for such (in)equality clauses.\n\n0002 also lacks regression coverage.\n\n> <0003-relax-the-restrictions.patch>\n\n0003 also lacks regression coverage.\n\n> <0004-main-patch.patch>\n\nOk.\n\n> <0005-Don-t-treat-Var-op-Var-as-simple-clauses.patch>\n\n0005 again lacks regression coverage.\n\n\n\nThere might be a problem in how selectivity thinks about comparison between identical columns from the NEW and OLD pseudotables. To show this, add an Assert to see where matching_restriction_variables() might return true:\n\n--- a/src/backend/utils/adt/selfuncs.c\n+++ b/src/backend/utils/adt/selfuncs.c\n@@ -4952,6 +4952,8 @@ matching_restriction_variables(PlannerInfo *root, List *args, int varRelid)\n return false;\n \n /* The two variables need to match */\n+ Assert(!equal(left, right));\n+\n return equal(left, right);\n\nThis results in the regression tests failing on \"update rtest_emp set ename = 'wiecx' where ename = 'wiecc';\". 
It may seem counterintuitive that matching_restriction_variables() would return true for a where-clause with only one occurrence of variable \"ename\", until you read the rule defined in rules.sql:\n\n create rule rtest_emp_upd as on update to rtest_emp where new.salary != old.salary do\n insert into rtest_emplog values (new.ename, current_user,\n 'honored', new.salary, old.salary);\n\nI think what's really happening here is that \"new.salary != old.salary\" is being processed by matching_restriction_variables() and returning that the new.salary refers to the same thing that old.salary refers to.\n\nHere is the full stack trace, for reference:\n\n(lldb) bt\n* thread #1, stop reason = signal SIGSTOP\n frame #0: 0x00007fff6d8fd33a libsystem_kernel.dylib`__pthread_kill + 10\n frame #1: 0x00007fff6d9b9e60 libsystem_pthread.dylib`pthread_kill + 430\n frame #2: 0x00007fff6d884808 libsystem_c.dylib`abort + 120\n frame #3: 0x00000001048a6c31 postgres`ExceptionalCondition(conditionName=\"!equal(left, right)\", errorType=\"FailedAssertion\", fileName=\"selfuncs.c\", lineNumber=4955) at assert.c:69:2\n frame #4: 0x000000010481e733 postgres`matching_restriction_variables(root=0x00007fe65e02b2d0, args=0x00007fe65e02bf38, varRelid=0) at selfuncs.c:4955:2\n frame #5: 0x000000010480f63c postgres`eqsel_internal(fcinfo=0x00007ffeebb9aeb8, negate=true) at selfuncs.c:265:6\n frame #6: 0x000000010481040a postgres`neqsel(fcinfo=0x00007ffeebb9aeb8) at selfuncs.c:565:2\n frame #7: 0x00000001048b420c postgres`FunctionCall4Coll(flinfo=0x00007ffeebb9af38, collation=0, arg1=140627396440784, arg2=901, arg3=140627396443960, arg4=0) at fmgr.c:1212:11\n frame #8: 0x00000001048b50e6 postgres`OidFunctionCall4Coll(functionId=102, collation=0, arg1=140627396440784, arg2=901, arg3=140627396443960, arg4=0) at fmgr.c:1448:9\n * frame #9: 0x0000000104557e4d postgres`restriction_selectivity(root=0x00007fe65e02b2d0, operatorid=901, args=0x00007fe65e02bf38, inputcollid=0, varRelid=0) at plancat.c:1828:26\n 
frame #10: 0x00000001044d4c76 postgres`clause_selectivity_ext(root=0x00007fe65e02b2d0, clause=0x00007fe65e02bfe8, varRelid=0, jointype=JOIN_INNER, sjinfo=0x0000000000000000, use_extended_stats=true) at clausesel.c:902:10\n frame #11: 0x00000001044d4186 postgres`clauselist_selectivity_ext(root=0x00007fe65e02b2d0, clauses=0x00007fe65e02cdd0, varRelid=0, jointype=JOIN_INNER, sjinfo=0x0000000000000000, use_extended_stats=true) at clausesel.c:185:8\n frame #12: 0x00000001044d3f97 postgres`clauselist_selectivity(root=0x00007fe65e02b2d0, clauses=0x00007fe65e02cdd0, varRelid=0, jointype=JOIN_INNER, sjinfo=0x0000000000000000) at clausesel.c:108:9\n frame #13: 0x00000001044de192 postgres`set_baserel_size_estimates(root=0x00007fe65e02b2d0, rel=0x00007fe65e01e640) at costsize.c:4931:3\n frame #14: 0x00000001044d16be postgres`set_plain_rel_size(root=0x00007fe65e02b2d0, rel=0x00007fe65e01e640, rte=0x00007fe65e02ac40) at allpaths.c:584:2\n frame #15: 0x00000001044d0a79 postgres`set_rel_size(root=0x00007fe65e02b2d0, rel=0x00007fe65e01e640, rti=1, rte=0x00007fe65e02ac40) at allpaths.c:413:6\n frame #16: 0x00000001044cd990 postgres`set_base_rel_sizes(root=0x00007fe65e02b2d0) at allpaths.c:324:3\n frame #17: 0x00000001044cd660 postgres`make_one_rel(root=0x00007fe65e02b2d0, joinlist=0x00007fe65e02ccd8) at allpaths.c:186:2\n frame #18: 0x0000000104511b3b postgres`query_planner(root=0x00007fe65e02b2d0, qp_callback=(postgres`standard_qp_callback at planner.c:3071), qp_extra=0x00007ffeebb9b648) at planmain.c:276:14\n frame #19: 0x0000000104514dfc postgres`grouping_planner(root=0x00007fe65e02b2d0, tuple_fraction=0) at planner.c:1448:17\n frame #20: 0x00000001045138fb postgres`subquery_planner(glob=0x00007fe66e010cb0, parse=0x00007fe66e0107a8, parent_root=0x0000000000000000, hasRecursion=false, tuple_fraction=0) at planner.c:1025:2\n frame #21: 0x0000000104511faa postgres`standard_planner(parse=0x00007fe66e0107a8, query_string=\"update rtest_emp set ename = 'wiecx' where ename = 'wiecc';\", 
cursorOptions=2048, boundParams=0x0000000000000000) at planner.c:406:9\n frame #22: 0x0000000104511d3a postgres`planner(parse=0x00007fe66e0107a8, query_string=\"update rtest_emp set ename = 'wiecx' where ename = 'wiecc';\", cursorOptions=2048, boundParams=0x0000000000000000) at planner.c:277:12\n frame #23: 0x00000001046957fb postgres`pg_plan_query(querytree=0x00007fe66e0107a8, query_string=\"update rtest_emp set ename = 'wiecx' where ename = 'wiecc';\", cursorOptions=2048, boundParams=0x0000000000000000) at postgres.c:848:9\n frame #24: 0x00000001046959d2 postgres`pg_plan_queries(querytrees=0x00007fe65e02b170, query_string=\"update rtest_emp set ename = 'wiecx' where ename = 'wiecc';\", cursorOptions=2048, boundParams=0x0000000000000000) at postgres.c:940:11\n frame #25: 0x0000000104698d0d postgres`exec_simple_query(query_string=\"update rtest_emp set ename = 'wiecx' where ename = 'wiecc';\") at postgres.c:1134:19\n frame #26: 0x0000000104697f4c postgres`PostgresMain(dbname=\"regression\", username=\"mark.dilger\") at postgres.c:4498:7\n frame #27: 0x000000010459b859 postgres`BackendRun(port=0x00007fe65dc043e0) at postmaster.c:4594:2\n frame #28: 0x000000010459adba postgres`BackendStartup(port=0x00007fe65dc043e0) at postmaster.c:4322:3\n frame #29: 0x0000000104599ade postgres`ServerLoop at postmaster.c:1802:7\n frame #30: 0x0000000104597206 postgres`PostmasterMain(argc=8, argv=0x00007fe66dc06300) at postmaster.c:1474:11\n frame #31: 0x000000010446be4f postgres`main(argc=8, argv=0x00007fe66dc06300) at main.c:198:3\n frame #32: 0x00007fff6d7b5cc9 libdyld.dylib`start + 1\n\nIn frame 9, operatorid=901. 
Checking pg_operator.dat, that's:\n\n{ oid => '901', descr => 'not equal',\n oprname => '<>', oprleft => 'money', oprright => 'money', oprresult => 'bool',\n oprcom => '<>(money,money)', oprnegate => '=(money,money)',\n oprcode => 'cash_ne', oprrest => 'neqsel', oprjoin => 'neqjoinsel' },\n\nLooking at frame 10, restrict_selectivity is being called with clause={OPEXPR :opno 901 :opfuncid 889 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 2 :vartype 790 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 3 :varattnosyn 2 :location -1} {VAR :varno 1 :varattno 2 :vartype 790 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location -1}) :location -1}\n\n\nMaybe there is some reason this is ok. Any thoughts about it?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 21 Dec 2021 16:28:25 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "\n\n> On Dec 21, 2021, at 4:28 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Maybe there is some reason this is ok.\n\n... and there is. Sorry for the noise. The planner appears to be smart enough to know that column \"salary\" is not being changed, and therefore NEW.salary and OLD.salary are equal. If I test a different update statement that contains a new value for \"salary\", the added assertion is not triggered.\n\n(I didn't quite realize what the clause's varnosyn field was telling me until after I hit \"send\".)\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 21 Dec 2021 16:50:29 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Mon, 13 Dec 2021 at 02:21, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> I finally got around to this patch again, focusing mostly on the first\n> part that simply returns either 1.0 or 0.0 for Var op Var conditions\n> (i.e. the part not really using extended statistics).\n>\n\nJust starting to look at this again, starting with 0001 ...\n\nThis needs to account for nullfrac, since x = x is only true if x is not null.\n\nI don't like how matching_restriction_variables() is adding a\nnon-trivial amount of overhead to each of these selectivity functions,\nby calling examine_variable() for each side, duplicating what\nget_restriction_variable() does later on.\n\nI think it would probably be better to make the changes in\nvar_eq_non_const(), where all of this information is available, and\nget rid of matching_restriction_variables() (it doesn't look like the\nnew scalarineqsel_wrapper() code really needs\nmatching_restriction_variables() - it can just use what\nget_restriction_variable() already returns).\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 9 Jul 2022 13:04:57 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On 7/9/22 14:04, Dean Rasheed wrote:\n> On Mon, 13 Dec 2021 at 02:21, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> I finally got around to this patch again, focusing mostly on the first\n>> part that simply returns either 1.0 or 0.0 for Var op Var conditions\n>> (i.e. the part not really using extended statistics).\n>>\n> \n> Just starting to look at this again, starting with 0001 ...\n> \n> This needs to account for nullfrac, since x = x is only true if x is not null.\n> \n\nRight, I forgot to account for nullfrac.\n\n> I don't like how matching_restriction_variables() is adding a\n> non-trivial amount of overhead to each of these selectivity functions,\n> by calling examine_variable() for each side, duplicating what\n> get_restriction_variable() does later on.\n> \n\nBut matching_restriction_variables() does not use examine_variable()\nanymore. It did, originally, but 0002 removed all of that. Now it does\njust pull_varnos() and that's it. I admit leaving those bits unsquashed\nwas a bit confusing, but the first part was really 0001+0002+0003.\n\n\n> I think it would probably be better to make the changes in\n> var_eq_non_const(), where all of this information is available, and\n> get rid of matching_restriction_variables() (it doesn't look like the\n> new scalarineqsel_wrapper() code really needs\n> matching_restriction_variables() - it can just use what\n> get_restriction_variable() already returns).\n> \n\nI'm not sure how could that help. 
var_eq_non_const only really applies\nto eqsel_internal(), so we'd still need to deal with various other\nplaces for inequality operators.\n\n\nAttached is a rebased and somewhat cleaned up version of the patch\nseries, addressing the review comments so far and squashing the bits I\npreviously kept separate to showcase the changes.\n\nI've also added a bunch of regression tests - queries with (Var op Var)\nclauses of varying complexity, to demonstrate the effect of each patch.\nI added them as 0001, so it's clear how the individual patches affect\nthe results.\n\nI've also written a single generator that generates both data and queries\nwith (Var op Var) clauses, and then runs them on multiple connections to\ncompare the estimates. It requires some effort to run that (setting up\nthe clusters, ...) but shouldn't be too hard to get it working.\n\nThe results are pretty massive (thousands of queries), but a simple\nsummary showing percentiles (0.25, 0.5, 0.75, 0.9) for estimation error,\ncalculated as (+1 to deal with 0 actual rows)\n\n abs(estimated - actual) / (actual + 1)\n\nlooks like this:\n\n clauses | stats | master | patched\n ---------+-------+---------------------------+------------------------\n 1 | no | {0.39, 0.67, 4.17, 10000} | {0.29, 0.67, 1, 10000}\n 2 | no | {0.38, 0.79, 50, 9950} | {0.26, 0.73, 1, 3333}\n 3 | no | {0.3, 0.84, 50, 3317} | {0.22, 0.78, 1, 1111}\n 4 | no | {0.24, 0.84, 25, 1852} | {0.14, 0.78, 1, 370}\n 5 | no | {0.2, 0.85, 17, 1100} | {0.11, 0.78, 1, 50}\n 1 | yes | {0.39, 0.67, 4.17, 10000} | {0, 0.14, 1, 1}\n 2 | yes | {0.38, 0.79, 50, 9950} | {0, 0.15, 1, 1}\n 3 | yes | {0.3, 0.84, 50, 3317} | {0, 0.15, 1, 1}\n 4 | yes | {0.24, 0.84, 25, 1852} | {0, 0.17, 1, 1}\n 5 | yes | {0.2, 0.85, 17, 1100} | {0, 0.14, 1, 1}\n\nThis seems pretty good, IMO. Without extended stats on the columns,\nthere's only so much we can do, but even then the errors got much\nsmaller. 
With the stats it's clearly way better.\n\nOf course, take this with a grain of salt - those are randomly generated\nsynthetic queries, with all queries being considered equally \"likely\".\nBut clearly some queries are more likely to appear in the applications,\nand those are more important to estimate. However, the point of this was\nto see if there are classes of queries that would get much worse, and I\nhaven't found anything like that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 21 Jul 2022 13:42:19 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Thu, 21 Jul 2022 at 12:42, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> > This needs to account for nullfrac, since x = x is only true if x is not null.\n>\n> Right, I forgot to account for nullfrac.\n>\n\nDitto variable <= variable\n\n> > I don't like how matching_restriction_variables() is adding a\n> > non-trivial amount of overhead to each of these selectivity functions,\n> > by calling examine_variable() for each side, duplicating what\n> > get_restriction_variable() does later on.\n>\n> But matching_restriction_variables() does not use examine_variable()\n> anymore. It did, originally, but 0002 removed all of that. Now it does\n> just pull_varnos() and that's it. I admit leaving those bits unsquashed\n> was a bit confusing, but the first part was really 0001+0002+0003.\n>\n\nAh, right, I only looked at 0001, because I thought that was the main\npart that I hadn't previously looked at.\n\nSo my previous concern with matching_restriction_variables() was how\nmany extra cycles it was adding to test for what should be a pretty\nuncommon case. Looking at the new version, I think it's a lot better,\nbut perhaps it would be more efficient to call equal() as soon as\nyou've extracted \"left\" and \"right\", so it can bail out earlier and\nfaster when they're not equal. I think a call to equal() should be\nvery fast compared to pull_varnos() and contain_volatile_functions().\n\n> Attached is a rebased and somewhat cleaned up version of the patch\n> series, addressing the review comments so far and squashing the bits I\n> previously kept separate to showcase the changes.\n>\n> I've also added a bunch of regression tests - queries with (Var op Var)\n> clauses of varying complexity, to demonstrate the effect of each patch.\n> I added them as 0001, so it's clear how the individual patches affect\n> the results.\n>\n\nCool. 
That's much clearer, and the results look quite good.\n\nThe main thing that jumps out at me now is the whole \"issimple\"\nprocessing stuff. Those changes turned out to be a lot more invasive\nthan I thought. I don't think this part is correct:\n\n /*\n * All AND/OR clauses are considered complex, even if all arguments are\n * simple clauses. For NOT clauses we need to check the argument and then\n * we can update the flag.\n *\n * XXX Maybe for AND/OR we should check if all arguments reference the\n * same attnum, and consider them complex only when there are multiple\n * attnum values (i.e. different Vars)?\n */\n\nI think the XXX part of that comment is right, and that's what the\noriginal code did.\n\nI had to go remind myself what \"simple\" was intended for, so apologies\nif this is really pedestrian:\n\nThe basic idea of a \"simple\" clause was meant to mean any clause\nthat's likely to be better estimated by standard statistics, rather\nthan extended statistics (and \"issimple\" only applies when such\nclauses are combined using \"OR\"). So, for example, given a query with\n\n WHERE a = 1 OR (b > 0 AND b < 10)\n\nboth \"a = 1\" and \"b > 0 AND b < 10\" should be considered \"simple\",\nsince the standard statistics code is likely to estimate them quite\nwell. For the second clause, it might use histograms and the\nRangeQueryClause-machinery, which ought to work well, whereas applying\na multivariate MCV list to \"b > 0 AND b < 10\" in isolation would\nprobably make its estimate worse.\n\nSo then, in this example, what the \"OR\" handling in\nstatext_mcv_clauselist_selectivity() would end up doing is:\n\n P(a = 1 OR (b > 0 AND b < 10))\n = P(a = 1)\n + P(b > 0 AND b < 10)\n - P(a = 1 AND b > 0 AND b < 10) # Overlap\n\nand only use extended statistics for the overlap term, since the other\n2 clauses are \"simple\", and best estimated without extended stats. 
The\npatch changes that, which ends up making things worse in some cases.\n\nIs it the case that the only reason for changing the \"issimple\"\nhandling was because the standard statistics code didn't work well for\nthings like \"a < a\", and so we wanted to treat that as not-simple? If\nso, given the selfuncs improvements, perhaps that's no longer\nnecessary, and the original definition of \"simple\" is OK. IOW, is 0004\nnecessary, given 0002?.\n\nI notice that 0004 didn't change any test results, so if there are\ncases where it improves things, they aren't tested.\n\nHere's a simple example using the WHERE clause above. Without the\npatch, extended stats improved the estimate, but with the patch they\ndon't anymore:\n\nDROP TABLE IF EXISTS foo;\nCREATE TABLE foo (a int, b int);\nINSERT INTO foo SELECT x/10+1, x FROM generate_series(1,10000) g(x);\nANALYSE foo;\nEXPLAIN ANALYSE SELECT * FROM foo WHERE a = 1 OR (b > 0 AND b < 10);\n\nEstimated: 18\nActual: 9\n\nCREATE STATISTICS foo_s (mcv) ON a,b FROM foo;\nANALYSE foo;\nEXPLAIN ANALYSE SELECT * FROM foo WHERE a = 1 OR (b > 0 AND b < 10);\n\nEstimated: 9 without the patch, 18 with the patch\nActual: 9\n\nIt's a worry that none of the existing regression tests picked up on\nthat. Perhaps a similar test could be added using the existing test\ndata. Otherwise, I think it'd be worth adding a new test with similar\ndata to the above.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 22 Jul 2022 14:17:47 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 02:17:47PM +0100, Dean Rasheed wrote:\n> It's a worry that none of the existing regression tests picked up on\n> that. Perhaps a similar test could be added using the existing test\n> data. Otherwise, I think it'd be worth adding a new test with similar\n> data to the above.\n\nThis feedback has not been answered for two months, so marked this\nentry as RwF for now.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 17:04:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use extended statistics to estimate (Var op Var) clauses"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nFor historical reasons, now we have two differently named but similar\nternary data types in TSearch and Gin text-related types. Before v13 there\nwas also Gin's private TS_execute() version, from which we eventually\nshifted to Tsearch's TS_execute().\n\nTo make things more even and beautiful I've made a minor refactor to\ncombine two left ternary types into one.\n\n<gin.h>\ntypedef char GinTernaryValue\n#define GIN_FALSE 0\n#define GIN_TRUE 1\n#define GIN_MAYBE 2\n\n<ts_utils.h>\ntypedef enum { TS_NO, TS_YES, TS_MAYBE } TSTernaryValue;\n\nThe change is simple and most of it is just the text replacement. The only\nthing worth noting is that some code does pointer cast between *bool and\n*TernaryValue so the size of them should coincide. (Declaration done in\n*char* type because simple enum on most architectures will be of *int*\nsize). There is no actual change in the code despite the order of header\nfiles inclusion in some modules.\n\nWhat do you think about this?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Fri, 13 Nov 2020 13:04:41 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Combine same ternary types in GIN and TSearch"
},
{
"msg_contents": "On 13/11/2020 11:04, Pavel Borisov wrote:\n> Hi, hackers!\n> \n> For historical reasons, now we have two differently named but similar \n> ternary data types in TSearch and Gin text-related types. Before v13 \n> there was also Gin's private TS_execute() version, from which we \n> eventually shifted to Tsearch's TS_execute().\n> \n> To make things more even and beautiful I've made a minor refactor to \n> combine two left ternary types into one.\n> \n> <gin.h>\n> typedef char GinTernaryValue\n> #define GIN_FALSE 0\n> #define GIN_TRUE 1\n> #define GIN_MAYBE 2\n> \n> <ts_utils.h>\n> typedef enum { TS_NO, TS_YES, TS_MAYBE } TSTernaryValue;\n> \n> The change is simple and most of it is just the text replacement. The \n> only thing worth noting is that some code does pointer cast between \n> *bool and *TernaryValue so the size of them should coincide. \n> (Declaration done in /char/ type because simple enum on most \n> architectures will be of /int/ size). There is no actual change in the \n> code despite the order of header files inclusion in some modules.\n> \n> What do you think about this?\n\nGIN is not just for full-text search, so using TSTernaryValue in \nGinScanKeyData is wrong. And it would break existing extensions.\n\nI didn't look much further than that, but I've got a feeling that \ncombining those is a bad idea. TSTernaryValue is used in text-search \ncode, even when there is no GIN involved. It's a separate concept, even \nthough it happens to have the same values.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 13 Nov 2020 13:08:44 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Combine same ternary types in GIN and TSearch"
},
{
"msg_contents": ">\n> GIN is not just for full-text search, so using TSTernaryValue in\n> GinScanKeyData is wrong. And it would break existing extensions.\n>\n> I didn't look much further than that, but I've got a feeling that\n> combining those is a bad idea. TSTernaryValue is used in text-search\n> code, even when there is no GIN involved. It's a separate concept, even\n> though it happens to have the same values.\n\nProbably you are right. But now the code already rely on equivalent value\nassignments for GinTernaryValue and TSTernaryValue\n(in checkcondition_gin()). So my idea was to combine them and use them like\nwe use other global data types. We may declare it somewhere outside both\ngin and search. Or just leave as it is.\n\nThank you, Heikki for your feedback!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nGIN is not just for full-text search, so using TSTernaryValue in \nGinScanKeyData is wrong. And it would break existing extensions.\n\nI didn't look much further than that, but I've got a feeling that \ncombining those is a bad idea. TSTernaryValue is used in text-search \ncode, even when there is no GIN involved. It's a separate concept, even \nthough it happens to have the same values.Probably you are right. But now the code already rely on equivalent value assignments for GinTernaryValue and TSTernaryValue (in checkcondition_gin()). So my idea was to combine them and use them like we use other global data types. We may declare it somewhere outside both gin and search. Or just leave as it is.Thank you, Heikki for your feedback!-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 13 Nov 2020 15:47:59 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Combine same ternary types in GIN and TSearch"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 13/11/2020 11:04, Pavel Borisov wrote:\n>> For historical reasons, now we have two differently named but similar \n>> ternary data types in TSearch and Gin text-related types. Before v13 \n>> there was also Gin's private TS_execute() version, from which we \n>> eventually shifted to Tsearch's TS_execute().\n>> To make things more even and beautiful I've made a minor refactor to \n>> combine two left ternary types into one.\n\n> GIN is not just for full-text search, so using TSTernaryValue in \n> GinScanKeyData is wrong. And it would break existing extensions.\n\n> I didn't look much further than that, but I've got a feeling that \n> combining those is a bad idea. TSTernaryValue is used in text-search \n> code, even when there is no GIN involved. It's a separate concept, even \n> though it happens to have the same values.\n\nI'm definitely not on board with importing a TS-specific type into GIN,\nand even less with requiring major GIN headers to import random\nTS-related headers.\n\nThere might be a case for having just one neutrally-named \"ternary\" enum\ntype, declared in a neutral (probably new) header, that both areas of\nthe code could use. But it's not clear that it'd be worth the code\nthrashing to do that. As Heikki says, this will surely break some\nextensions; and I'd prefer that there be some non-cosmetic benefit\nif we ask extension authors to cope with that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 13 Nov 2020 11:39:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Combine same ternary types in GIN and TSearch"
}
] |
[
{
"msg_contents": "If a file is modified and becomes larger in the source system while \npg_rewind is running, pg_rewind can leave behind a partial copy of file. \nThat's by design, and it's OK for relation files because they're \nreplayed from WAL. But it can cause trouble for configuration files.\n\nI ran into this while playing with pg_auto_failover. After failover, \npg_auto_failover would often launch pg_rewind, and run ALTER SYSTEM on \nthe primary while pg_rewind was running. The resulting rewound system \nwould fail to start up:\n\nNov 13 09:24:42 pg-node-a pg_autoctl[2217]: 09:24:42 2220 ERROR \n2020-11-13 09:24:32.547 GMT [2246] LOG: syntax error in file \n\"/data/pgdata/postgresql.auto.conf\" line 4, near token \"'\"\nNov 13 09:24:42 pg-node-a pg_autoctl[2217]: 09:24:42 2220 ERROR \n2020-11-13 09:24:32.547 GMT [2246] FATAL: configuration file \n\"postgresql.auto.conf\" contains errors\n\nAttached is a patch to mitigate that. It changes pg_rewind so that when \nit copies a whole file, it ignores the original file size. It's not a \ncomplete cure: it still believes the original size for files larger than \n1 MB. That limit was just expedient given the way the chunking logic in \nlibpq_source.c works, but should be enough for configuration files.\n\nThere's another race condition that this doesn't try to fix: If a file \nis modified while it's being copied, you can have a torn file with one \nhalf of the file from the old version, and one half from the new. That's \na much more narrow window, though, and pg_basebackup has the same problem.\n\n- Heikki",
"msg_date": "Fri, 13 Nov 2020 11:46:01 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "pg_rewind copies"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nHello\r\n\r\nThe patch seems to do as described and the regression and tap tests are passing \r\n+\t/*\r\n+\t * A local source is not expected to change while we're rewinding, so check\r\n+\t * that we size of the file matches our earlier expectation.\r\n+\t */\r\nIs this a tyoo?\r\n\r\nthanks\r\nCary",
"msg_date": "Tue, 15 Dec 2020 22:08:50 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind copies"
},
{
"msg_contents": "On 16/12/2020 00:08, Cary Huang wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n> \n> Hello\n> \n> The patch seems to do as described and the regression and tap tests are passing\n> +\t/*\n> +\t * A local source is not expected to change while we're rewinding, so check\n> +\t * that we size of the file matches our earlier expectation.\n> +\t */\n> Is this a tyoo?\n\nYep, thanks! Attached is a new patch version, with that fixed and \nrebased. No other changes.\n\n- Heikki",
"msg_date": "Fri, 22 Jan 2021 15:26:12 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind copies"
},
{
"msg_contents": "On 1/22/21 8:26 AM, Heikki Linnakangas wrote:\n> On 16/12/2020 00:08, Cary Huang wrote:\n>> The following review has been posted through the commitfest application:\n>> make installcheck-world: tested, passed\n>> Implements feature: tested, passed\n>> Spec compliant: tested, passed\n>> Documentation: not tested\n>>\n>> Hello\n>>\n>> The patch seems to do as described and the regression and tap tests \n>> are passing\n>> + /*\n>> + * A local source is not expected to change while we're \n>> rewinding, so check\n>> + * that we size of the file matches our earlier expectation.\n>> + */\n>> Is this a tyoo?\n> \n> Yep, thanks! Attached is a new patch version, with that fixed and \n> rebased. No other changes.\n\nCary, does this patch look ready to commit? If so, please change the \nstate in the CF entry to \"Ready for Committer\".\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 25 Mar 2021 09:32:05 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind copies"
},
{
"msg_contents": "> On 25 Mar 2021, at 14:32, David Steele <david@pgmasters.net> wrote:\n> \n> On 1/22/21 8:26 AM, Heikki Linnakangas wrote:\n>> On 16/12/2020 00:08, Cary Huang wrote:\n>>> The following review has been posted through the commitfest application:\n>>> make installcheck-world: tested, passed\n>>> Implements feature: tested, passed\n>>> Spec compliant: tested, passed\n>>> Documentation: not tested\n>>> \n>>> Hello\n>>> \n>>> The patch seems to do as described and the regression and tap tests are passing\n>>> + /*\n>>> + * A local source is not expected to change while we're rewinding, so check\n>>> + * that we size of the file matches our earlier expectation.\n>>> + */\n>>> Is this a tyoo?\n>> Yep, thanks! Attached is a new patch version, with that fixed and rebased. No other changes.\n> \n> Cary, does this patch look ready to commit? If so, please change the state in the CF entry to \"Ready for Committer\".\n\nReading the patch I think it definitely qualifies for RFC state. Heikki, do\nyou have plans to address till patch in the ongoing CF?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 9 Nov 2021 14:11:51 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind copies"
},
{
"msg_contents": "I took another look at this patch, and I think it's ready to go in, it clearly\nfixes a bug that isn't too hard to hit in production settings. To ensure we\ndon't break this I've added a testcase which pipes the pg_rewind --verbose\noutput to a file it's asked to copy, which then guarantees that the file is\ngrowing in size during the operation without need for synchronizing two\nprocesses with IPC::Run (it also passes on Windows in the CI setup).\n\nOne small comment on the patch:\n\n+ snprintf(srcpath, sizeof(srcpath), \"%s/%s\", datadir, path);\n\nThis should IMO check the returnvalue of snprintf to ensure it wasn't\ntruncated. While the risk is exceedingly small, a truncated filename might\nmatch another existing filename and the error not getting caught. There is\nanother instance just like this one in open_target_file() to which I think we\nshould apply the same belts-and-suspenders treatment. I've fixed this in the\nattached version which also have had a pg_indent run on top of a fresh rebase.\n\nThoughts on this version?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Fri, 1 Apr 2022 11:00:00 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind copies"
},
{
"msg_contents": "On 01/04/2022 12:00, Daniel Gustafsson wrote:\n> I took another look at this patch, and I think it's ready to go in, it clearly\n> fixes a bug that isn't too hard to hit in production settings. To ensure we\n> don't break this I've added a testcase which pipes the pg_rewind --verbose\n> output to a file it's asked to copy, which then guarantees that the file is\n> growing in size during the operation without need for synchronizing two\n> processes with IPC::Run (it also passes on Windows in the CI setup).\n> \n> One small comment on the patch:\n> \n> + snprintf(srcpath, sizeof(srcpath), \"%s/%s\", datadir, path);\n> \n> This should IMO check the returnvalue of snprintf to ensure it wasn't\n> truncated. While the risk is exceedingly small, a truncated filename might\n> match another existing filename and the error not getting caught. There is\n> another instance just like this one in open_target_file() to which I think we\n> should apply the same belts-and-suspenders treatment. I've fixed this in the\n> attached version which also have had a pg_indent run on top of a fresh rebase.\n\n> +\tif (len >= sizeof(dstpath))\n> +\t\tpg_fatal(\"filepath buffer too small\");\t/* shouldn't happen */\n\nMakes sense. I would remove the \"shouldn't happen\"; it's not very hard \nto make it happen, you just need a very long target datadir path. And \nrephrase the error message as \"datadir path too long\".\n\nOne typo in the commit message: s/update/updates/.\n\nThanks!\n\n- Heikki\n\n\n",
"msg_date": "Fri, 1 Apr 2022 13:46:01 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind copies"
},
{
"msg_contents": "> On 1 Apr 2022, at 12:46, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n>> +\tif (len >= sizeof(dstpath))\n>> +\t\tpg_fatal(\"filepath buffer too small\");\t/* shouldn't happen */\n> \n> Makes sense. I would remove the \"shouldn't happen\"; it's not very hard to make it happen, you just need a very long target datadir path. And rephrase the error message as \"datadir path too long\".\n\nRight, good point.\n\n> One typo in the commit message: s/update/updates/.\n\nWill fix.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 1 Apr 2022 13:46:19 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind copies"
},
{
"msg_contents": "On 01.04.22 11:00, Daniel Gustafsson wrote:\n> One small comment on the patch:\n> \n> + snprintf(srcpath, sizeof(srcpath), \"%s/%s\", datadir, path);\n> \n> This should IMO check the returnvalue of snprintf to ensure it wasn't\n> truncated. While the risk is exceedingly small, a truncated filename might\n> match another existing filename and the error not getting caught. There is\n> another instance just like this one in open_target_file() to which I think we\n> should apply the same belts-and-suspenders treatment. I've fixed this in the\n> attached version which also have had a pg_indent run on top of a fresh rebase.\n\nWe use snprintf() like that countless times, and approximately none of \nthem check for overflow. So while you are right, this might not be the \nplace to start a new policy.\n\nIf you don't like this approach, use psprintf() perhaps.\n\n\n",
"msg_date": "Mon, 4 Apr 2022 15:08:12 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind copies"
},
{
"msg_contents": "> On 4 Apr 2022, at 15:08, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> We use snprintf() like that countless times, and approximately none of them check for overflow. So while you are right, this might not be the place to start a new policy.\n\nFair enough, I'll remove these hunks before committing and will look a bigger\npicture patch to address these across the codebase later.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 5 Apr 2022 10:11:17 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind copies"
}
] |
[
{
"msg_contents": "When spellchecking one of my patches for submission, a few other typos fell out\nas per the attached. The one change which isn't in a comment is the object\naddress class description used for error messages:\n\n-\t\t\"extented statistics\",\n+\t\t\"extended statistics\",\n\nIt's used in an elog for cache lookup failures, so maybe it should be\nbackported?\n\ncheers ./daniel",
"msg_date": "Fri, 13 Nov 2020 12:43:41 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Misc typos"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 12:43:41PM +0100, Daniel Gustafsson wrote:\n> When spellchecking one of my patches for submission, a few other typos fell out\n> as per the attached. The one change which isn't in a comment is the object\n> address class description used for error messages:\n> \n> -\t\t\"extented statistics\",\n> +\t\t\"extended statistics\",\n> \n> It's used in an elog for cache lookup failures, so maybe it should be\n> backported?\n\nThis is new as of HEAD (b1d32d3), so applied there. Thanks!\n--\nMichael",
"msg_date": "Sat, 14 Nov 2020 11:43:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Misc typos"
}
] |
[
{
"msg_contents": "I saw a bunch of these in my logs:\n\nlog_time | 2020-10-25 22:59:45.619-07\ndatabase | \nleft | could not open relation with OID 292103095\nleft | processing work entry for relation \"ts.child.alarms_202010_alarm_clear_time_idx\"\n\nThose happen following a REINDEX job on that index.\n\nI think that should be more like an INFO message, since that's what vacuum does\n(vacuum_open_relation), and a queued work item is even more likely to hit a\ndropped relation. It's easy to hit by setting autovacuum_naptime=1 and looping\naround REINDEX CONCURRENTLY while updating a table.\n\nAutovacuum is queueing work items for later:\n\nsrc/backend/postmaster/autovacuum.c- switch (workitem->avw_type)\nsrc/backend/postmaster/autovacuum.c- {\nsrc/backend/postmaster/autovacuum.c- case AVW_BRINSummarizeRange:\nsrc/backend/postmaster/autovacuum.c: DirectFunctionCall2(brin_summarize_range,\nsrc/backend/postmaster/autovacuum.c- ObjectIdGetDatum(workitem->avw_relation),\nsrc/backend/postmaster/autovacuum.c- Int64GetDatum((int64) workitem->avw_blockNumber));\nsrc/backend/postmaster/autovacuum.c- break;\n\nAnd if the index is missing:\n\nbrin_summarize_range(PG_FUNCTION_ARGS)\n| /*\n| * We must lock table before index to avoid deadlocks. However, if the\n| * passed indexoid isn't an index then IndexGetRelation() will fail.\n| * Rather than emitting a not-very-helpful error message, postpone\n| * complaining, expecting that the is-it-an-index test below will fail.\n| */\n| heapoid = IndexGetRelation(indexoid, true);\n| if (OidIsValid(heapoid))\n| heapRel = table_open(heapoid, ShareUpdateExclusiveLock);\n| else\n| heapRel = NULL;\n|\n| indexRel = index_open(indexoid, ShareUpdateExclusiveLock);\n\ntable_open is succcessful and then index_open fails. (I thought locks would\nhave prevented that ?)\n\nJustin\n\n\n",
"msg_date": "Fri, 13 Nov 2020 10:00:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "error_severity of brin work item"
},
{
"msg_contents": "On 2020-Nov-13, Justin Pryzby wrote:\n\n> I saw a bunch of these in my logs:\n> \n> log_time | 2020-10-25 22:59:45.619-07\n> database | \n> left | could not open relation with OID 292103095\n> left | processing work entry for relation \"ts.child.alarms_202010_alarm_clear_time_idx\"\n> \n> Those happen following a REINDEX job on that index.\n> \n> I think that should be more like an INFO message, since that's what vacuum does\n> (vacuum_open_relation), and a queued work item is even more likely to hit a\n> dropped relation.\n\nAh, interesting. Yeah, I agree this is a bug. I think it can be fixed\nby using try_relation_open() on the index; if that returns NULL, discard\nthe work item.\n\nDoes this patch solve the problem?",
"msg_date": "Fri, 13 Nov 2020 13:39:31 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 01:39:31PM -0300, Alvaro Herrera wrote:\n> On 2020-Nov-13, Justin Pryzby wrote:\n> \n> > I saw a bunch of these in my logs:\n> > \n> > log_time | 2020-10-25 22:59:45.619-07\n> > database | \n> > left | could not open relation with OID 292103095\n> > left | processing work entry for relation \"ts.child.alarms_202010_alarm_clear_time_idx\"\n> > \n> > Those happen following a REINDEX job on that index.\n> > \n> > I think that should be more like an INFO message, since that's what vacuum does\n> > (vacuum_open_relation), and a queued work item is even more likely to hit a\n> > dropped relation.\n> \n> Ah, interesting. Yeah, I agree this is a bug. I think it can be fixed\n> by using try_relation_open() on the index; if that returns NULL, discard\n> the work item.\n> \n> Does this patch solve the problem?\n\nYour patch didn't actually say \"try_relation_open\", so didn't work.\nBut it does works if I do that, and close the table.\n\nI tested like:\n\npryzbyj=# ALTER SYSTEM SET backtrace_functions='try_relation_open,relation_open';\npryzbyj=# ALTER SYSTEM SET autovacuum_naptime=3; SELECT pg_reload_conf();\npryzbyj=# CREATE TABLE tt AS SELECT generate_series(1,9999)i;\npryzbyj=# CREATE INDEX ON tt USING brin(i) WITH(autosummarize,pages_per_range=1);\npryzbyj=# \\! 
while :; do psql -h /tmp -qc 'SET client_min_messages=info' -c 'REINDEX INDEX CONCURRENTLY tt_i_idx'; done&\n\n-- run this 5-10 times and hit the \"...was not recorded\" message, which for\n-- whatever reason causes the race condition involving work queue\npryzbyj=# UPDATE tt SET i=1+i;\n\n2020-11-13 11:50:46.093 CST [30687] ERROR: could not open relation with OID 1110882\n2020-11-13 11:50:46.093 CST [30687] CONTEXT: processing work entry for relation \"pryzbyj.public.tt_i_idx\"\n2020-11-13 11:50:46.093 CST [30687] BACKTRACE:\n postgres: autovacuum worker pryzbyj(+0xb9ce8) [0x55acf2af0ce8]\n postgres: autovacuum worker pryzbyj(index_open+0xb) [0x55acf2bab59b]\n postgres: autovacuum worker pryzbyj(brin_summarize_range+0x8f) [0x55acf2b5b5bf]\n postgres: autovacuum worker pryzbyj(DirectFunctionCall2Coll+0x62) [0x55acf2f40372]\n ...\n\n-- \nJustin",
"msg_date": "Fri, 13 Nov 2020 12:11:21 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 12:11:21PM -0600, Justin Pryzby wrote:\n> On Fri, Nov 13, 2020 at 01:39:31PM -0300, Alvaro Herrera wrote:\n> > On 2020-Nov-13, Justin Pryzby wrote:\n> > \n> > > I saw a bunch of these in my logs:\n> > > \n> > > log_time | 2020-10-25 22:59:45.619-07\n> > > database | \n> > > left | could not open relation with OID 292103095\n> > > left | processing work entry for relation \"ts.child.alarms_202010_alarm_clear_time_idx\"\n> > > \n> > > Those happen following a REINDEX job on that index.\n> > > \n> > > I think that should be more like an INFO message, since that's what vacuum does\n> > > (vacuum_open_relation), and a queued work item is even more likely to hit a\n> > > dropped relation.\n> > \n> > Ah, interesting. Yeah, I agree this is a bug. I think it can be fixed\n> > by using try_relation_open() on the index; if that returns NULL, discard\n> > the work item.\n> > \n> > Does this patch solve the problem?\n> \n> Your patch didn't actually say \"try_relation_open\", so didn't work.\n> But it does works if I do that, and close the table.\n\nThat patch broke the case that a non-index is passed, which I addressed here.\n\nI wondered if the function should return NULL in those cases, but it seems to\nbe \"impossible\".",
"msg_date": "Thu, 19 Nov 2020 11:38:48 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On 2020-Nov-19, Justin Pryzby wrote:\n\n> On Fri, Nov 13, 2020 at 12:11:21PM -0600, Justin Pryzby wrote:\n\n> > Your patch didn't actually say \"try_relation_open\", so didn't work.\n> > But it does works if I do that, and close the table.\n\nThanks for fixing and testing.\n\n> That patch broke the case that a non-index is passed, which I addressed here.\n\nHmm, I think the reaction to that should be the same as before, so\nrather than return 0, the patch should raise the same error that\nindex_open() would.\n\n\n\n",
"msg_date": "Thu, 19 Nov 2020 15:15:21 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 03:15:21PM -0300, Alvaro Herrera wrote:\n> On 2020-Nov-19, Justin Pryzby wrote:\n> \n> > On Fri, Nov 13, 2020 at 12:11:21PM -0600, Justin Pryzby wrote:\n> \n> > > Your patch didn't actually say \"try_relation_open\", so didn't work.\n> > > But it does works if I do that, and close the table.\n> \n> Thanks for fixing and testing.\n> \n> > That patch broke the case that a non-index is passed, which I addressed here.\n> \n> Hmm, I think the reaction to that should be the same as before, so\n> rather than return 0, the patch should raise the same error that\n> index_open() would.\n\nThe resulting logic is not very clear and requires a lot of commentary..\n\nBTW I saw that in tablecmds.c, RangeVarCallbackForAttachIndex() does this:\n\n if (classform->relkind != RELKIND_PARTITIONED_INDEX &&\n classform->relkind != RELKIND_INDEX)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n errmsg(\"\\\"%s\\\" is not an index\", rv->relname)));\n\nIs it wrong to use ERRCODE_INVALID_OBJECT_DEFINITION ?\nMost other places say ERRCODE_WRONG_OBJECT_TYPE\n\nLikewise, transformPartitionCmd() in parse_utilcmd.c:\n\t\t(errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n\t\t errmsg(\"\\\"%s\\\" is not a partitioned table\",\n\t\t\t\tRelationGetRelationName(parentRel))));\n\n-- \nJustin",
"msg_date": "Thu, 19 Nov 2020 18:51:51 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "I think this formulation (attached v3) has fewer moving parts.\n\nHowever, now that I did that, I wonder if this is really the best\napproach to solve this problem. Maybe instead of doing this at the BRIN\nlevel, it should be handled at the autovac level, by having the worker\ncopy the work-item to local memory and remove it from the shared list as\nsoon as it is in progress. That way, if *any* error occurs while trying\nto execute it, it will go away instead of being retried for all\neternity.\n\nPreliminary patch for that attached as autovacuum-workitem.patch.\n\nI would propose to clean that up to apply instead of your proposed fix.",
"msg_date": "Mon, 23 Nov 2020 16:39:57 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 04:39:57PM -0300, Alvaro Herrera wrote:\n> I think this formulation (attached v3) has fewer moving parts.\n> \n> However, now that I did that, I wonder if this is really the best\n> approach to solve this problem. Maybe instead of doing this at the BRIN\n> level, it should be handled at the autovac level, by having the worker\n> copy the work-item to local memory and remove it from the shared list as\n> soon as it is in progress. That way, if *any* error occurs while trying\n> to execute it, it will go away instead of being retried for all\n> eternity.\n> \n> Preliminary patch for that attached as autovacuum-workitem.patch.\n> \n> I would propose to clean that up to apply instead of your proposed fix.\n\nI don't know why you said it would be retried for eternity ?\nI think the perform_work_item() TRY/CATCH avoids that.\n\nAlso .. I think your idea doesn't solve my issue, that REINDEX CONCURRENTLY is\ncausing vacuum to leave errors in my logs.\n\nI check that the first patch avoids the issue and the 2nd one does not.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 Nov 2020 11:23:56 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "The more I look at this, the less I like it. This would set a precedent\nthat any action that can be initiated from an autovac work-item has a\nrequirement of silently being discarded when it referenced a\nnon-existant relation.\n\nI'd rather have the code that drops the index go through the list of\nwork-items and delete those that reference that relation.\n\nI'm not sure if this is something that ought to be done in index_drop();\nOne objection might be that if the drop is rolled back, the work-items\nare lost. It's the easiest, though; and work-items are supposed to be\nlossy anyway, and vacuum would fix the lack of summarization eventually.\nSo, not pretty, but not all that bad. (Hopefully rolled-back drops are\nnot all that common.)\n\n\n",
"msg_date": "Mon, 30 Nov 2020 20:47:32 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 08:47:32PM -0300, Alvaro Herrera wrote:\n> The more I look at this, the less I like it. This would set a precedent\n> that any action that can be initiated from an autovac work-item has a\n> requirement of silently being discarded when it referenced a\n> non-existant relation.\n\nMy original request was to change to INFO, which is what vacuum proper does at\nvacuum_open_relation(). I realize that still means that new work item handlers\nwould have to do the LockOid, try_open dance - maybe it could be factored out.\n\nI noticed this on multiple servers immediate after changing a nagios script to\nlook for all ERRORs (rather than a specific list of log_messages) and for a\nlonger time period.\n\n> I'd rather have the code that drops the index go through the list of\n> work-items and delete those that reference that relation.\n> \n> I'm not sure if this is something that ought to be done in index_drop();\n> One objection might be that if the drop is rolled back, the work-items\n> are lost.\n\nShould it be done in an AtCommit hook or something like that ?\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 30 Nov 2020 18:20:56 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On 2020-Nov-30, Justin Pryzby wrote:\n\n> On Mon, Nov 30, 2020 at 08:47:32PM -0300, Alvaro Herrera wrote:\n> > The more I look at this, the less I like it. This would set a precedent\n> > that any action that can be initiated from an autovac work-item has a\n> > requirement of silently being discarded when it referenced a\n> > non-existant relation.\n> \n> My original request was to change to INFO, which is what vacuum proper does at\n> vacuum_open_relation(). I realize that still means that new work item handlers\n> would have to do the LockOid, try_open dance - maybe it could be factored out.\n\nAs I understand, INFO is not well suited to messages that are not going\nto the client. Anyway, my point is about the contortions that are\nneeded to support the case, rather than what exactly do we do when it\nhappens.\n\nRetrospectively, it's strange that this problem (what happens when\nindexes with pending work-items are dropped) hadn't manifested. It\nseems a pretty obvious one.\n\n> > I'd rather have the code that drops the index go through the list of\n> > work-items and delete those that reference that relation.\n> > \n> > I'm not sure if this is something that ought to be done in index_drop();\n> > One objection might be that if the drop is rolled back, the work-items\n> > are lost.\n> \n> Should it be done in an AtCommit hook or something like that ?\n\nI didn't like this idea much on first read, on extensibility grounds,\nbut perhaps it's not so bad because we can generalize it whenever\nthere's pressure to add a second type of work-item (*if* that ever\nhappens).\n\nI suppose the idea is that index_drop saves the index OID when a BRIN\nindex with autosummarization=on is dropped, and then the\nAtCommit_WorkItems() call scans the items list and drops those that\nmatch any OIDs in the list. (It does require to be mindful of subxact\naborts, of course.)\n\n\n",
"msg_date": "Tue, 1 Dec 2020 11:07:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On Tue, Dec 01, 2020 at 11:07:30AM -0300, Alvaro Herrera wrote:\n> > Should it be done in an AtCommit hook or something like that ?\n> \n> I didn't like this idea much on first read, on extensibility grounds,\n> but perhaps it's not so bad because we can generalize it whenever\n> there's pressure to add a second type of work-item (*if* that ever\n> happens).\n> \n> I suppose the idea is that index_drop saves the index OID when a BRIN\n> index with autosummarization=on is dropped, and then the\n> AtCommit_WorkItems() call scans the items list and drops those that\n> match any OIDs in the list. (It does require to be mindful of subxact\n> aborts, of course.)\n\nThis was an idea I made up - I don't know any of the details of this, but if\nyou give a hint I could look at it more. There'd (still) be a race window, but\nI think that's ok.\n\nAnother idea is if perform_work_item() were responsible for discarding\nrelations which disappear. Currently it does this, which is racy since it\nholds no lock.\n\n cur_relname = get_rel_name(workitem->avw_relation);\n cur_nspname = get_namespace_name(get_rel_namespace(workitem->avw_relation));\n cur_datname = get_database_name(MyDatabaseId);\n if (!cur_relname || !cur_nspname || !cur_datname)\n goto deleted2;\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 1 Dec 2020 12:46:41 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On 2020-Dec-01, Justin Pryzby wrote:\n\n> This was an idea I made up - I don't know any of the details of this, but if\n> you give a hint I could look at it more. There'd (still) be a race window, but\n> I think that's ok.\n\nSee CommitTransaction() and friends, where AtEOXact_on_commit_actions()\nand others are called. You'd have to create a new routine (say\nAtEOXact_Autovacuum or more specific AtEOXact_AutovacuumWorkItems), to\nbe called at the right places in xact.c. Keep a global variable, say a\nlist of OIDs. On subxact commit, the list is reassigned to its parent\ntransaction; on subxact abort, the list is discarded. On top xact\ncommit, the list of OIDs is passed to some new routine in autovacuum.c\nthat scans the workitem array and deletes items as appropriate.\n\nNot sure what's a good place for OIDs to be added to the list. We don't\nhave AM-specific entry points for relation drop. I think this is the\nweakest point of this.\n\n\n> Another idea is if perform_work_item() were responsible for discarding\n> relations which disappear. Currently it does this, which is racy since it\n> holds no lock.\n\nThat has the property that it remains contained in autovacuum.c, but no\nother advantages I think.\n\n\n",
"msg_date": "Tue, 1 Dec 2020 15:57:24 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On Tue, Dec 01, 2020 at 03:57:24PM -0300, Alvaro Herrera wrote:\n> On 2020-Dec-01, Justin Pryzby wrote:\n> \n> > This was an idea I made up - I don't know any of the details of this, but if\n> > you give a hint I could look at it more. There'd (still) be a race window, but\n> > I think that's ok.\n> \n> See CommitTransaction() and friends, where AtEOXact_on_commit_actions()\n\n..thanks. I was going to persue the other idea first since this is new to me.\n\n> Not sure what's a good place for OIDs to be added to the list. We don't\n> have AM-specific entry points for relation drop. I think this is the\n> weakest point of this.\n\nI assume it would just add OIDs of *all* dropped rels, and the autovacuum half\nwould silently ignore any OID for which there's no work item. (As it would do\nin any case).\n\n> > Another idea is if perform_work_item() were responsible for discarding\n> > relations which disappear. Currently it does this, which is racy since it\n> > holds no lock.\n> \n> That has the property that it remains contained in autovacuum.c, but no\n> other advantages I think.\n\nIt has the advantage that it moves all the try_open stuff out of brin.\n\nI started implementing this, and then realized that the try_open stuff *has* to\nbe in the brin_summarize function, to handle the case that someone passes a\nnon-index, since it's SQL exposed.\nSo maybe we should use your LockOid patch now, and refactor in the future if we\nadd additional work-item types.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 1 Dec 2020 16:25:34 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On 12/1/20 5:25 PM, Justin Pryzby wrote:\n> On Tue, Dec 01, 2020 at 03:57:24PM -0300, Alvaro Herrera wrote:\n> \n>>> Another idea is if perform_work_item() were responsible for discarding\n>>> relations which disappear. Currently it does this, which is racy since it\n>>> holds no lock.\n>>\n>> That has the property that it remains contained in autovacuum.c, but no\n>> other advantages I think.\n> \n> It has the advantage that it moves all the try_open stuff out of brin.\n> \n> I started implementing this, and then realized that the try_open stuff *has* to\n> be in the brin_summarize function, to handle the case that someone passes a\n> non-index, since it's SQL exposed.\n> So maybe we should use your LockOid patch now, and refactor in the future if we\n> add additional work-item types.\n\nThoughts on this, Álvaro? I can see that the first version of this patch \nwas not ideal but the rework seems to have stalled. Since it is a bug \nperhaps it would be better to get something in as Justin suggests?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 10 Mar 2021 08:24:50 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 08:24:50AM -0500, David Steele wrote:\n> On 12/1/20 5:25 PM, Justin Pryzby wrote:\n> > On Tue, Dec 01, 2020 at 03:57:24PM -0300, Alvaro Herrera wrote:\n> > \n> > > > Another idea is if perform_work_item() were responsible for discarding\n> > > > relations which disappear. Currently it does this, which is racy since it\n> > > > holds no lock.\n> > > \n> > > That has the property that it remains contained in autovacuum.c, but no\n> > > other advantages I think.\n> > \n> > It has the advantage that it moves all the try_open stuff out of brin.\n> > \n> > I started implementing this, and then realized that the try_open stuff *has* to\n> > be in the brin_summarize function, to handle the case that someone passes a\n> > non-index, since it's SQL exposed.\n> > So maybe we should use your LockOid patch now, and refactor in the future if we\n> > add additional work-item types.\n> \n> Thoughts on this, �lvaro? I can see that the first version of this patch was\n> not ideal but the rework seems to have stalled. Since it is a bug perhaps it\n> would be better to get something in as Justin suggests?\n> \n\nHi �lvaro,\n\nDo you plan to work on this for this CF?\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Fri, 24 Sep 2021 03:36:16 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On 2021-Sep-24, Jaime Casanova wrote:\n\n> Do you plan to work on this for this CF?\n\nI think the most reasonable action is to push the patch in\nhttps://postgr.es/m/20201123193957.GA21810@alvherre.pgsql to all\nbranches, closing the immediate hole, and we can see about the xact-hook\nstuff (when we have it) to master only, which I view as a better\nmechanism to protect work-items going forward. That's what I understand\nJustin was suggesting in his last reply.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"La fuerza no está en los medios físicos\nsino que reside en una voluntad indomable\" (Gandhi)\n\n\n",
"msg_date": "Fri, 24 Sep 2021 10:42:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: error_severity of brin work item"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 10:42:36AM -0300, Alvaro Herrera wrote:\n> I think the most reasonable action is to push the patch in\n> https://postgr.es/m/20201123193957.GA21810@alvherre.pgsql to all\n> branches, closing the immediate hole, and we can see about the xact-hook\n> stuff (when we have it) to master only, which I view as a better\n> mechanism to protect work-items going forward. That's what I understand\n> Justin was suggesting in his last reply.\n\nI suggested to use the 20 line solution rather than invent new infrastructure\njust to avoid corner case errors.\n\nBut I also don't understand why changing to use an INFO message doesn't work.\nvacuum_open_relation() outputs a WARNING for manual vacuum and an INFO for\nautovacuum when log_min_duration >= 0. Why couldn't this do the same thing?\n\nI found a few instances of this in my logs, indicating that my understanding of\nthe logic is not wrong.\n\nlog_time | 2021-01-13 15:23:40.687-05\nsession_line | 2\nsession_start_time | 2021-01-13 15:23:19-05\nerror_severity | LOG\nsql_state_code | 42P01\nmessage | skipping vacuum of \"pg_toast_984781820\" --- relation no longer exists\nbackend_type | autovacuum worker\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 24 Sep 2021 09:22:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: error_severity of brin work item"
},
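Justin's description of vacuum_open_relation()'s behavior — a WARNING for a manual VACUUM, a log-level message for autovacuum only when log_min_duration >= 0 — can be modeled with a short sketch. This is a hedged Python paraphrase of the logic discussed in the thread, not the actual C code; the function name and return values are illustrative only.

```python
def missing_relation_elevel(is_autovacuum, log_min_duration):
    """Model of the severity selection discussed above, for a relation
    that disappeared before it could be vacuumed (illustrative only)."""
    if not is_autovacuum:
        return "WARNING"   # interactive VACUUM: warn the user directly
    if log_min_duration >= 0:
        return "LOG"       # autovacuum with logging enabled, matching the
                           # log sample above (error_severity | LOG)
    return None            # autovacuum with logging disabled: say nothing
```

The log sample Justin quotes (an autovacuum worker emitting "skipping vacuum ... relation no longer exists" at LOG severity) corresponds to the second branch.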
{
"msg_contents": "This thread has stalled, and from the discussions there isn't consencus on how\nto proceed (if at all?), so I'm marking this Returned with Feedback. If the\ndiscussion and work picks up again then it can be resubmitted.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 2 Dec 2021 11:59:03 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: error_severity of brin work item"
}
] |
[
{
"msg_contents": "Hi,\n\nI got somewhat scared when my explain took a few seconds to complete and \nused a few gigs of RAM.\nTo reproduce try the following:\n\ndiscard temp;\ncreate temp table a as select to_timestamp(generate_series(1, 7000)) i;\nanalyze a;\nset work_mem to '3GB';\nexplain select distinct a1.i - a2.i from a a1, a a2;\n\nI would appreciate if someone could have a look at the patch attached, \nwhich makes executor skip initializing hash tables when doing explain only.\n\nBest, Alex",
"msg_date": "Fri, 13 Nov 2020 16:10:30 +0000",
"msg_from": "Alexey Bashtanov <bashtanov@imap.cc>",
"msg_from_op": true,
"msg_subject": "don't allocate HashAgg hash tables when running explain only"
},
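The fix Alexey describes follows a common executor pattern: node initialization checks an explain-only flag before doing expensive setup, since a plain EXPLAIN (without ANALYZE) never executes the plan. A minimal Python sketch of that pattern — the flag name follows PostgreSQL's EXEC_FLAG_EXPLAIN_ONLY convention, but the class and sizes here are made up for illustration, not the committed C change:

```python
EXEC_FLAG_EXPLAIN_ONLY = 0x0001   # stands in for the real executor flag

class HashAggState:
    """Illustrative stand-in for the executor node state: the hash table
    is only built when the plan will actually be executed."""
    def __init__(self, eflags, nbuckets):
        self.hash_table = None
        if not (eflags & EXEC_FLAG_EXPLAIN_ONLY):
            # Plain EXPLAIN never runs the node, so this (potentially
            # multi-gigabyte, per the repro above) allocation is skipped.
            self.hash_table = [None] * nbuckets

explain_node = HashAggState(EXEC_FLAG_EXPLAIN_ONLY, 1_000_000)
executed_node = HashAggState(0, 8)
```

With the flag set, initialization is nearly free regardless of the planned hash-table size, which is exactly the difference in EXPLAIN runtime the thread observes.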
{
"msg_contents": "On 13/11/2020 18:10, Alexey Bashtanov wrote:\n> Hi,\n> \n> I got somewhat scared when my explain took a few seconds to complete and\n> used a few gigs of RAM.\n> To reproduce try the following:\n> \n> discard temp;\n> create temp table a as select to_timestamp(generate_series(1, 7000)) i;\n> analyze a;\n> set work_mem to '3GB';\n> explain select distinct a1.i - a2.i from a a1, a a2;\n> \n> I would appreciate if someone could have a look at the patch attached,\n> which makes executor skip initializing hash tables when doing explain only.\n\nMakes sense. Committed, thanks for the patch!\n\n- Heikki\n\n\n",
"msg_date": "Wed, 18 Nov 2020 12:40:24 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: don't allocate HashAgg hash tables when running explain only"
},
{
"msg_contents": "On Wed, 18 Nov 2020 at 05:40, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 13/11/2020 18:10, Alexey Bashtanov wrote:\n>> > I would appreciate if someone could have a look at the patch attached,\n> > which makes executor skip initializing hash tables when doing explain only.\n>\n> Makes sense. Committed, thanks for the patch!\n\n\nEgads. That seems like a backpatchable bug fix to me. Have we been\ndoing this all along?!\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 19 Nov 2020 00:20:15 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: don't allocate HashAgg hash tables when running explain only"
},
{
"msg_contents": "On 19/11/2020 07:20, Greg Stark wrote:\n> On Wed, 18 Nov 2020 at 05:40, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>> On 13/11/2020 18:10, Alexey Bashtanov wrote:\n>>>> I would appreciate if someone could have a look at the patch attached,\n>>> which makes executor skip initializing hash tables when doing explain only.\n>>\n>> Makes sense. Committed, thanks for the patch!\n> \n> Egads. That seems like a backpatchable bug fix to me. Have we been\n> doing this all along?!\n\nYeah, I believe it's always been like that. Yeah, arguably it should be \nbackpatched. I felt conservative and didn't backpatch, but feel free to \ndo it if you think it should be.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 19 Nov 2020 08:47:51 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: don't allocate HashAgg hash tables when running explain only"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 08:47:51AM +0200, Heikki Linnakangas wrote:\n> Yeah, I believe it's always been like that. Yeah, arguably it should be\n> backpatched. I felt conservative and didn't backpatch, but feel free to do\n> it if you think it should be.\n\n+1 for a backpatch. The difference in runtime for EXPLAIN in this case\nis quite something.\n--\nMichael",
"msg_date": "Fri, 20 Nov 2020 15:31:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: don't allocate HashAgg hash tables when running explain only"
},
{
"msg_contents": "On 20/11/2020 08:31, Michael Paquier wrote:\n> On Thu, Nov 19, 2020 at 08:47:51AM +0200, Heikki Linnakangas wrote:\n>> Yeah, I believe it's always been like that. Yeah, arguably it should be\n>> backpatched. I felt conservative and didn't backpatch, but feel free to do\n>> it if you think it should be.\n> \n> +1 for a backpatch. The difference in runtime for EXPLAIN in this case\n> is quite something.\n\nThat's two votes for backpatching. Ok, I'll go do it.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 20 Nov 2020 15:30:53 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: don't allocate HashAgg hash tables when running explain only"
}
] |
[
{
"msg_contents": "Hello,\n\nThis patch introduces a set of changes to the table AM APIs, making them\naccept a column projection list. That helps columnar table AMs, so that\nthey don't need to fetch all columns from disk, but only the ones\nactually needed.\n\nThe set of changes in this patch is not exhaustive -\nthere are many more opportunities that are discussed in the TODO section\nbelow. Before digging deeper, we want to elicit early feedback on the\nAPI changes and the column extraction logic.\n\nTableAM APIs that have been modified are:\n\n1. Sequential scan APIs\n2. Index scan APIs\n3. API to lock and return a row\n4. API to fetch a single row\n\nWe have seen performance benefits in Zedstore for many of the optimized\noperations [0]. This patch is extracted from the larger patch shared in\n[0].\n\n------------------------------------------------------------------------\nBuilding the column projection set:\n\nIn terms of building the column projection set necessary for each of\nthese APIs, this patch builds off of the scanCols patch [1], which\nAshwin and Melanie had started earlier. As noted in [1], there are cases\nwhere the scanCols set is not representative of the columns to be\nprojected. For instance, in a DELETE .. RETURNING query, there is\ntypically a sequential scan and a separate invocation of\ntuple_fetch_row_version() in order to satisfy the RETURNING clause (see\nExecDelete()). So for a query such as:\n\n DELETE from foo WHERE i < 100 && j < 1000 RETURNING k, l;\n\nWe need to pass the set (i, j) to the scan and (k, l) to the\ntuple_fetch_row_version() invocation. 
This is why we had to introduce\nthe returningCols field.\n\nIn the same spirit, separate column projection sets are computed for any\noperations that involve an EPQ check (INSERT, DELETE, UPDATE, row-level\nlocking etc), the columns involved in an ON CONFLICT UPDATE etc.\n\nRecognizing and collecting these sets of columns is done at various\nstages: analyze and rewrite, planner and executor - depending on the\ntype of operation for which the subset of columns is calculated. The\ncolumn bitmaps are stored in different places as well - such as the ones\nfor scans and RETURNING are stored in RangeTblEntry, whereas the set of\ncolumns for ON CONFLICT UPDATE are stored in OnConflictSetState.\n\n------------------------------------------------------------------------\nTable AM API changes:\n\nThe changes made to the table AM API, introducing the column projection\nset, come in different flavors. We would like feedback on what style\nwe need to converge to or if we should use different styles depending\non the situation.\n\n- A new function variant that takes a column projection list, such as:\n\n TableScanDesc (*scan_begin) (Relation rel,\n Snapshot snapshot,\n int nkeys, struct ScanKeyData *key,\n ParallelTableScanDesc pscan,\n uint32 flags);\n->\n\n TableScanDesc (*scan_begin_with_column_projection)(Relation relation,\n Snapshot snapshot,\n int nkeys, struct ScanKeyData *key,\n ParallelTableScanDesc parallel_scan,\n uint32 flags,\n Bitmapset *project_columns);\n\n- Modifying the existing function to take a column projection list, such\nas:\n\n TM_Result (*tuple_lock) (Relation rel,\n ItemPointer tid,\n Snapshot snapshot,\n TupleTableSlot *slot,\n CommandId cid,\n LockTupleMode mode,\n LockWaitPolicy wait_policy,\n uint8 flags,\n TM_FailureData *tmfd);\n\n->\n\n TM_Result (*tuple_lock) (Relation rel,\n ItemPointer tid,\n Snapshot snapshot,\n TupleTableSlot *slot,\n CommandId cid,\n LockTupleMode mode,\n LockWaitPolicy wait_policy,\n uint8 flags,\n TM_FailureData 
*tmfd,\n Bitmapset *project_cols);\n\n- A new function index_fetch_set_column_projection() to be called after\nindex_beginscan() to set the column projection set, which will be used\nlater by index_getnext_slot().\n\n void (*index_fetch_set_column_projection) (struct IndexFetchTableData *data,\n Bitmapset *project_columns);\n\nThe set of columns expected by the new/modified functions is represented\nas a Bitmapset of attnums for a specific base relation. An empty/NULL\nbitmap signals to the AM that no data columns are needed. A bitmap\ncontaining the single element 0 indicates that we want all data columns\nto be fetched.\n\nThe bitmaps do not include system columns.\n\nAdditionally, the TupleTableSlots populated by functions such\nas table_scan_getnextslot(), need to be densely filled upto the highest\nnumbered column in the projection list (any column not in the projection\nlist should be populated with NULL). This is due to the implicit\nassumptions of the slot_get_***() APIs.\n\n------------------------------------------------------------------------\nTODOs:\n\n- Explore opportunities to push the column extraction logic to the\nplanner or pre-planner stages from the executor stage (like scanCols and\nreturningCols), or at least elevate the column extraction logic to be\ndone once per executor run instead of once per tuple.\n\n- As was requested in [1], we should guard column projection set\nextraction logic with a table_scans_leverage_column_projection() call.\nWe wouldn't want a non-columnar AM to incur the overhead.\n\n- Standardize the table AM API for passing columns.\n\n- The optimization for DELETE RETURNING does not currently work for\nviews. 
We have to populate the list of columns for the base relation\nbeneath the view properly.\n\n- Currently the benefit of passing in an empty projection set for ON\nCONFLICT DO UPDATE (UPSERT) and ON CONFLICT DO NOTHING (see\nExecCheckTIDVisible()) is masked by a preceding call to\ncheck_exclusion_or_unique_constraint() which has not yet been modified\nto pass a column projection list to the index scan.\n\n- Compute scanCols earlier than set_base_rel_sizes() and use that\ninformation to produce better relation size estimates (relation size\nwill depend on the number of columns projected) in the planner.\nEssentially, we need to absorb the work done by Pengzhou [2].\n\n- Right now, we do not extract a set of columns for the call to\ntable_tuple_lock() within GetTupleForTrigger() as it may be hard to\ndetermine the list of columns used in a trigger body [3].\n\n- validateForeignKeyConstraint() should only need to fetch the\nforeign key column.\n\n- List of index scan callsites that will benefit from calling\nindex_fetch_set_column_projection():\n\n -- table_index_fetch_tuple_check() does not need to fetch any\n columns (we have to pass an empty column bitmap), fetching the tid\n should be enough.\n\n -- unique_key_recheck() performs a liveness check for which we do\n not need to fetch any columns (we have to pass an empty column\n bitmap)\n\n -- check_exclusion_or_unique_constraint() needs to only fetch the\n columns that are part of the exclusion or unique constraint.\n\n -- IndexNextWithReorder() needs to only fetch columns being\n projected along with columns in the index qual and columns in the\n ORDER BY clause.\n\n -- get_actual_variable_endpoint() only performs visibility checks,\n so we don't need to fetch any columns (we have to pass an empty\n column projection bitmap)\n\n- BitmapHeapScans can benefit from a column projection list the same\nway as an IndexScan and SeqScan can. We can possibly pass down scanCols\nin ExecInitBitmapHeapScan(). 
We would have to modify the BitmapHeapScan\ntable AM calls to take a column projection bitmap.\n\n- There may be more callsites where we can pass a column projection list.\n\nRegards,\n\nSoumyadeep & Jacob\n\n[0] https://www.postgresql.org/message-id/CAE-ML%2B-HwY4X4uTzBesLhOotHF7rUvP2Ur-rvEpqz2PUgK4K3g%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/flat/CAAKRu_Yj%3DQ_ZxiGX%2BpgstNWMbUJApEJX-imvAEwryCk5SLUebg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAG4reAQc9vYdmQXh%3D1D789x8XJ%3DgEkV%2BE%2BfT9%2Bs9tOWDXX3L9Q%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/23194.1560618101%40sss.pgh.pa.us",
"msg_date": "Fri, 13 Nov 2020 10:01:22 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Table AM modifications to accept column projection lists"
},
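The projection-bitmap conventions spelled out above — a NULL/empty set means no data columns, a set containing 0 means all columns, and slots must be filled densely up to the highest projected attnum with NULL elsewhere — can be sketched as follows. This is a hedged Python model of the proposed semantics, not the patch itself; the function name and tuple representation are invented for illustration.

```python
def project_row(row, project_cols):
    """Model of the proposed projection semantics.

    row          -- full tuple as a list, where index 0 holds attnum 1
    project_cols -- set of attnums; None/empty => no data columns,
                    a set containing 0 (whole-row marker) => all columns
    Returns a slot filled densely up to the highest projected attnum,
    with non-projected positions set to None (NULL), mirroring what the
    slot_get_***() APIs implicitly assume.
    """
    if not project_cols:
        return []                      # no data columns needed
    if 0 in project_cols:
        return list(row)               # whole-row request
    width = max(project_cols)          # fill up to the highest attnum
    return [row[a - 1] if a in project_cols else None
            for a in range(1, width + 1)]

# DELETE ... RETURNING k, l over a table (i, j, k, l): the scan needs
# only (i, j) while the RETURNING fetch needs only (k, l).
full_row = ["i_val", "j_val", "k_val", "l_val"]
scan_slot = project_row(full_row, {1, 2})       # -> ['i_val', 'j_val']
returning_slot = project_row(full_row, {3, 4})  # -> [None, None, 'k_val', 'l_val']
```

The two demo calls model the scanCols/returningCols split described for DELETE ... RETURNING: two separate column sets feed two separate table AM calls.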
{
"msg_contents": "Hi Soumyadeep,\n\nOn Sat, Nov 14, 2020 at 3:02 AM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n>\n> Hello,\n>\n> This patch introduces a set of changes to the table AM APIs, making them\n> accept a column projection list. That helps columnar table AMs, so that\n> they don't need to fetch all columns from disk, but only the ones\n> actually needed.\n>\n> The set of changes in this patch is not exhaustive -\n> there are many more opportunities that are discussed in the TODO section\n> below. Before digging deeper, we want to elicit early feedback on the\n> API changes and the column extraction logic.\n>\n> TableAM APIs that have been modified are:\n>\n> 1. Sequential scan APIs\n> 2. Index scan APIs\n> 3. API to lock and return a row\n> 4. API to fetch a single row\n>\n> We have seen performance benefits in Zedstore for many of the optimized\n> operations [0]. This patch is extracted from the larger patch shared in\n> [0].\n>\n> ------------------------------------------------------------------------\n> Building the column projection set:\n>\n> In terms of building the column projection set necessary for each of\n> these APIs, this patch builds off of the scanCols patch [1], which\n> Ashwin and Melanie had started earlier. As noted in [1], there are cases\n> where the scanCols set is not representative of the columns to be\n> projected. For instance, in a DELETE .. RETURNING query, there is\n> typically a sequential scan and a separate invocation of\n> tuple_fetch_row_version() in order to satisfy the RETURNING clause (see\n> ExecDelete()). So for a query such as:\n>\n> DELETE from foo WHERE i < 100 && j < 1000 RETURNING k, l;\n>\n> We need to pass the set (i, j) to the scan and (k, l) to the\n> tuple_fetch_row_version() invocation. 
This is why we had to introduce\n> the returningCols field.\n>\n> In the same spirit, separate column projection sets are computed for any\n> operations that involve an EPQ check (INSERT, DELETE, UPDATE, row-level\n> locking etc), the columns involved in an ON CONFLICT UPDATE etc.\n>\n> Recognizing and collecting these sets of columns is done at various\n> stages: analyze and rewrite, planner and executor - depending on the\n> type of operation for which the subset of columns is calculated. The\n> column bitmaps are stored in different places as well - such as the ones\n> for scans and RETURNING are stored in RangeTblEntry, whereas the set of\n> columns for ON CONFLICT UPDATE are stored in OnConflictSetState.\n>\n> ------------------------------------------------------------------------\n> Table AM API changes:\n>\n> The changes made to the table AM API, introducing the column projection\n> set, come in different flavors. We would like feedback on what style\n> we need to converge to or if we should use different styles depending\n> on the situation.\n>\n> - A new function variant that takes a column projection list, such as:\n>\n> TableScanDesc (*scan_begin) (Relation rel,\n> Snapshot snapshot,\n> int nkeys, struct ScanKeyData *key,\n> ParallelTableScanDesc pscan,\n> uint32 flags);\n> ->\n>\n> TableScanDesc (*scan_begin_with_column_projection)(Relation relation,\n> Snapshot snapshot,\n> int nkeys, struct ScanKeyData *key,\n> ParallelTableScanDesc parallel_scan,\n> uint32 flags,\n> Bitmapset *project_columns);\n>\n> - Modifying the existing function to take a column projection list, such\n> as:\n>\n> TM_Result (*tuple_lock) (Relation rel,\n> ItemPointer tid,\n> Snapshot snapshot,\n> TupleTableSlot *slot,\n> CommandId cid,\n> LockTupleMode mode,\n> LockWaitPolicy wait_policy,\n> uint8 flags,\n> TM_FailureData *tmfd);\n>\n> ->\n>\n> TM_Result (*tuple_lock) (Relation rel,\n> ItemPointer tid,\n> Snapshot snapshot,\n> TupleTableSlot *slot,\n> CommandId cid,\n> 
LockTupleMode mode,\n> LockWaitPolicy wait_policy,\n> uint8 flags,\n> TM_FailureData *tmfd,\n> Bitmapset *project_cols);\n>\n> - A new function index_fetch_set_column_projection() to be called after\n> index_beginscan() to set the column projection set, which will be used\n> later by index_getnext_slot().\n>\n> void (*index_fetch_set_column_projection) (struct IndexFetchTableData *data,\n> Bitmapset *project_columns);\n>\n> The set of columns expected by the new/modified functions is represented\n> as a Bitmapset of attnums for a specific base relation. An empty/NULL\n> bitmap signals to the AM that no data columns are needed. A bitmap\n> containing the single element 0 indicates that we want all data columns\n> to be fetched.\n>\n> The bitmaps do not include system columns.\n>\n> Additionally, the TupleTableSlots populated by functions such\n> as table_scan_getnextslot(), need to be densely filled upto the highest\n> numbered column in the projection list (any column not in the projection\n> list should be populated with NULL). This is due to the implicit\n> assumptions of the slot_get_***() APIs.\n>\n> ------------------------------------------------------------------------\n> TODOs:\n>\n> - Explore opportunities to push the column extraction logic to the\n> planner or pre-planner stages from the executor stage (like scanCols and\n> returningCols), or at least elevate the column extraction logic to be\n> done once per executor run instead of once per tuple.\n>\n> - As was requested in [1], we should guard column projection set\n> extraction logic with a table_scans_leverage_column_projection() call.\n> We wouldn't want a non-columnar AM to incur the overhead.\n>\n> - Standardize the table AM API for passing columns.\n>\n> - The optimization for DELETE RETURNING does not currently work for\n> views. 
We have to populate the list of columns for the base relation\n> beneath the view properly.\n>\n> - Currently the benefit of passing in an empty projection set for ON\n> CONFLICT DO UPDATE (UPSERT) and ON CONFLICT DO NOTHING (see\n> ExecCheckTIDVisible()) is masked by a preceding call to\n> check_exclusion_or_unique_constraint() which has not yet been modified\n> to pass a column projection list to the index scan.\n>\n> - Compute scanCols earlier than set_base_rel_sizes() and use that\n> information to produce better relation size estimates (relation size\n> will depend on the number of columns projected) in the planner.\n> Essentially, we need to absorb the work done by Pengzhou [2].\n>\n> - Right now, we do not extract a set of columns for the call to\n> table_tuple_lock() within GetTupleForTrigger() as it may be hard to\n> determine the list of columns used in a trigger body [3].\n>\n> - validateForeignKeyConstraint() should only need to fetch the\n> foreign key column.\n>\n> - List of index scan callsites that will benefit from calling\n> index_fetch_set_column_projection():\n>\n> -- table_index_fetch_tuple_check() does not need to fetch any\n> columns (we have to pass an empty column bitmap), fetching the tid\n> should be enough.\n>\n> -- unique_key_recheck() performs a liveness check for which we do\n> not need to fetch any columns (we have to pass an empty column\n> bitmap)\n>\n> -- check_exclusion_or_unique_constraint() needs to only fetch the\n> columns that are part of the exclusion or unique constraint.\n>\n> -- IndexNextWithReorder() needs to only fetch columns being\n> projected along with columns in the index qual and columns in the\n> ORDER BY clause.\n>\n> -- get_actual_variable_endpoint() only performs visibility checks,\n> so we don't need to fetch any columns (we have to pass an empty\n> column projection bitmap)\n>\n> - BitmapHeapScans can benefit from a column projection list the same\n> way as an IndexScan and SeqScan can. 
We can possibly pass down scanCols\n> in ExecInitBitmapHeapScan(). We would have to modify the BitmapHeapScan\n> table AM calls to take a column projection bitmap.\n>\n> - There may be more callsites where we can pass a column projection list.\n>\n\nYou sent in your patch to pgsql-hackers on Nov 14, but you did not\npost it to the next CommitFest[1]. If this was intentional, then you\nneed to take no action. However, if you want your patch to be\nreviewed as part of the upcoming CommitFest, then you need to add it\nyourself and may need to rebase the patch to the current HEAD before\n2021-01-01 AOE[2]. Thanks for your contributions.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/31/\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Dec 2020 19:29:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "Hey Masahiko,\n\nI added it to the Jan CF (https://commitfest.postgresql.org/31/2922/).\n\nPFA a rebased version against latest head.\n\nRegards,\nSoumyadeep",
"msg_date": "Thu, 31 Dec 2020 13:02:24 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "Hi, Soumyadeep:\nHappy New Year.\n\n+typedef struct neededColumnContext\n+{\n+ Bitmapset **mask;\n+ int n;\n\n+ * n specifies the number of allowed entries in mask: we use\n+ * it for bounds-checking in the walker above.\n\nI think the code would be easier to read if the above comment is moved or\ncopied for field n of neededColumnContext\n\nCheers\n\nOn Thu, Dec 31, 2020 at 1:03 PM Soumyadeep Chakraborty <\nsoumyadeep2007@gmail.com> wrote:\n\n> Hey Masahiko,\n>\n> I added it to the Jan CF (https://commitfest.postgresql.org/31/2922/).\n>\n> PFA a rebased version against latest head.\n>\n> Regards,\n> Soumyadeep\n>\n\nHi, Soumyadeep:Happy New Year.+typedef struct neededColumnContext+{+ Bitmapset **mask;+ int n;+ * n specifies the number of allowed entries in mask: we use+ * it for bounds-checking in the walker above.I think the code would be easier to read if the above comment is moved or copied for field n of neededColumnContextCheersOn Thu, Dec 31, 2020 at 1:03 PM Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote:Hey Masahiko,\n\nI added it to the Jan CF (https://commitfest.postgresql.org/31/2922/).\n\nPFA a rebased version against latest head.\n\nRegards,\nSoumyadeep",
"msg_date": "Thu, 31 Dec 2020 16:34:58 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "On Thu, Dec 31, 2020 at 01:02:24PM -0800, Soumyadeep Chakraborty wrote:\n> Hey Masahiko,\n> \n> I added it to the Jan CF (https://commitfest.postgresql.org/31/2922/).\n> \n> PFA a rebased version against latest head.\n\nThanks for working on this.\n\n- pgstat_progress_update_param(PROGRESS_ANALYZE_CURRENT_CHILD_TABLE_RELID, \n- RelationGetRelid(childrel)); \n- \n\nWhy is this removed ?\n\n+ * returningCols of it's base table's RTE.\n\nits (possessive) not it's (\"it is\")\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 1 Mar 2021 16:59:17 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "On Mon, 2021-03-01 at 16:59 -0600, Justin Pryzby wrote:\r\n> - pgstat_progress_update_param(PROGRESS_ANALYZE_CURRENT_CHILD_TABLE_RELID, \r\n> \r\n> - RelationGetRelid(childrel)); \r\n> - \r\n> \r\n> Why is this removed ?\r\n\r\nMm, both of those analyze.c changes seem suspect. Possibly a mismerge\r\nfrom the zedstore branch; let me double-check.\r\n\r\n> \r\n> + * returningCols of it's base table's RTE.\r\n> \r\n> its (possessive) not it's (\"it is\")\r\n\r\nThanks, I'll fix this at the same time.\r\n\r\n--Jacob\r\n",
"msg_date": "Mon, 1 Mar 2021 23:13:21 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "On Mon, 2021-03-01 at 23:13 +0000, Jacob Champion wrote:\r\n> On Mon, 2021-03-01 at 16:59 -0600, Justin Pryzby wrote:\r\n> > Why is this removed ?\r\n> \r\n> Mm, both of those analyze.c changes seem suspect. Possibly a mismerge\r\n> from the zedstore branch; let me double-check.\r\n> \r\n> > its (possessive) not it's (\"it is\")\r\n> \r\n> Thanks, I'll fix this at the same time.\r\n\r\nBoth fixed in v3; thanks for the catch!\r\n\r\n--Jacob",
"msg_date": "Tue, 2 Mar 2021 17:10:13 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "Hi,\n\n+ /* Make sure the the new slot is not dependent on the original tuple */\n\nThere is duplicate 'the'.\n\nFor neededColumnContextWalker(),\n\n+ else if(var->varattno == 0) {\n\nI think the if following the else is not needed - I assume var->varattno\nwouldn't be negative.\nSimilar comment for extract_scan_columns().\n\n+ while ((col = bms_next_member(parent_cols, col)) >= 0)\n+ {\n+ Var *var = (Var *) list_nth(translated_vars, col - 1);\n\nIf col is 0, do you still want to call list_nth() ?\n\nCheers\n\nOn Tue, Mar 2, 2021 at 9:10 AM Jacob Champion <pchampion@vmware.com> wrote:\n\n> On Mon, 2021-03-01 at 23:13 +0000, Jacob Champion wrote:\n> > On Mon, 2021-03-01 at 16:59 -0600, Justin Pryzby wrote:\n> > > Why is this removed ?\n> >\n> > Mm, both of those analyze.c changes seem suspect. Possibly a mismerge\n> > from the zedstore branch; let me double-check.\n> >\n> > > its (possessive) not it's (\"it is\")\n> >\n> > Thanks, I'll fix this at the same time.\n>\n> Both fixed in v3; thanks for the catch!\n>\n> --Jacob\n>\n\nHi,+ /* Make sure the the new slot is not dependent on the original tuple */There is duplicate 'the'.For neededColumnContextWalker(),+ else if(var->varattno == 0) {I think the if following the else is not needed - I assume var->varattno wouldn't be negative.Similar comment for extract_scan_columns().+ while ((col = bms_next_member(parent_cols, col)) >= 0)+ {+ Var *var = (Var *) list_nth(translated_vars, col - 1);If col is 0, do you still want to call list_nth() ?CheersOn Tue, Mar 2, 2021 at 9:10 AM Jacob Champion <pchampion@vmware.com> wrote:On Mon, 2021-03-01 at 23:13 +0000, Jacob Champion wrote:\n> On Mon, 2021-03-01 at 16:59 -0600, Justin Pryzby wrote:\n> > Why is this removed ?\n> \n> Mm, both of those analyze.c changes seem suspect. 
Possibly a mismerge\n> from the zedstore branch; let me double-check.\n> \n> > its (possessive) not it's (\"it is\")\n> \n> Thanks, I'll fix this at the same time.\n\nBoth fixed in v3; thanks for the catch!\n\n--Jacob",
"msg_date": "Tue, 2 Mar 2021 10:35:06 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
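Zhihong's question about `list_nth(translated_vars, col - 1)` comes down to the shift between 1-based attnums and 0-based list indexes: if the whole-row marker 0 could still appear in the bitmap at that point, `col - 1` would index position -1. A Python model of the quoted loop, with bms_next_member approximated by sorted iteration — the names and data structures are illustrative, not the patch's:

```python
def translate_parent_cols(parent_cols, translated_vars):
    """Map parent attnums to child attnums, modeling the loop quoted above.

    parent_cols     -- set of parent attnums; the whole-row marker 0 is
                       assumed to have been expanded away before this point
    translated_vars -- list where index (attno - 1) holds the child attno,
                       mirroring list_nth(translated_vars, col - 1)
    """
    child_cols = set()
    for col in sorted(parent_cols):        # stands in for bms_next_member
        assert col > 0, "whole-row marker must be expanded before translation"
        child_cols.add(translated_vars[col - 1])
    return child_cols
```

Under that assumption, col is always >= 1 when list_nth runs, which is the invariant Jacob's reply relies on.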
{
"msg_contents": "On Tue, 2021-03-02 at 10:35 -0800, Zhihong Yu wrote:\r\n> Hi,\r\n\r\nThanks for the review!\r\n\r\n> + /* Make sure the the new slot is not dependent on the original tuple */\r\n> \r\n> There is duplicate 'the'.\r\n\r\nThanks, I'll add this for the next batch of updates.\r\n\r\n> For neededColumnContextWalker(),\r\n> \r\n> + else if(var->varattno == 0) {\r\n> \r\n> I think the if following the else is not needed - I assume var->varattno wouldn't be negative.\r\n> Similar comment for extract_scan_columns().\r\n\r\nI think you can have system columns in the tree here -- a common\r\nexample that we run into with the `make check` suite is ctid. (To see\r\nthis, you can change the (> 0) check just above this code into a (!= 0)\r\ncheck, and then take a look at the failing cases in the test suite.)\r\n\r\n> + while ((col = bms_next_member(parent_cols, col)) >= 0)\r\n> + {\r\n> + Var *var = (Var *) list_nth(translated_vars, col - 1);\r\n> \r\n> If col is 0, do you still want to call list_nth() ?\r\n\r\nThe (col == 0) case is handled just above this, with\r\ncontains_whole_row_col() and get_ordinal_attnos() replacing any zero\r\ncolumns with the entire user-column range. If one of those functions\r\nfails to do its job due to programmer error, we'll assert in the call\r\nto list_nth(), and I think that's what we want.\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 3 Mar 2021 18:05:17 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
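Jacob's point about system columns rests on PostgreSQL's Var.varattno conventions: negative attnums denote system columns (ctid, for example, is -1 in PostgreSQL), 0 denotes a whole-row reference, and positive attnums are ordinary user columns. A tiny illustrative sketch of that classification — the function is invented here, not taken from the patch:

```python
def classify_attno(varattno):
    """Hedged model of the Var.varattno conventions discussed above."""
    if varattno < 0:
        return "system"     # e.g. ctid; why (> 0) vs (!= 0) matters above
    if varattno == 0:
        return "whole-row"  # reference to the entire tuple
    return "user"           # ordinary data column
```

This is why the walker cannot simply assume varattno is non-negative: a `make check` query touching ctid really does produce a negative attnum.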
{
"msg_contents": "Hi Soumyadeep, Jacob,\n\n> Thanks for the review!\n\nI came across this patch and noticed that it rotted a little, especially\nafter removing inheritance_planner() in 86dc9005. I managed to resolve the\nconflicts on current `master` (eb89cb43), see the attached patch. The code\ncompiles but doesn't pass the tests. I'm currently in the process of\nreviewing it and didn't figure out what the issue is yet. Just wanted to\nlet you know. I also believe changing the patch status to \"Waiting on\nAuthor\" would be appropriate.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 1 Jun 2021 15:38:33 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "On Tue, 2021-06-01 at 15:38 +0300, Aleksander Alekseev wrote:\r\n> I came across this patch and noticed that it rotted a little,\r\n> especially after removing inheritance_planner() in 86dc9005. I\r\n> managed to resolve the conflicts on current `master` (eb89cb43), see\r\n> the attached patch. The code compiles but doesn't pass the tests. I'm\r\n> currently in the process of reviewing it and didn't figure out what\r\n> the issue is yet. Just wanted to let you know.\r\n\r\nHi Alexsander, thanks!\r\n\r\nIn your patch's transformInsertStmt(), I see what I think is an\r\nextraneous call to transformReturningList() right before the ON\r\nCONFLICT processing. That call is already done later in the function,\r\nduring the RETURNING processing (this change came in with 6c0373ab77).\r\nOther than that, your rebased patch looks the same as mine.\r\n\r\n> I also believe changing the patch status to \"Waiting on Author\"\r\n> would be appropriate.\r\n\r\nAgreed. I'm going to double-check with Deep that the new calls\r\nto table_tuple_fetch_row_version() should be projecting the full row,\r\nthen post an updated patch some time next week.\r\n\r\nThanks again!\r\n--Jacob\r\n",
"msg_date": "Fri, 4 Jun 2021 23:14:21 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 4:14 PM Jacob Champion <pchampion@vmware.com> wrote:\n\n> On Tue, 2021-06-01 at 15:38 +0300, Aleksander Alekseev wrote:\n> > I came across this patch and noticed that it rotted a little,\n> > especially after removing inheritance_planner() in 86dc9005. I\n> > managed to resolve the conflicts on current `master` (eb89cb43), see\n> > the attached patch. The code compiles but doesn't pass the tests. I'm\n> > currently in the process of reviewing it and didn't figure out what\n> > the issue is yet. Just wanted to let you know.\n>\n> Hi Alexsander, thanks!\n>\n> In your patch's transformInsertStmt(), I see what I think is an\n> extraneous call to transformReturningList() right before the ON\n> CONFLICT processing. That call is already done later in the function,\n> during the RETURNING processing (this change came in with 6c0373ab77).\n> Other than that, your rebased patch looks the same as mine.\n>\n> > I also believe changing the patch status to \"Waiting on Author\"\n> > would be appropriate.\n>\n> Agreed. I'm going to double-check with Deep that the new calls\n> to table_tuple_fetch_row_version() should be projecting the full row,\n> then post an updated patch some time next week.\n>\n> Thanks again!\n> --Jacob\n>\nHi,\n\n+ return\nrelation->rd_tableam->scan_begin_with_column_projection(relation, snapshot,\n0, NULL,\n+ parallel_scan, flags, proj);\n\nscan_begin_with_column_projection() adds a parameter to scan_begin().\nCan scan_begin() be enhanced with this projection parameter ?\nOtherwise in the future we may\nhave scan_begin_with_column_projection_with_x_y ...\n\n+ /* Make sure the the new slot is not dependent on the original tuple */\n\nDouble 'the' in the comment. More than one place with duplicate 'the' in\nthe patch.\n\n+typedef struct neededColumnContext\n+{\n+ Bitmapset **mask;\n+ int n;\n\nShould field n be named ncol ? 'n' seems too general.\n\n+ * TODO: Remove this hack!! 
This should be done once at the start\nof the tid scan.\n\nWould the above be addressed in the next patch ?\n\nToward the end of extract_scan_columns():\n\n+ bms_free(rte->scanCols);\n+ rte->scanCols = bms_make_singleton(0);\n+ break;\n\nShould 'goto outer;' be in place of 'break;' (since rte->scanCols has been\nassigned for whole-row) ?\n\nCheers",
"msg_date": "Sat, 5 Jun 2021 09:47:45 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "On Sat, 2021-06-05 at 09:47 -0700, Zhihong Yu wrote:\r\n> On Fri, Jun 4, 2021 at 4:14 PM Jacob Champion <pchampion@vmware.com> wrote:\r\n> > Agreed. I'm going to double-check with Deep that the new calls\r\n> > to table_tuple_fetch_row_version() should be projecting the full row,\r\n> > then post an updated patch some time next week.\r\n\r\n(The discussions over the fallout of the inheritance_planner fallout\r\nare still going, but in the meantime here's an updated v4 that builds\r\nand passes `make check`.)\r\n\r\n> + return relation->rd_tableam->scan_begin_with_column_projection(relation, snapshot, 0, NULL,\r\n> + parallel_scan, flags, proj);\r\n> \r\n> scan_begin_with_column_projection() adds a parameter to scan_begin().\r\n> Can scan_begin() be enhanced with this projection parameter ?\r\n> Otherwise in the future we may have scan_begin_with_column_projection_with_x_y ...\r\n\r\nMaybe; I agree that would match the current \"extension\" APIs a little\r\nbetter. I'll let Deep and/or Ashwin chime in on why this design was\r\nchosen.\r\n\r\n> + /* Make sure the the new slot is not dependent on the original tuple */\r\n> \r\n> Double 'the' in the comment. More than one place with duplicate 'the'\r\n> in the patch.\r\n\r\nFixed.\r\n\r\n> +typedef struct neededColumnContext\r\n> +{\r\n> + Bitmapset **mask;\r\n> + int n;\r\n> \r\n> Should field n be named ncol ? 'n' seems too general.\r\n\r\nAgreed; changed to ncol.\r\n\r\n> + * TODO: Remove this hack!! This should be done once at the start of the tid scan.\r\n> \r\n> Would the above be addressed in the next patch ?\r\n\r\nI have not had time to get to this in v4, sorry.\r\n\r\n> Toward the end of extract_scan_columns():\r\n> \r\n> + bms_free(rte->scanCols);\r\n> + rte->scanCols = bms_make_singleton(0);\r\n> + break;\r\n> \r\n> Should 'goto outer;' be in place of 'break;' (since rte->scanCols has\r\n> been assigned for whole-row) ?\r\n\r\nAgreed and fixed. Thank you!\r\n\r\n--Jacob",
"msg_date": "Fri, 11 Jun 2021 23:53:23 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "Hi all,\r\n\r\nAshwin, Deep, and I were discussing this patch today. We agree that\r\nit's fairly difficult to review in its current state, and the lack of a\r\nconcrete implementation of the new API isn't helping. (A big chunk of\r\nthe context for the patch exists in the zedstore megathread, which\r\nisn't exactly light reading.)\r\n\r\nWe'd like to improve that, but with current time constraints, we won't\r\nbe able to get to it for the July commitfest. So I'll mark this patch\r\nWithdrawn for now, to reduce the review load. (239 Needs Review and\r\ncounting!) We hope to revisit in the September timeframe.\r\n\r\nThanks for all the reviews and input!\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 30 Jun 2021 23:42:19 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 7:42 AM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> Hi all,\n>\n> Ashwin, Deep, and I were discussing this patch today. We agree that\n> it's fairly difficult to review in its current state, and the lack of a\n> concrete implementation of the new API isn't helping. (A big chunk of\n> the context for the patch exists in the zedstore megathread, which\n> isn't exactly light reading.)\n>\n> We'd like to improve that, but with current time constraints, we won't\n> be able to get to it for the July commitfest. So I'll mark this patch\n> Withdrawn for now, to reduce the review load. (239 Needs Review and\n> counting!) We hope to revisit in the September timeframe.\n>\n> Thanks for all the reviews and input!\n>\n> --Jacob\n\nSince this thread is mentioned again, I want to share some thoughts on the lazy\nmaterial part[1]. If we go with that direction, we may not need the AM for pass\nprojection to heapam. we just need something like AM like\n\nDatum fetch_column_value_for_column(.., rowid, colid);\n\nI worked in this direction with a small step and stopped quickly because of some\nother reasons. I would like to share my work so far here [2], However\nlazy material\nis not always good.\n\n[1] https://stratos.seas.harvard.edu/files/stratos/files/columnstoresfntdbs.pdf\n[2] https://github.com/zhihuiFan/postgres/tree/lazy_material_v2\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n",
"msg_date": "Wed, 18 Aug 2021 20:56:54 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table AM modifications to accept column projection lists"
}
]